Sunday, 4 February 2018

Using Blogger as a source of website content

For the last few years I have been webmaster for a local residents' association.

I put together a basic website using what I knew, which was HTML and SHTML, and I created and maintained all the content (about 250 entries altogether) using that well-known development tool, Notepad.

Now it is time to hand the task over to others.

I realised a while ago that I had to find a better method of content editing, for speed, accuracy, and consistency, and also so that I could share the load with others with less technical skills.

Most of the entries are very short; a lot of them involve images; and many contain links to uploaded PDFs. Entries usually start off as 'highlights', linked to from the home page and notified to members via email and Twitter. Over time they lose their 'highlight' status; finally they are moved to the archive page.

I had a play with MODX, which we use for the i-Community website, but it's really far too complicated for the purpose (it's unnecessarily complicated for i-Community, really).

I had spent a bit of time in late 2017 learning some PHP (forced into it by the sudden demise of the Twitter feed mechanism on the i-Community home page) and a bit of JavaScript (in order to implement an urgently needed non-Flash MP3 player).

It occurred to me that I might solve the content editing problem via a Blogger blog like this one, pulling the content into the website via the RSS feed that is automatically provided for any Blogger blog.

Here's an example blog post (for a 'highlight'):
Here's its rendition on the website:

And here's the link on the home page:

The basics turned out to be pretty easy to do, using an open-source tool called Feed2JS. This uses a second open-source tool, Magpie, to read the RSS feed; the shipped version of Feed2JS, as the name suggests, then exposes the content via JavaScript, with various formatting options.
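Stripped of the caching, encoding handling and formatting options that Feed2JS/Magpie provide, the core idea is only a few lines of PHP. This is my own illustrative sketch (feed content invented), not Feed2JS code:

```php
<?php
// Not Feed2JS itself - just a minimal sketch of the idea: read an
// RSS feed and turn each item into an HTML list entry. An inline
// feed is used here so the example is self-contained.
$rss = <<<'XML'
<?xml version="1.0"?>
<rss version="2.0"><channel>
  <item><title>AGM notice</title><link>https://example.org/agm</link></item>
  <item><title>Summer fete</title><link>https://example.org/fete</link></item>
</channel></rss>
XML;

$feed = simplexml_load_string($rss);
$html = '';
foreach ($feed->channel->item as $item) {
    $html .= sprintf("<li><a href=\"%s\">%s</a></li>\n",
        htmlspecialchars((string) $item->link),
        htmlspecialchars((string) $item->title));
}
echo $html;
```

In the real thing the feed is of course fetched over HTTP rather than embedded, which is exactly where Magpie's caching earns its keep.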

I separated the content into the many necessary categories (News 'highlights', older News, News archive, Local Events, etc.) using Blogger's tagging mechanism, Labels (each of which is available as an individual RSS feed), and automated the generation of the Feed2JS scripts (one per content category) via batch files.
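What makes the one-feed-per-category approach work is that each Blogger Label has its own feed at a predictable URL, so the generated scripts only need to vary the label name. A hypothetical helper (blog and label names are placeholders of mine) would look like this:

```php
<?php
// Each Blogger Label is exposed as its own feed at a predictable
// address; ?alt=rss asks for RSS rather than the default Atom.
// Blog and label names below are invented for illustration.
function label_feed_url(string $blog, string $label): string
{
    return sprintf(
        'https://%s.blogspot.com/feeds/posts/default/-/%s?alt=rss',
        $blog,
        rawurlencode($label)   // labels may contain spaces
    );
}

echo label_feed_url('myblog', 'News archive'), "\n";
// https://myblog.blogspot.com/feeds/posts/default/-/News%20archive?alt=rss
```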

So far so good.

The three remaining problems were: migrating all the existing content; implementing the 'highlight' links on the home page; and automating the 'last amended' date on each page.

Migration was a long and boring task. Eventually (and absolutely no thanks to Google's Blogger documentation or forums) I found some sensible import XML examples and succeeded in migrating the content, including an appropriate publication date for each entry, via a combination of manual editing, much conversion of relative paths to absolute ones, SQL Server CSV-to-XML conversion, Blogger import (not helped by some ridiculous daily import attempt limits), and manual application of Labels.

Here's an example of the import XML (containing two entries).
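For those who can't see the example: a Blogger import file is an Atom feed in which each post is an entry carrying a special "kind" category. A sketch along these lines (titles, dates and content invented by me, not taken from the actual file):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <entry>
    <category scheme="http://schemas.google.com/g/2005#kind"
              term="http://schemas.google.com/blogger/2008/kind#post"/>
    <title type="text">AGM notice</title>
    <published>2017-03-01T09:00:00Z</published>
    <content type="html">&lt;p&gt;The AGM will be held...&lt;/p&gt;</content>
  </entry>
  <entry>
    <category scheme="http://schemas.google.com/g/2005#kind"
              term="http://schemas.google.com/blogger/2008/kind#post"/>
    <title type="text">Summer fete</title>
    <published>2017-06-10T09:00:00Z</published>
    <content type="html">&lt;p&gt;Join us on the green...&lt;/p&gt;</content>
  </entry>
</feed>
```

Note that the HTML body has to be entity-escaped inside the content element, which accounts for a fair amount of the manual editing mentioned above.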

The other two requirements turned out to be easiest to achieve by installing my own instance of Feed2JS. (I did have to put a redirect in place, effectively interpreting the Blogger site as a subfolder of the main website, as otherwise the website's PHP server was not prepared to access the RSS feed, for completely appropriate security reasons.)

I eventually used two much-simplified copies of the Feed2JS code, with no JavaScript; en route I took a considerable liking to PHP - a very friendly and easily learned language, it seems to me (I particularly admire the array handling).
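To give a flavour of the array handling I mean, here's a small sketch of my own (data invented): filtering and transforming a list of feed items in a couple of lines.

```php
<?php
// A small taste of PHP's array handling: filter, transform and
// re-index a list of feed items without any explicit loops.
$items = [
    ['title' => 'AGM notice',  'label' => 'highlight'],
    ['title' => 'Old minutes', 'label' => 'archive'],
    ['title' => 'Summer fete', 'label' => 'highlight'],
];

// Keep only the highlights...
$highlights = array_filter($items, fn($i) => $i['label'] === 'highlight');

// ...pull out just their titles, and re-index from zero.
$titles = array_values(array_map(fn($i) => $i['title'], $highlights));

echo implode(', ', $titles), "\n"; // AGM notice, Summer fete
```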

The only remaining problem was the need to wait for Magpie's hour-long cache period to elapse before blog changes were reflected on the main website. In practice we just wait, although occasionally I do temporarily set the cache period to 1 minute - this has pretty awful effects on website performance, which is otherwise acceptable, if not as fast as the pre-blog version.
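For reference, Magpie's cache behaviour is controlled by constants defined before the feed is fetched - the constant names are Magpie's own; the values here are illustrative:

```php
<?php
// MagpieRSS reads its cache settings from constants that must be
// defined before fetch_rss() is called. Values are illustrative.
define('MAGPIE_CACHE_ON', true);       // cache fetched feeds at all
define('MAGPIE_CACHE_DIR', './cache'); // where cached feeds live
define('MAGPIE_CACHE_AGE', 60 * 60);   // normal: one hour, in seconds
// define('MAGPIE_CACHE_AGE', 60);     // temporary: one minute - slow!
```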

Finally I created a couple of little PHP utilities permitting the upload of PDFs and full size images to the website without FTP. (I subsequently had problems with these being used for malware purposes, which I should have anticipated - I have now moved them into a password-protected directory.)
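The utilities themselves are trivial; the lesson was the checking. A hypothetical sketch of the kind of extension whitelist test I mean (the function name and list are my own invention, not the actual code):

```php
<?php
// Never trust a client-supplied filename: accept only a short
// whitelist of extensions, compared case-insensitively.
// (Hypothetical helper, illustrating the idea only.)
function is_allowed_upload(string $filename): bool
{
    $allowed = ['pdf', 'jpg', 'jpeg', 'png', 'gif'];
    $ext = strtolower(pathinfo($filename, PATHINFO_EXTENSION));
    return in_array($ext, $allowed, true);
}

var_dump(is_allowed_upload('newsletter.pdf')); // bool(true)
var_dump(is_allowed_upload('evil.php'));       // bool(false)
```

Of course an extension check alone is not enough to stop abuse by anyone who can reach the page - hence the password-protected directory.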

Anyway it's all had the required effect: editing is now much easier, and my colleagues are happy to share the load.

Saturday, 21 October 2017

Hints and Tips update (aka, SQL on #ibmi and #sqlserver: divided by a common language)

Decided to take a nice 'split string' table function I'd borrowed from some kind person on the internet for the Coordinate My Care data warehouse (SQL Server), and use it as a basis for a similar function to split multi-value attributes on IBM i, e.g. list of special authorities on a user profile. Cue a frustrating, if eventually successful, afternoon.

I really don't like #ibmi table functions: for starters, why can't I INSERT directly into the table to be returned, as in #sqlserver?

However I did learn a few useful things that are worth sharing I think:

#ibmi Hints and Tips #18: you can create or replace a table in QTEMP directly from SQL, e.g.:
CREATE OR REPLACE TABLE QTEMP.MYWORK AS (SELECT * FROM MYLIB.MYTABLE) WITH DATA
(You can then use the table absolutely normally as QTEMP.[table].)

#ibmi Hints and Tips #19:
FOR var1 AS cur1 CURSOR FOR [SELECT statement] DO
  [whatever, using column names direct from the result set];
END FOR;
(I don't know whether you can do this in SQL Server, but I am definitely going to investigate.)

#ibmi Hints and Tips #20: table function usage - especially note the final correlation name (x here):
SELECT * FROM TABLE(MyTableFunction(parm1, ...)) x
(I don't like this super-complicated syntax either!)

Tuesday, 25 July 2017

Making what you have work better: IBM DB2 for i SQL Performance Monitoring, Analysis and Tuning Workshop, IBM South Bank, 14-16 November 2017: not to be missed!

Mike Cain, of the IBM Labs, Rochester, Minnesota, a long term good friend of i-Community, will be coming to IBM South Bank on the 14th, 15th and 16th November to run his fabulous “IBM DB2 for i SQL Performance Monitoring, Analysis and Tuning Workshop”.

This workshop will ensure that staff using SQL understand the impact of doing it right (and wrong), and learn the best-practice methodology for using SQL and the supporting tools to write efficient, performant code.

I attended this course myself in 2009 and cannot praise it too highly.

The agenda includes:
• Introduction to the DB2 for i Query Optimizer and Database Engine
• Database engine methods for data access and data processing
• Indexing and statistics strategies
• SQL application design and programming best practices
• Behaviour of static SQL and dynamic SQL
• Best practices for popular SQL interfaces, such as CLI, ODBC and JDBC
• Parallel database processing - DB2 Symmetric Multiprocessing (SMP)
• State-of-the-art tools, including System i Navigator Performance Centre

If you are interested, drop me an email at and I will put you in touch with the course organisers.

Sunday, 23 April 2017

Still here and still busy!

Apologies for yet another big gap in posting. Again both work and domestic matters have got in the way, but I am looking forward to a less manic period when some of the blog posts I have in my head might make it here.

Sorry not to have followed up on the 'auction' post promised last time but I decided it might not be a public domain matter.

Saturday, 9 July 2016

Quick update July 2015 to July 2016

Apologies for the complete silence - since my last post I have been busier than I ever remember.

Highlights of the last year (well, highlights in retrospect, in some cases):

  • Fantastic trip to Brisbane (son's wedding), Sydney, and San Francisco last July (for the wonderful SF public transport system, with its spot-on approach to customer service, see right)
  • Yet another excellent i-Community Rochester trip, September
  • Successful go-live of POWER 8/IBM i V7R2/Flash storage at one of my insurance customers, November
  • Magna Carta 800th anniversary concert, November
  • Go-live of new Coordinate My Care system, also in November, after an 18-month procurement process and an 8-month implementation project (shouldn't it be the other way round?, I hear you say)
  • Demonstration of CMC 2-way interoperability proof of concept at eHealth Week at Olympia, April 2016
  • The Bath unexploded bomb (left under future school playground by Baedeker raids in 1942)

Next blog post will cover something rather alarming (broadly IT related) that my husband encountered when endeavouring to bid at an auction.

Wednesday, 25 June 2014

Calling PASE functionality from ILE #ibmi

I have recently carried out a proof of concept in this area - see this link for details and example ILE and PASE (AIX) source code. Any questions, drop me a note via my website.
UPDATE 21.10.17: the example code and PowerPoint are now available here: