Making URIs Cool Again
You may have seen our recent blog post about using the web as it was intended. We spoke about Cool URIs (the ones that don't change) and gave an example of recent development on the PHE Coronavirus tracking dashboard that inadvertently removed some features people were relying on.
I see this as ignoring the needs of a significant subset of the users. Prior releases of the dashboard made the raw data available at a fixed URL. This meant that people could easily build tools that imported and processed this data for their own needs. Our own dashboard was an example of exactly this. The new dashboard is undeniably a much better experience for interactive users. The batch or system users, however, are sadly left with nothing.
Thankfully, the PHE team have published their code under an open source license, so I was able to reverse-engineer the processing they do to download the data. My process, sketched in code after the list below, will:
- Determine the latest file to use by fetching the XML listing and finding the most recent JSON file
- Store that JSON file, unaltered, in the data subdirectory
- Create a couple of CSV files extracting the lab-confirmed cases and hospital deaths into the same format the dashboard app offers as a JavaScript-driven download
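Here is a minimal sketch of those three steps in Python. The listing URL, blob layout, and JSON field names are assumptions for illustration; the real endpoints and schema live in the PHE team's open-source code.

```python
import csv
import json
import xml.etree.ElementTree as ET
from pathlib import Path
from urllib.request import urlopen

BLOB_BASE = "https://publicdashacc.blob.core.windows.net/publicdata"  # assumed endpoint
LISTING_URL = BLOB_BASE + "?restype=container&comp=list"              # assumed XML listing

def latest_json_name():
    """Step 1: parse the XML blob listing and return the newest JSON file name."""
    with urlopen(LISTING_URL) as resp:
        tree = ET.fromstring(resp.read())
    names = [el.text for el in tree.iter("Name")
             if el.text and el.text.endswith(".json")]
    return max(names)  # assumes the names embed a sortable timestamp

def write_csv(rows, path, fields):
    """Step 3 helper: flatten a list of dicts into a CSV file."""
    with open(path, "w", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=fields)
        writer.writeheader()
        writer.writerows(rows)

def main():
    name = latest_json_name()
    raw = urlopen(f"{BLOB_BASE}/{name}").read()

    # Step 2: store the JSON file unaltered in the data subdirectory.
    Path("data").mkdir(exist_ok=True)
    Path("data", name).write_bytes(raw)

    # Step 3: extract the two series. These keys and the date/value shape
    # are illustrative, not the dashboard's actual schema.
    data = json.loads(raw)
    write_csv(data.get("dailyConfirmedCases", []),
              "data/coronavirus-cases.csv", ["date", "value"])
    write_csv(data.get("dailyDeaths", []),
              "data/coronavirus-deaths.csv", ["date", "value"])

if __name__ == "__main__":
    main()
```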
I've put these live in the ODI Leeds Coronavirus Data GitHub project. The code that I'm using to locate and process the data is also available there. I've set it to automatically run every hour using GitHub Actions, so it should be pretty much up to date. Any changes are checked back into the repository, so they become immediately available for use. The data files live in the data folder of the Coronavirus-data repository.
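For the scheduling, a GitHub Actions workflow along these lines would do the job. The workflow name, the script name (`fetch_data.py`), and the commit details are illustrative, not the repository's actual configuration.

```yaml
name: Update coronavirus data
on:
  schedule:
    - cron: "0 * * * *"   # run at the top of every hour
jobs:
  update:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Fetch and process the latest data
        run: python3 fetch_data.py
      - name: Commit any changes back to the repository
        run: |
          git config user.name "github-actions"
          git config user.email "actions@github.com"
          git add data
          git diff --cached --quiet || git commit -m "Hourly data update"
          git push
```

The `git diff --cached --quiet` guard means the commit step only fires when the hourly run actually produced new data, keeping the history free of empty commits.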
P.S. I welcome feedback and contributions to this code. If you can, GitHub issues or Pull Requests are ideal!