All posts by KingJ


Optimising WordPress for Performance

WordPress is a very powerful yet accessible platform. It powers many blogs and websites, including this one. However, many WordPress sites can often be slow to load and navigate. For a user, this is frustrating and will often result in them leaving the site and going elsewhere.

This isn’t WordPress’s fault per se; many optimisations are simply not possible out of the box because of the wide variety of hosting environments WordPress must support.

In this post, I’m going to explain the optimisations I’ve made to make this site load (reasonably) fast. It’s not the fastest site out there, but compared to an un-optimised version, it is several times faster. Some of the changes are more complicated than others and some will require more control over your hosting environment than you have available. However, even implementing a few of the simpler changes will still provide a performance boost.

One of the key methods for increasing performance is caching. Why have the server do the same work several times to produce the same output for several users? Why not take the output that was generated for one user and serve it to all of them, so that the server only has to do the work of producing the page once?

Another key method is to decrease the delay in loading resources, including those which have been cached. It is several orders of magnitude faster to load something from memory than from disk. Memory is cheap and plentiful these days, so it makes sense to store as much as possible in memory.

Finally, parallelisation. As well as lots of memory, servers these days also have a number of processor cores. If tasks can be performed in parallel across those cores, the overall processing time can be reduced.

Of course, these optimisation techniques are some of the most difficult things to achieve in computing. Thankfully, many others have put a lot of hard work into making tools and packages which implement them, so with these methods in mind, here are the tools and packages responsible for serving up this site fast.

This post won’t delve into the exact installation details of each, as they will vary depending on your hosting environment and with time. However, a little bit of Googling will often reveal the instructions for your specific situation. Hopefully this post can act as an overall guide on how to optimise performance, and also why each change helps.

Server Software


nginx

nginx is a web server, similar to Apache. Unlike Apache, however, nginx is designed to be a high-performance, lightweight web server. That isn’t to say Apache is bad, but nginx tends to perform better and is more scalable.


PHP-FPM

PHP, the scripting language used by WordPress, can be run in a number of ways – primarily as CGI, FastCGI or FPM. FastCGI and FPM both spawn pools of processes to run PHP code, improving performance. An important difference, however, is that FPM allows a single cache to be shared between all of the processes – there’s little point caching something if it’s not going to be shared, after all.

The setup of php-fpm is a bit more involved than FastCGI, but the shared cache is definitely worth it. Additionally, there are two ways for the web server to communicate with php-fpm – over a TCP socket or a Unix socket. The latter is recommended as latency (and thus overall processing time) is reduced, but it can be tricky to get working in a secure fashion.
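As a sketch of how the two options differ in nginx, a PHP location block talking to php-fpm might look like the following (the socket path and port are assumptions – match them to the `listen` directive in your own php-fpm pool configuration):

```nginx
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;

    # Unix socket - lower latency than the TCP alternative below
    fastcgi_pass unix:/var/run/php-fpm.sock;

    # TCP socket alternative:
    # fastcgi_pass 127.0.0.1:9000;
}
```

If you use the Unix socket, make sure the web server's user has permission to read and write it – that's usually where the "secure fashion" difficulties come from.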

Caching & Memory Storage

While nginx and php-fpm will help improve your performance in general, the real performance boost comes from caching and storing resources in memory.

OpCode Caching

PHP is an interpreted language. Thus, when a PHP script is run, the interpreter will convert the instructions to lower level code which is then run. The advantage of this is that new code can be deployed and run instantly, without the need to go through a compilation process. However, it means that the code will need to be interpreted and converted every time it is run.

It is possible, however, to cache the output of the interpretation and reuse it the next time the code is run, saving time on every subsequent request. One popular way to achieve this in PHP was via APC (the Alternative PHP Cache). However, with the release of PHP 5.5, the ZendOpcache is now part of PHP’s standard distribution and is the recommended cache. Enabling it in a PHP 5.5 installation is extremely simple and does not require any changes to existing PHP scripts – as an OpCode cache, it’ll work against any PHP script out of the box. OpCode caches are a great way to get a quick, free performance boost without making any major changes.
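Enabling it comes down to a few php.ini directives – a sketch of a reasonable starting point (the values are assumptions to tune for your own site):

```ini
; Load the extension (only needed if your distribution's packaging
; doesn't already do this)
zend_extension=opcache.so

; Turn the OpCode cache on
opcache.enable=1

; Memory reserved for cached compiled scripts, in megabytes
opcache.memory_consumption=128

; Maximum number of scripts that can be cached
opcache.max_accelerated_files=4000

; How often (in seconds) to re-check scripts on disk for changes
opcache.revalidate_freq=60
```

After changing these, restart php-fpm for them to take effect.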

Object Caching (Userland Caching)

One other feature of the aforementioned APC cache was its ability to cache static objects in memory, such as images and Javascript in addition to OpCodes. However, the preferred ZendOpcache only acts as an OpCode cache and thus does not provide any Object/Userland Caching.

The good news, however, is that a userland-only caching version of APC exists, called APCu. Whilst not part of the standard PHP distribution, it is available through most package managers and also via PECL.

However, Object/Userland Caching will not work unless your PHP script specifically makes use of it. More on how to get WordPress to use your shiny new cache later on.


Minification

If you look at the HTML source for most sites, you’ll often see a nicely laid out document, with sensible human-readable formatting and naming. But why does your browser need to see pretty indentation? Everything could be on one line and still be perfectly parsable by a browser, but then the markup wouldn’t be readable to the designer who has to write the page.

It might seem like a very small optimisation to shorten names, cut out whitespace and so on, but “minified” pages and Javascript libraries can often cut down the overall size of a file by double-digit percentages – not something to be sniffed at. Smaller files mean faster transfers and more files that can be stored in your memory cache. Just what we want!

As a compromise between human-readability for designers and minified files for parsing by the browser, we can use a minifier to transparently perform the process in between. However, the process isn’t free – your server will of course need to do some work to perform the minification. Thus, to see a performance boost from minification, having both an OpCode and Userland Cache is extremely important.
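To see why this adds up, here’s a toy illustration – not a real minifier, which is far more thorough – that strips comments and leading indentation from a small HTML fragment and compares the byte counts:

```shell
# Write a small, nicely indented HTML fragment
cat > /tmp/page.html <<'EOF'
<!-- main navigation -->
<div class="nav">
    <ul>
        <li><a href="/">Home</a></li>
        <li><a href="/about">About</a></li>
    </ul>
</div>
EOF

# "Minify": drop comments, strip leading whitespace, join all lines
sed -e 's/<!--.*-->//' -e 's/^[[:space:]]*//' /tmp/page.html \
    | tr -d '\n' > /tmp/page.min.html

echo "original: $(wc -c < /tmp/page.html) bytes"
echo "minified: $(wc -c < /tmp/page.min.html) bytes"
```

Even on this tiny fragment the saving is a double-digit percentage; across a whole page of markup and Javascript libraries, it adds up quickly.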

How do you perform the minification though? As before, more on this later.

Bringing it all together – adding support to WordPress

Several of the aforementioned features, such as Userland Caching, need the PHP script to be specifically adapted to take advantage of them. Without support in the PHP script, they won’t be utilised and no performance benefit will be realised.

WordPress by default does not provide support for many of these features as it needs to ensure maximum compatibility across a wide range of web hosting platforms. There’s nothing wrong with this and it is a good decision for the WordPress team to make.

However, a number of plugins are available which do add this functionality in to WordPress. The most notable is the W3 Total Cache. This plugin adds a huge number of caching options to WordPress and is highly configurable to almost any setup, supporting a wide range of backend caches.

If you have a low-traffic blog (such as, sadly, this one), it’s important to remember that caches have a lifetime. If cached resources are not accessed within a certain time, they will be purged, meaning they have to be generated again the next time someone requests them – wiping out any performance benefit you hoped to achieve. Increasing the cache lifetime is not the answer; all resources should have a finite lifetime, as they can and will change!

Instead, you can keep the cache primed and ready by having an automated crawler access all of the relevant resources. W3 Total Cache has an option for this and it should definitely be enabled. Even on a high-traffic site, enabling it ensures that all users receive a consistently quick loading experience.
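If you’d rather prime the cache yourself (or your plugin lacks the option), the same effect can be had from a small shell script that walks your sitemap – a sketch, assuming a standard XML sitemap; the example.com URLs are placeholders:

```shell
# Pull every <loc> URL out of a sitemap. In real use the sitemap would
# come from your site, e.g.: wget -q -O- https://example.com/sitemap.xml
sitemap='<urlset>
<url><loc>https://example.com/</loc></url>
<url><loc>https://example.com/about/</loc></url>
</urlset>'

urls=$(echo "$sitemap" | grep -o '<loc>[^<]*</loc>' \
    | sed -e 's|<loc>||' -e 's|</loc>||')
echo "$urls"

# Fetching each URL once regenerates and re-caches the page:
# for url in $urls; do wget -q -O /dev/null "$url"; done
```

Run from cron just before the cache lifetime expires, this keeps every page warm.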


Verifying the Changes

Now that you’ve implemented some or all of the above methods, how do you actually verify it’s working and that all your effort wasn’t for nothing?

Cache Statistics

Both the ZendOpcache and APCu offer the ability to see statistics relating to cache utilisation and also the contents of the cache. This helps you to verify that the cache is being used and that it is large enough to contain all of the content you wish to cache.

For ZendOpcache, a number of scripts exist to check the cache status. I use opcache-status as it provides a single script that presents all of the relevant information in a clean, accessible format, though other scripts are available.

For APCu, a script is distributed with the installation which offers similar functionality. Check your distribution’s package as to where it might be, but under Ubuntu it can be found in /usr/share/doc/php-apc/apc.php.

Load Testing

The other way to test if everything is working is to simply load the website and examine the time taken to return the results. You can either use your browser’s developer tools to gather network statistics, or use a tool such as GTmetrix or Load Impact. GTmetrix is great for seeing how individual resources on your site load and what causes delay, whereas Load Impact is better suited for seeing how your site performs when lots and lots of people are accessing it at the same time.
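You can also take quick measurements from the command line. curl’s `-w` option exposes detailed per-request timings – shown here against a local `file://` URL purely so the example runs anywhere; for a real measurement, point it at your own site instead:

```shell
# Create a stand-in page so the example is self-contained
echo '<html>test page</html>' > /tmp/test.html

# -w prints timing variables after the request completes
curl -s -o /dev/null \
     -w 'total: %{time_total}s, first byte: %{time_starttransfer}s\n' \
     file:///tmp/test.html

# Against a live site, repeat a few times so the cached, warmed-up
# case is what you're measuring, e.g.:
#   curl -s -o /dev/null -w 'total: %{time_total}s\n' https://example.com/
```

Comparing the first (cold) request against subsequent ones is a quick way to see your caches kicking in.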

Finally, if you’re minifying content, make sure you view the source of the page to ensure it actually has been minified!

Do keep in mind that if you’re logged in to your blog, W3 Total Cache won’t serve cached or minified pages to you. To see cached and minified pages, either log out or better, open up an Incognito browser window.


There’s a good reason why these changes are not available out of the box – they can be incompatible with a large number of environments and require extra setup, which can put off many potential users. With time though, the results can be impressive, with load times that are several times quicker, and more consistent, than before.

Some of these methods take real work to get going, but I hope they produce a noticeable benefit to your blog or site. Improving performance through these methods not only makes your site load faster, but crucially helps serve your site to more simultaneous users without an increase in hardware.

Good luck and happy optimising!


Running the Dibbler DHCPv6 Client as a Service

In a previous post, I explained how to install and configure the Dibbler DHCPv6 client, which is necessary for using IPv6 with certain providers. Perplexingly, I could only get the client to run interactively and not as a service. This meant that I would need to make sure I ran the client every time the server started, and that I needed to restart it if it ever crashed and exited. Less than ideal.

Running dibbler-client.exe install will install a system service but, as mentioned in my previous post, running dibbler-client.exe start to start the service errors with “Service DHCPv6Client startup failed” and no further information on the console or in the logs.

Delving into the System log using the Windows Event Viewer, however, reveals two very interesting events attributed to the Service Control Manager;

The Microsoft IPv6 Protocol Driver service failed to start due to the following error: 
The system cannot find the file specified.

The Dibbler - a DHCPv6 client service depends on the Microsoft IPv6 Protocol Driver service which failed to start because of the following error: 
The system cannot find the file specified.

So in short, the Dibbler client service can’t start because it depends on the Microsoft IPv6 Protocol Driver, which has also failed to start. A little Googling reveals this might be caused by running Hyper-V (which I am), but with no solid confirmation. There is also no information as to why the Microsoft IPv6 Protocol Driver fails to start. Considering however that the Dibbler client runs fine interactively, this suggests that it isn’t really dependent on the Microsoft IPv6 Protocol Driver. Therefore, if the service dependency is removed, the Dibbler client service should run.

Removing the dependency isn’t too complicated, but does require editing the registry. Open the Registry Editor by running regedit and browse to HKLM\SYSTEM\CurrentControlSet\Services\DHCPv6Client. Edit the DependOnService value so that it only contains “winmgmt” and does not include “tcpip6”.
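If you prefer the command line, the same edit can be made with reg.exe from an elevated Command Prompt – a sketch; back up the key first, as this overwrites the existing value:

```bat
reg add "HKLM\SYSTEM\CurrentControlSet\Services\DHCPv6Client" ^
    /v DependOnService /t REG_MULTI_SZ /d "winmgmt" /f
```

If you ever need multiple entries in a REG_MULTI_SZ value, separate them with \0 in the /d argument.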

Close down the Registry Editor and restart your system. When it comes back up, check the status of the Dibbler client in the service manager; the status should show as “Running”. If it isn’t, check the System Event Log again for any errors. You can also check the dibbler-client.log file in Dibbler’s installation directory for any further errors.

Installing Newer Kernels in Ubuntu

As part of the ongoing TV streaming project I’m working on, I tried a new DVB-T tuner (the August DVB-T210). Whilst it is supported by the Linux TV project, support is only available in kernel version 3.14 and newer.

At the time of writing, kernel 3.14 is so new that it only received a stable release a week ago. Naturally, no Linux distribution is providing it as a default option yet.

But what if you’re dependent on hardware support or a feature that’s only in a newer kernel? In the past, you would have needed to download the source and compile the new kernel from scratch – not an easy process. Luckily, Ubuntu provides pre-compiled, packaged kernels which can be easily installed! These are known as mainline kernels, as opposed to the stable kernel which ships with the current version of Ubuntu.

These kernels are located at http://kernel.ubuntu.com/~kernel-ppa/mainline/. There, find the kernel version you wish to install, in this case v3.14. The word after the version denotes which version of Ubuntu the kernel is built for – at the moment 3.14 is only available for Trusty (Ubuntu 14.04). If you’re not running this version of Ubuntu, you can try installing the kernel, but things will likely break!

Inside the folder for the version you wish to install, you’ll find a multitude of files. First, decide on the variant of kernel you want – in most cases this will be generic. At this point, you should also know what platform you’re running on, most systems now are 64-bit, so you’ll likely want the amd64 platform.

Now, download the following files in to an empty directory;

  • linux-headers-VERSION-VARIANT_VERSION.DATE_PLATFORM.deb
  • linux-headers-VERSION_VERSION.DATE_all.deb
  • linux-image-VERSION-VARIANT_VERSION.DATE_PLATFORM.deb

For example, if I wanted the 3.14 generic 64-bit kernel, I would have downloaded;

  • linux-headers-3.14.0-031400_3.14.0-031400.201403310035_all.deb
  • linux-headers-3.14.0-031400-generic_3.14.0-031400.201403310035_amd64.deb
  • linux-image-3.14.0-031400-generic_3.14.0-031400.201403310035_amd64.deb

Once downloaded, we need to install the new kernel packages. To do this, use the dpkg -i command. If you’ve downloaded the packages to an empty directory, you can simply run dpkg -i * to install all of them, otherwise you’ll need to specify each file after the dpkg -i command. Installing packages requires root privileges, so prefix the dpkg command with sudo if you’re not already root.

dpkg will now install the kernel for you and update your GRUB configuration so that next time you reboot, the new kernel is loaded instead of your current one. Once the installation is complete, just issue a reboot command and wait for your computer to reboot. While your computer is rebooting, pay careful attention to the GRUB screen, it should show your new kernel in the list of available boot options and it should also be the default.

Once the reboot is completed, check that the new kernel is loaded by running uname -a. This will output the current kernel version. For example, after installing the above packages, uname -a outputs;

Linux hostname 3.14.0-031400-generic #201403310035 SMP Mon Mar 31 04:36:23 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

This is all the confirmation we need that the kernel installation was successful. If the output does not match the kernel version, variant and platform you installed before, then the installation likely went wrong or another kernel version has taken precedence.
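Before going to the trouble of installing a mainline build at all, it’s worth checking whether the kernel you’re already running is new enough for the driver you need – a small sketch using sort -V for version-aware comparison:

```shell
required="3.14"                       # version the driver needs
running="$(uname -r | cut -d- -f1)"   # e.g. 3.14.0 from 3.14.0-031400-generic

# sort -V orders version strings numerically; if the required version
# sorts first (or equal), the running kernel is new enough.
oldest="$(printf '%s\n%s\n' "$required" "$running" | sort -V | head -n1)"
if [ "$oldest" = "$required" ]; then
    echo "kernel $running is new enough (needs >= $required)"
else
    echo "kernel $running is too old (needs >= $required)"
fi
```

If the check passes, you can skip the mainline kernel entirely and save yourself the maintenance burden described below.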

Of course, running a non-stable kernel does come with inherent risks. You’re now running a kernel which hasn’t been certified against the packages for that version of Ubuntu and you’re also now responsible for updating the kernel to receive bug and security fixes. Switching to the mainline kernel isn’t a decision that should be taken lightly and you should only do it if there’s a specific feature you need in a newer kernel.


Streaming Live TV using Tvheadend and XBMC on Raspberry Pis

I recently had an interesting request from some relatives – they wanted to be able to watch TV in the kitchen. A simple task I thought, I grabbed an old analogue TV, digital set-top box and internal aerial. Unfortunately, despite the post-switchover boost in transmission power, the signal received was just slightly too weak to be useful, resulting in only a handful of channels reliably working, not to mention how untidy the jumble of equipment looked.

This led to a crazy idea – what if a computer attached to the main aerial, which has perfect reception, received live TV and retransmitted it over the internal network to another computer attached to the TV? It was crazy enough that it might just work.

Since the launch of the Raspberry Pi almost two years ago, it has found its way into a number of scenarios that were never anticipated. It’s been used for everything from controlling advertising boards to small micro-servers, distributed computing and, most importantly for this project, media centres.

Whilst the processing power of the Raspberry Pi is low, the GPU is able to fully decode H.264 video at up to 40Mbit/sec – no easy feat. Due to this, the Raspberry Pi can play back most media flawlessly. The attractive price point ($35), small size, low power requirements and video processing power make it particularly suited to this project.

Software Selection

I then needed to identify two pieces of software to run on the Raspberry Pis – one on the back end to receive and re-transmit the TV signal, and one on the front end to receive and display the re-transmitted signal. For the front end, the long-running XBMC project makes for a fantastic media centre and, most importantly for this project, supports reception from a wide range of PVR back ends. Thus, selecting XBMC for the front end display wouldn’t constrain my choice of back end. As XBMC is just a Linux application, I could theoretically choose any distribution to run it on. However, I instead chose to go with OpenELEC – a stripped-down distribution designed only to run XBMC. It works like an appliance, handling updates both to itself and the underlying Linux system automatically. A downside for tinkerers, but an upside for simplicity. As this system was for a relative, simplicity is key – it reduces the number of calls I get about things not working.

On the back end, there are a number of potential options, however only a few offer support for the Raspberry Pi. These are Tvheadend and VDR. I went with Tvheadend as the setup looked relatively simple. There’s no appliance version of Tvheadend as there was with XBMC and OpenELEC, so I opted to run it on Raspbian, the ‘official’ distribution of Linux for the Raspberry Pi.

Hardware Architecture

Next up, hardware. I had decided to run this project on Raspberry Pis, but other hardware was also required. A wired connection was available next to an aerial point for the back end Raspberry Pi, but the front end Raspberry Pi (in the kitchen) wasn’t anywhere near a wired network connection, and running a new cable wasn’t an option. Instead, I used a spare Edimax EW-7811Un USB adaptor to add WiFi to the front end Raspberry Pi. WiFi adapters on the Raspberry Pi are usually recommended to be run through a powered USB hub because of their power draw, but I found this one worked well connected directly to the on-board USB ports. With no other USB peripherals connected (e.g. keyboard/mouse), the overall power requirements were reduced.

As for receiving the signal, I had an old Freecom DVB-T USB stick, which has Linux support. Unfortunately, the power draw of this was too much for the Raspberry Pi and so I had to run it through a powered hub. Being old, it also does not support the newer DVB-T2 standard, which is used for Freeview HD transmissions. This isn’t a major issue however considering that the front end is an analogue TV and so can’t display HD anyway!


Front End Installation

Next up was setting up everything. OpenELEC was relatively painless, I just needed to write the provided image to an SD card and then go through the initial setup. Easy!

Back End Installation

Tvheadend was more involved however. After writing the Raspbian image to the SD card and going through the initial setup, I needed to add the repository for Tvheadend and install it. Once installed it runs automatically as a background service. Tvheadend is administered entirely through a web interface which runs on port 9981.  To get my channels into Tvheadend I had to do the following;

  1. Browse to Configuration > DVB Inputs > TV Adapters. Select the connected adapter from the list. My Freecom DVB-T USB stick appeared as “Wideview USB DVB-T”.
  2. Configure the adapter. Due to compatibility issues, I had to make sure that “Close device handle when idle” was checked and “Disable PMT monitoring” was unchecked. If I didn’t do this, I would quickly hit the file handle limit for the USB stick and need to reboot to use it again. Once you’ve configured the right settings for your adapter click Save.
  3. Browse to the Multiplexes tab. You can click “Add DVB Network by location” on the General tab to automatically add the Multiplexes for your area however I found that they were out of date.
  4. Add the multiplexes for your area one by one by clicking “Add mux(es) manually”. For Freeview in the UK, you can find the values you need to enter by putting your postcode into an online Freeview transmitter checker to find your nearest transmitter. Enter the frequency, bandwidth (normally 8MHz), constellation and transmission mode (normally 8k), leaving all other values at Auto. Click Add to finish adding the multiplex. Repeat for as many multiplexes as you wish to add. When you add a multiplex, Tvheadend will automatically scan it for available channels.
  5. Browse to the General tab. On the information and capabilities section, wait for the “Muxes awaiting initial scan” count to drop to 0. Now, click “Map DVB services to channels” and Tvheadend will map the discovered channels.
  6. Done! Tvheadend is now advertising the channels.

Connecting the Front End to the Back End

Now that both the Front End and the Back End are configured and hopefully functional, we can tell the Front End (XBMC) to use the Back End (Tvheadend) to receive TV. To configure XBMC;

  1. Browse to System > Settings > Live TV > General.
  2. Click “Enabled” to Enable Live TV reception.
  3. You will be warned that there is currently no configured add on for Live TV. Click OK and you will be taken to the list of available Live TV back ends.
  4. Find “Tvheadend HTSP Client” and select Configure. Enter the IP address or hostname of your back end and then select OK.
  5. Select Enable. XBMC will now try and connect to the Tvheadend back end and retrieve channel listings etc.


At this point, you should be able to receive Live TV on your XBMC front end. Browse to Live TV > TV channels and select a channel to start watching. If you have no channels listed or can’t view any channels, check on Tvheadend that you’ve performed the mapping and check the debug log by clicking the little arrow on the bottom right of the web interface.

However, at this point, you may find that while you have audio, you don’t have any video. Earlier, it was noted that part of the power of the Raspberry Pi is its built-in GPU, which can decode video – a task that would be impossible on the weak CPU. However, as a cost-saving measure to help the Raspberry Pi meet its $35 price point, not all of the video formats the GPU can decode are enabled. The majority of video content today is encoded in MPEG4 H.264, which the Raspberry Pi’s GPU can decode by default. UK Freeview DVB-T transmissions, however, use the older MPEG2 format – simply because MPEG4 didn’t exist when Freeview originally launched.

Unfortunately, while the Raspberry Pi’s GPU is able to decode MPEG2, it is one of the codecs that was not enabled by default in order to save costs. Thankfully, you can buy a licence key to enable it for the cheap price of £2.40 from the Raspberry Pi web store. Buy it, await your key, and then install it on your Raspberry Pi. It is very important that you give the serial number of the front end Raspberry Pi – that’s the device that does the decoding and display! The back end Raspberry Pi just receives the MPEG2 stream from the aerial and re-transmits it over the network unaltered.


Once you’ve got this all setup, you’ll probably wonder how you’re going to control it. Having a keyboard and mouse attached to the front end probably isn’t practical or convenient for control. You could likely buy an Infrared receiver to plug in to the Raspberry Pi and use a remote. However, since XBMC is connected to your network, why not control it from another device that’s also on your network, say a phone or tablet.

Indeed, XBMC supports remote control by other devices. A web interface is available, or even better an API. This can be used by other applications to control XBMC and receive information.
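For example, XBMC’s JSON-RPC API can be driven by anything that can make an HTTP request – a sketch, assuming the web interface is enabled on its default port 8080 and the front end is at 192.168.0.10 (substitute your own address):

```shell
# JSONRPC.Ping is a handy first check that the API is reachable;
# remote apps use richer methods (Player.PlayPause, Input.Down, etc.)
# over the same endpoint.
curl -s -H 'Content-Type: application/json' \
     -d '{"jsonrpc":"2.0","method":"JSONRPC.Ping","id":1}' \
     http://192.168.0.10:8080/jsonrpc
```

A successful reply should contain "result":"pong".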

I use Yatse on my Android devices to control XBMC; solutions for other devices likely exist too, but I wouldn’t know about them as I’m purely an Android user. The great thing about Yatse is that it can act as a normal remote (e.g. going left, right, up, down etc. in menu interfaces) or it can be used as a “second screen” to select content which will immediately start playing back on XBMC – easier than navigating through several menus. For Live TV, just tap the PVR button, then tap the channel you want to watch. It’s that simple.

Performance Tweaks

The Raspberry Pi is just slightly too underpowered to run the XBMC interface smoothly (though as video playback is handled by the GPU, there are no issues there). If the lack of a smooth interface annoys you, you can either make tweaks to the XBMC interface, such as disabling the RSS feed, or you can overclock your Raspberry Pi. Overclocking runs your Raspberry Pi past its rated performance and may cause instability. However, overclocking will not void the warranty on your Raspberry Pi, so long as you do it in a way which is endorsed by the Raspberry Pi Foundation.

I run the front end Raspberry Pi using the pre-set Turbo setting. The supported method of overclocking is the raspi-config utility, but this isn’t present on OpenELEC. Instead, if you want to overclock you’ll need to edit some files as described in this forum thread. It is very important that you stick to the pre-set values (Modest, Medium, High, Turbo) and do not set force_turbo=1, as this will definitely invalidate your warranty. Without force_turbo=1, the Raspberry Pi will automatically overclock itself subject to the requirements of the current workload (i.e. it won’t overclock if the extra processing power isn’t needed) and the current temperature of the processor (it won’t overclock at 85 centigrade or higher, though you’re unlikely to hit this – mine, in a case, runs at around 65 centigrade with the Turbo overclock under full load).

Price List

  • 2x Raspberry Pis (Model B, 512MB RAM): £58.14 via Amazon.
  • 2x Cyntech Case for Raspberry Pi: £11.50 via Amazon.
  • 2x 8GB SanDisk Ultra Micro SD Card with SD Adapter: £17.70 via Amazon.
  • 1x Freecom DVB-T USB Stick: Discontinued. You may wish to buy the PCTV 290e instead, with the added bonus of supporting Freeview HD on DVB-T2. However, ensure you don’t buy the newer 292e as it is not compatible with Linux yet.
  • Edimax EW-7811Un USB WiFi Adapter: £6.78 via Amazon.

Total cost: £94.12, excluding DVB-T receiver. For me, the total cost is lower than this as I already owned some of these components. Still, not a bad price at all, and certainly much cheaper than having a new aerial feed run to the kitchen.


The system that I’ve built works quite well. Setup wasn’t exactly an easy process, but once it’s working it is very stable. This certainly made for a fun weekend project and the reception from my relatives has been positive; they have no issues using Yatse as a remote control from their phones either.

As for the next steps on this project, I’m looking into the DVR and timeshifting options offered by Tvheadend, as well as buying a more recent Freeview receiver, such as the previously mentioned PCTV 290e, in order to also receive HD Freeview transmissions.

Resetting the IPMI Password on the ASRock E3C224D2I

Note: This will likely work for other similar ASRock boards too (e.g. E3C226D2I), however as I’ve only got the E3C224D2I I cannot verify if it does or not. Due to the method, it may even work on other boards too. Proceed at your own risk!

I recently bought the ASRock E3C224D2I motherboard for a new Home NAS build. It has an integrated IPMI controller, so regardless of the state of the system it is possible to connect over the network to view the video output and use the keyboard and mouse. You can even mount images and directories from your computer making it possible to carry out an installation fully remotely. Perfect for troubleshooting when things go really wrong!

Unfortunately, after experimenting with the firmware upgrade option on the IPMI controller I locked myself out. No passwords would work, not the default admin/admin login nor admin and the password I had set before. I couldn’t find any way to reset the password either – the IPMI’s password reset interface required a working SMTP server to be configured, which I hadn’t done. Additionally, there seemed to be no option in the BIOS to reset the password. I really didn’t want to lose access to the IPMI as it’s one of the main reasons I chose this board.

After much Googling, I came across a forum post where someone had the same problem. A reply from an ASRock Rack representative said to contact them for a tool that would reset the password. I did so, but while waiting I thought it was worth trying another mechanism.

Supermicro, another vendor whose motherboards often feature IPMI, provides a download called ipmicfg. This DOS tool is intended for performing operations on the built-in IPMI chip without having to go through the IPMI interface – perfect for password resets. Despite it being from another vendor, I thought I’d give it a go, and what do you know, it works!

So, to reset your password using ipmicfg;

  1. Prepare a bootable DOS USB stick via your preferred means. I used rmprepusb to create a bootable FreeDOS USB stick.
  2. Download and place the ipmicfg files on to your newly created USB stick.
  3. Boot from your USB stick on the machine whose IPMI password you are trying to reset.
  4. Run ipmicfg -m to verify communication with the IPMI chip is working. If the command succeeds, you should see the IP address and MAC address of the IPMI displayed.
  5. Run ipmicfg -user list. This will display a list of users that can log in to the IPMI. Note down the User ID for the account whose password you wish to reset.
  6. Run ipmicfg -user setpwd userid password, replacing userid with the User ID you found with the previous command and password with the new password you wish to set.
  7. Done! Try logging in to the IPMI again with your new password. If things still don’t work, try running ipmicfg -fd to reset the IPMI to its factory defaults.

As to why this worked with a non-Supermicro board, I theorise that it’s because the IPMI chip on the ASRock board complies with the IPMI standard. Any standards-compliant tool, such as Supermicro’s ipmicfg, should therefore be able to interact with the chip.

It’s also worth noting that ASRock Rack did get back to me about a day later with a tool to reset the password, but I didn’t use it, having already found a way with ipmicfg. If you don’t feel like trying another vendor’s tool, contact them and wait for their reply.

Good luck!