The Developer of Cerber Security, Antispam & Malware Scan Gives Out Bad Advice To Push Their Plugin

When it comes to the security industry around WordPress, unfortunately there are many people that either don’t know what they are talking about or are intentionally peddling bad information to push products and services that provide little to no protection, while making things harder for companies that are doing the hard work to actually improve security.

We often run into examples of this even when we aren’t looking for them. We ran into another one just the other day while looking around for information for a post about a contact form failing due to WordPress’ REST API being disabled. That led us to an example of someone who at best doesn’t know the basics of WordPress security while being the developer of a security plugin, Cerber Security, Antispam & Malware Scan, which currently has 90,000+ active installs according to WordPress.org.

A big tell that the developer doesn’t have a basic clue about security surrounding WordPress is that a main feature of their plugin is blocking brute force attacks, despite the fact that those are not happening. They also make this brute force related claim in the marketing materials for the plugin:

By default, WordPress allows unlimited login attempts through the login form, XML-RPC or by sending special cookies. This allows passwords to be cracked with relative ease via brute force attack.

Saying that brute force attacks could crack a password with relative ease is belied by the number of login attempts needed to actually test all of the password combinations. Here is what we wrote about that previously:

To understand how you can tell that these brute force attacks are not happening, it helps to start by looking at what a brute force attack involves. A brute force attack does not refer to just any malicious login attempt, it involves trying to login by trying all possible passwords until the correct one is found, hence the “brute force” portion of the name. To give you an idea how many login attempts that would take, let’s use the example of a password made up of numbers and letters (upper case and lower case), but no special characters. Below are the number of possible passwords with passwords of various lengths:

  • 6 characters long: Over 56 billion possible combinations (56,800,235,584)
  • 8 characters long: Over 218 trillion possible combinations (218,340,105,584,896)
  • 10 characters long: Over 839 quadrillion possible combinations (839,299,365,868,340,224)
  • 12 characters long: Over 3 sextillion possible combinations (3,226,266,762,397,899,821,056)
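The arithmetic behind those figures is straightforward exponentiation; here is a quick sketch using the same letters-and-numbers alphabet as the example above:

```python
# With lower case letters, upper case letters, and digits there are
# 26 + 26 + 10 = 62 possible characters, so a password of a given
# length has 62**length possible combinations.
ALPHABET_SIZE = 26 + 26 + 10  # a-z, A-Z, 0-9

def combinations(length: int) -> int:
    """Number of possible passwords of the given length."""
    return ALPHABET_SIZE ** length

for length in (6, 8, 10, 12):
    print(f"{length} characters: {combinations(length):,} combinations")
```

Even at thousands of login attempts per second, working through numbers like those would take far longer than any real attack campaign runs.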

The post that we had run across was “Why it’s important to restrict access to the WP REST API”. The post is riddled with errors, for example, citing someone as having discovered a vulnerability they didn’t.

The general problem was that they were suggesting disabling the REST API, which not at all coincidentally they touted their plugin could do, on the basis that there could be security issues with it since it is new. But that is true of anything. In reality, the vulnerability they discussed in the post actually showed how WordPress does a good job of handling security in one important way, since the auto update mechanism that has been in WordPress since version 3.7 allows the vast majority of WordPress websites to be updated to a new security release in a very short time. Normally WordPress checks for updates every 12 hours, and that can be shortened when a security update is being released, so most websites would likely have been updated within around 12 hours. With this vulnerability there was no evidence of it being exploited until after it was disclosed as having been fixed, a week after the version that fixed it was released (while the information on this vulnerability was held back for a week, other security fixes were mentioned when the version was released).

The developer though put forward a very different impression:

Unfortunately, the REST API bug had not yet been fixed. That leaves unprotected millions of websites around the world. It’s hard to believe but updating WordPress on shared hostings may take up to several weeks. How many websites have been hacked and infected?

That it may take several weeks for WordPress on shared hosting to update is indeed hard to believe, since it doesn’t appear to be true, and no evidence was presented to back up a claim even they admit is counter-intuitive. The developer also provides no evidence that any websites were hacked before the vulnerability was disclosed as having been fixed, which, as far as we are aware, they couldn’t have, since it doesn’t appear any were. That all probably shouldn’t be surprising, since the developer apparently had never checked to see if brute force attacks were actually happening before building a plugin to protect against them.

For websites where the auto update mechanism was disabled or didn’t work, some did get hacked due to this vulnerability, but that is the only vulnerability in more than a decade that we are aware of where any sizable number of WordPress websites were hacked (in that time outdated WordPress installations have frequently been falsely blamed for the hacking of websites by security companies that either didn’t know what they were talking about or were intentionally lying to get themselves press coverage). So disabling the REST API after this vulnerability was fixed has not actually improved the security of websites in any meaningful way.

There also was the issue of the developer conflating bugs and security vulnerabilities, which is important since having a lot of bugs fixed in something doesn’t mean that there was a security risk.

The downside of disabling the REST API can be seen in that, like the other plugin we mentioned in the post from earlier this week, this plugin can cause Contact Form 7 based forms to stop functioning. This is exactly the kind of downside that often isn’t considered when people indiscriminately use WordPress security plugins and services without first finding out if there is any evidence that they provide effective protection. In this case, what makes this stand out more to us is that our Plugin Vulnerabilities plugin, which is designed to help protect against a real issue, is much less popular than this plugin. It could be worse though, as another security plugin designed just to protect against brute force attacks has 2+ million active installs according to wordpress.org, and it not only doesn’t protect against a real threat, but contains a security vulnerability of its own.

Disabling WordPress’ REST API Can Cause Contact Form 7 to Not Work

In our work for our Plugin Vulnerabilities service we frequently need to contact developers of WordPress plugins to let them know about security vulnerabilities in their plugins (either ones we have discovered or ones others have disclosed), and that often means submitting messages through contact forms (not surprisingly, these are often handled by WordPress plugins). We have all too frequently run into situations where the contact forms didn’t work, which seems like a good reason for people managing websites with a low volume of contacts to periodically make sure their contact forms work; otherwise you could be missing out on messages.

In a recent instance of this, a loading graphic showed up after hitting Send and then never changed to a message about the form being successfully sent. Pulling up the web browser’s console showed an error:

Failed to load resource: the server responded with a status of 401 (Unauthorized)

The page that error related to was /wp-json/contact-form-7/v1/contact-forms/193/feedback, which indicated the Contact Form 7 plugin was being used to handle the contact form. Visiting that page showed the following message:

{"code":"rest_cannot_access","message":"DRA: Only authenticated users can access the REST API.","data":{"status":401}}

Based on that, it seems that disabling WordPress’ REST API for those not logged in to WordPress caused the contact form to not work. A quick search showed that message is generated by another plugin, Disable REST API, which as the name suggests disables WordPress’ REST API.
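For anyone wanting to check whether something like this is blocking a site’s REST API, here is a minimal sketch of interpreting the response from an endpoint like the one above (the helper function is our own illustration; the "rest_cannot_access" code is what the Disable REST API plugin returned in this case):

```python
import json

def rest_api_blocked(status_code: int, body: str) -> bool:
    """Return True if a REST API response indicates unauthenticated
    access is being blocked."""
    if status_code != 401:
        return False
    try:
        data = json.loads(body)
    except ValueError:
        # A 401 with a non-JSON body still means access was refused.
        return True
    return data.get("code") == "rest_cannot_access"

example = ('{"code":"rest_cannot_access","message":"DRA: Only authenticated '
           'users can access the REST API.","data":{"status":401}}')
print(rest_api_blocked(401, example))  # True
```

A request to a form’s feedback endpoint that comes back like that explains a silently failing Contact Form 7 form.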

As this shows, using something that disables the REST API can have some serious downsides. Not surprisingly for us, while looking into this we found someone in the WordPress security industry who doesn’t seem to have a clue about WordPress security pushing disabling it (and promoting using their plugin to do it), which we will discuss in a follow up post.

Excessive Debug Log Files Can Slow Down Zen Cart Admin Area

We recently ran into an issue while working on an upgrade of a Zen Cart website that seems worth sharing in case someone else runs into something similar.

We had first done a test of the upgrade using a copy of the website made through FTP access and placed into a directory on the existing website. Everything worked fine with that test copy. Then after the production website was upgraded, the client noticed very long load times for pages in the admin area of Zen Cart. That wasn’t happening on the test copy, and apparently it hadn’t been happening on the production website before the upgrade (though we couldn’t confirm that). Seeing as the two websites should have been identical, that didn’t seem to make sense.

We then stumbled into the answer. In trying to debug things we found that over FTP we couldn’t see all of the debug log files in the /logs/ directory for the website, as there was a limit of 8,000 items being shown, which also meant the test copy had started with only that many of them. We couldn’t tell how many files there were in the production website’s directory, as the number of files was apparently too many for the file manager in the cPanel control panel to be able to display any files in the directory at all. When we temporarily replaced the /logs/ directory on the production website to be able to see the latest entries, we found that the page load times in the admin area were no longer so slow.

What looks to have been happening is that when visiting an admin page, a new file was being created to warn that the setting for where to place log files of the parse time was invalid, and the number of files already in the directory was slowing the creation of that file, leading to the slow load times. So clearing out the old debug log files will resolve such a situation, but if lots of debug files are being generated, dealing with their cause is important, as the file count will just start growing again otherwise.
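For anyone facing a similarly overflowing /logs/ directory, here is a rough sketch of clearing out old debug log files (the directory path, the .log extension, and the 30 day cutoff are illustrative; adjust them to your installation, and deal with whatever is generating the files or the count will just climb again):

```python
import os
import time

def prune_old_logs(log_dir: str, max_age_days: int = 30) -> int:
    """Delete .log files older than max_age_days from log_dir;
    return how many files were removed."""
    cutoff = time.time() - max_age_days * 86400
    removed = 0
    for name in os.listdir(log_dir):
        path = os.path.join(log_dir, name)
        if (name.endswith(".log") and os.path.isfile(path)
                and os.path.getmtime(path) < cutoff):
            os.remove(path)
            removed += 1
    return removed
```

Running something like this on a schedule keeps the directory small enough that file creation stays fast, but it treats the symptom, not the cause.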

BitNinja Makes Up Zero-Day Attack

The terribleness of security companies never ends. The latest example is something we ran across today while looking into a claim that outdated software was the cause of a security issue on a server. What had been pointed to as evidence of that was a report from a security company named BitNinja. That report claimed there was malicious activity based on emails being sent from software on a website, but based on the information provided there was nothing we could see that would indicate whether there really was an issue or a false positive was happening (it would seem that the company doesn’t have a good understanding of what information is important to determine that sort of thing).

In looking over BitNinja we quickly ran across evidence of them spreading false information, which happened to involve a topic we just discussed earlier today, exploitation of a recently fixed vulnerability in MODX. The title of a blog post on their website made a striking claim about that: “Critical zero-day vulnerability in MODX Revolution patched by BitNinja WAF”. A zero-day vulnerability refers to a vulnerability that is being exploited before the developer is aware of it, so they have had zero days to fix it. That obviously is quite concerning, since doing the security basic of keeping software up to date wouldn’t protect against it, and if there were a security system that could protect against such a situation it would be useful.

With a website that had been hacked through that vulnerability, the attempts to exploit it started about a week after the vulnerability was fixed, with the first attempts logged on July 19. There was nothing we saw in looking into the situation that would indicate that this was a zero-day vulnerability.

BitNinja seems to either not have any idea what they are talking about or to be intentionally misleading people, as their claim that this is a zero-day vulnerability is based on spotting exploitation attempts two weeks after a fix for the vulnerability had been released:

At 26th July at 6 PM, the flow has been started according to our data. This botnet is really aggressive, as, in the first 6 hours, we detected almost 13.000 attacks!

They also were quite behind in even spotting the attacks, which doesn’t say great things about them either.

Blaming the Victim

Looking at their About Us page, a couple of things stood out to us, one being that it starts with a claim of near equivalency between hackers and the people running web servers:

We believe every server owner is responsible for their servers. If they have been hacked – and used for cybercrime – the owner is almost as guilty as the hacker is.

There also is the origin of their business, which doesn’t come from a security background, but from a web host not being able to maintain their own servers:

We couldn’t ensure the security of our servers beyond applying continuous updates. To make matters worse, we started losing customers after a series of downtimes. We quickly realized that server security is not a question of a single component but is about several components working together to harden a server. This inspired us to create BitNinja, an all-in-one security solution designed for hosting providers.

They don’t make any claim to having security expertise on that page (not that it would mean much based on what we have seen of security companies making such claims).

Vulnerability in Older Versions of MODX Being Exploited

Quite often with hacked websites, outdated software is pointed to as the source of the hack. That claim is usually made without any knowledge of whether it is actually true. Many security companies that market themselves as having unique expertise in dealing with hacked websites don’t even attempt to determine how websites are hacked, despite that being one of the three key components of a proper cleanup, so they would have no idea what the cause might be. Often these companies don’t seem to have even a cursory knowledge of what they are talking about either. As an example, one well known security company, Sucuri, once told people to update software despite it being well known that the vulnerability being exploited was in the then current version of that software (that kind of thing somehow never stopped journalists from repeating misleading and false claims made by that company, or people from claiming that they are a reputable company).

From what we have seen, those baseless claims are usually easy to spot, as there usually isn’t even a specific vulnerability pointed to as the cause of the hack, which should be something known if someone has actually done the work to determine the source of the hack and determined it was outdated software.

As an example of finding that outdated software actually was the cause of a hack, we were recently brought in to clean up a hacked MODX website. MODX websites have not been a common type needing cleanups from us recently, so the software in use on the website was of some note right away.

In trying to determine how a website was hacked, the logging is probably the most important resource, but the files can often tell you a lot, and the two can work together to speed up the process. In the case of this website there was an obviously malicious file named dbs.php in the root directory of the website. Shortly before we started the cleanup, that file had a number of POST requests sent to it (requests that contain additional data, which is the type of request hackers most often send). Looking back at the logging to where that file was first requested, we found it in a set of requests sent by an IP address from Ukraine:

134.249.50.5 - - [19/Jul/2018:19:55:23 -0400] "GET / HTTP/1.1" 403 134 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36"
134.249.50.5 - - [19/Jul/2018:19:55:23 -0400] "POST /connectors/system/phpthumb.php HTTP/1.1" 403 134 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36"
134.249.50.5 - - [19/Jul/2018:19:55:24 -0400] "GET /dbs.php HTTP/1.1" 403 134 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36"

The first request there is for the homepage of the website. The second one sends a POST request to the file /connectors/system/phpthumb.php. Finally, there is a request for the dbs.php file. Based on that, it would appear that the file phpthumb.php was the vector for adding the dbs.php file.
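That kind of log review can be scripted; below is a rough sketch of our own (an illustration, assuming Apache-style access logs) that finds the first request for a suspicious file and pulls out every other request from the same IP address, which is how a vector like phpthumb.php shows itself:

```python
import re

# Match the start of an Apache common/combined log line:
# IP, identd, user, [timestamp], "METHOD PATH ..."
LOG_PREFIX = re.compile(r'^(\S+) \S+ \S+ \[([^\]]+)\] "(\S+) (\S+)')

def first_request_context(log_lines, suspicious_path):
    """Return (ip, all requests from that ip) for the first request
    for suspicious_path, or None if it never appears."""
    for line in log_lines:
        match = LOG_PREFIX.match(line)
        if match and match.group(4) == suspicious_path:
            ip = match.group(1)
            return ip, [l for l in log_lines if l.startswith(ip + " ")]
    return None
```

Running that against the lines above with "/dbs.php" points straight at the phpthumb.php POST request made one second earlier.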

In reviewing the file phpthumb.php, there wasn’t anything in the file itself that looked like a vulnerability that would permit uploading a file, which that series of requests indicated was what the hacker was attempting to do. In fact, the file only contained four lines of code, which just call on code in other files:

define('MODX_CONNECTOR_INCLUDED', 1);
require_once dirname(dirname(__FILE__)).'/index.php';
$_SERVER['HTTP_MODAUTH'] = $modx->user->getUserToken($modx->context->get('key'));
$modx->request->handleRequest(array('location' => 'system','action' => 'phpthumb'));

Instead of digging through more code at that point, we did a web search for “/connectors/system/phpthumb.php”, and through that we got pointed to the issue. There was a post with the details of a vulnerability matching what we had seen, published on July 13, and, what seems more important, code for exploiting the vulnerability that was released on July 18. On this website the first attempt to exploit it was on July 19, so it would seem the exploit code was quickly utilized by hackers.

That vulnerability had been fixed in version 2.6.5 of MODX, which was released on July 11, and the developers provided clear notice of the need to update due to the security fixes in it, writing in the release announcement:

Today we released MODX Revolution 2.6.5. It contains fixes for two critical security vulnerabilities affecting all versions at or prior to 2.6.4. Upgrading to 2.6.5 should be considered mandatory.

and

Upgrading is Critical

Revolution 2.6.5 contains critical security enhancements, you should upgrade to 2.6.5 now. See below for more info.

We cannot stress the importance of diligently upgrading to the latest version of MODX enough. While no software is 100% secure, powering your site with the most current version usually helps protect you from hackers that rely on exploiting outdated software. If you’re not sure what version of MODX Revolution you’re running, log into your website Manager. If the version number doesn’t appear in the top left-hand corner of the Manager, go to Manage>Reports>System Info.

The two vulnerabilities refer to the ability to upload files and to remove files/directories. From the post with the details of the vulnerability, it sounds like in versions 2.5.1 to 2.6.4 the ability to exploit the file upload vulnerability would be more restricted than was the case with the website we were dealing with, which was running 2.4.1.

Cleaning Up After This Hack

On this website the hacker had done quite a number on it. The .htaccess file in the root directory had been removed, leading to all the pages other than the homepage no longer being functional. That seems to have been done to remove any restriction in the .htaccess file that would have blocked the hacker from sending requests to the malicious files they were uploading. When trying to go to the admin area, you would be redirected to another website due to the contents of all of the JavaScript files on the website having been replaced with malicious code.

The best option to clean up after this would be to restore a clean backup from before the hack (making sure that all of the existing files are removed during the restoration). Seeing as the vulnerability wasn’t disclosed until July 13, a backup from before then would be a good option. You might be able to get away with one from before July 18 as well. A review of the logging by someone familiar with all of this would likely be able to confirm when the hacker hacked the website.

From what we could see from that website, it would appear that there are likely multiple hackers exploiting this vulnerability and doing different things, so it wouldn’t be possible to provide general instruction on what to remove from the website to clean up if there isn’t a backup available (though based on past experience that won’t necessarily stop someone from claiming to provide that and unintentionally or intentionally leading people astray).

If you are looking for a professional cleanup from this or any other hack of a MODX website we provide a service for that. We can also upgrade MODX for you.

Fixing Zen Cart Admin Login Redirect Loop Caused By Forcing Website to Be Served Over HTTPS

We recently had someone come to us for Zen Cart support after they could no longer log in to the admin area of Zen Cart once their web host had configured their website to always be served over HTTPS. When they tried to log in, they were redirected back to the log in page without any error message being shown. While there are some other issues that can cause that same type of redirect to occur, in the situation where the website has been changed to be served over HTTPS, what we found fixes this is updating the configuration file for the admin area, located at /includes/configure.php inside the admin directory (whose name varies from website to website), to use the HTTPS address for the website.

The relevant portion of the configuration file for a website using a recent version of Zen Cart is below:

/**
 * Enter the domain for your Admin URL. If you have SSL, enter the correct https address in the HTTP_SERVER setting, instead of just an http address.
 */
define('HTTP_SERVER', 'http://www.example.com');
/**
 * Note about HTTPS_SERVER:
 * There is no longer an HTTPS_SERVER setting for the Admin. Instead, put your SSL URL in the HTTP_SERVER setting above.
 */

Changing the “HTTP_SERVER” setting to start with https:// instead of http:// resolves this, as the proper address is then used when handling the log in.

Security Company Promises They Can Prevent Websites from Getting Hacked Again and Immediately Contradicts the Claim

What we have recently been noticing over and over in looking over the marketing materials for website security services is that they claim to protect websites from being hacked and then almost immediately contradict that claim. As yet another example, we were recently looking at a WordPress security plugin named Sitesassure WP Malware Scanner, and as discussed over at the blog for our Plugin Vulnerabilities service, we noticed that among other issues, it is insecure and contains a vulnerability (security software with security vulnerabilities of its own is a common occurrence from what we have seen). That plugin seems to be largely a way to promote the security company Sitesassure.

On the homepage of Sitesassure they promote a service they offer with the claim “DONT GET HACKED AGAIN”:

We could find no evidence presented on their website that the service is effective at all. When making a claim like that, there really should be evidence from independent testing that backs it up. If their WordPress plugin is any indication, they don’t have much of a grasp of security, which seems like a prerequisite for being able to offer a service that could possibly provide that protection.

Everything we have seen from numerous different angles indicates that services like that don’t in fact provide the claimed protection. That includes plenty of people coming to us asking if we offer a service like that, which works, after using one that didn’t, and the fact that the providers of them often prominently promote that the service includes hack cleanups. That is the case with this service as well, as scrolling down the website just a bit from the claim that the website won’t get hacked again, there is another part of the promotion for that service:

If the website won’t get hacked again with that service then there shouldn’t be anything to clean up.

Right after that they seem to water down the claim even more by moving the goalposts from keeping the website from being hacked to just it not going down when that occurs:

(While they claim WordPress is a specialty of theirs, they consistently improperly capitalize it, which seems like a good indication it is actually something they are not all too familiar with.)

If you really want to fight back, the best thing to do is the basics of securing websites, as those will actually prevent most hacks, which would make hacking have less of a payoff for the hackers.

If a website has already been hacked, the important thing to do is make sure that the website is properly cleaned. From what we have seen, providers of services like that usually don’t even attempt to do that, which doesn’t seem that surprising considering that they seem to think it is acceptable to market a security service in a way they are aware is not true.

When looking for a company to properly clean things up, these are the things you want to hear from them that they do:

  • Clean up the hack.
  • Get the website as secure as possible (which usually involves getting any software on the website up to date).
  • Try to determine how the website was hacked and fix that.

We always do those things when doing a cleanup. When those things haven’t been done by other companies, it has frequently led to us being brought in to re-clean websites.

Bluehost’s Poorly Thought Out Attempt to Clean Up Hacked Websites

We have repeatedly brought up the web host Bluehost in the past on this blog due to various security related issues involving them, including things like using phishing emails to sell unnecessary security services and it looking like a security issue on their end might be leading to websites being hacked. Recently we have started running into another issue while working on hack cleanups of websites hosted with them: it appears that Bluehost is attempting to do some cleanup of hacks in a way that doesn’t seem well thought out and can leave websites with more problems beyond just the ones caused by the hack.

What looks to be going on is that, to try to clean files with malicious code, Bluehost is removing code from the files and making a copy of the previous version of each file with a different name. As an example of those names, in one recent instance the copy of a file named link-manager.php was named link-manager.php.suspected.1524640055. The new files have no permissions, so you can’t view their contents (or change the permissions to be able to do that). In many instances, the original files have been totally emptied, even if it appears that they contained legitimate code in addition to malicious code.
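To take inventory of what has been touched by this, something like the following sketch can list the renamed copies along with whether the corresponding originals were emptied (the naming pattern is inferred from the example above, so treat it as an assumption):

```python
import os
import re

# Assumed Bluehost pattern: original-name.suspected.<unix timestamp>
SUSPECTED = re.compile(r'^(?P<original>.+)\.suspected\.\d+$')

def find_suspected_pairs(root):
    """Yield (renamed copy, original path, original size or None)
    for every *.suspected.* file under root."""
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            match = SUSPECTED.match(name)
            if match:
                copy = os.path.join(dirpath, name)
                original = os.path.join(dirpath, match.group('original'))
                size = (os.path.getsize(original)
                        if os.path.exists(original) else None)
                yield copy, original, size
```

An original reported at 0 bytes is one of the emptied files that will need to be restored from a clean copy.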

One of the problems that causes is that legitimate files used to generate websites are being emptied, which then causes the website to stop working. Due to the permissions on the new files, it isn’t possible to easily see their previous contents and quickly restore the non-malicious portion without getting access to another copy of the file.

Where things get more problematic is that they are changing the permissions on some directories as well as files, which not only restricts seeing what is in the directory, but also introduces a complication that doesn’t occur with the change to individual files: you can’t delete the directories through FTP or the file manager in Bluehost’s control panel.

Bluehost does have the capability to make the files and directories accessible if you contact them.

What is important to note is that in every instance we have run into this so far, there have been malicious files that were not dealt with by this cleanup, so the upside of them attempting to clean things up is limited, while it can come with a fairly significant downside. Another problem with this type of approach is that simply cleaning up hacked files doesn’t deal with the underlying cause that allowed the hacker to add or modify files in the first place, so the hacking could continue.

If Your Website Is Critical to Your Business Make Sure You Properly Prepare for Software Upgrades

We often find that businesses’ investment of money and effort in their websites is out of line with what would be optimal. For example, you have people that spend over a thousand dollars a year on a security service that doesn’t even really attempt to secure the website, while more basic maintenance is often neglected or done by those that don’t have the necessary experience to handle it properly, even when the business has the means necessary to avoid that and the website is critical to the business.

We recently had someone contact us looking for emergency support after a Drupal upgrade went wrong, which led to a website they said was critical to the business not working. One way to handle that situation would be to revert to a backup, but they said the most recent backup was three months old.

If a website is critical to a business, then frequent backups should be made outside of any done before upgrades, so there should have been a more recent backup than that. But what about properly preparing for an upgrade beyond that? At the very least, a backup should be made before the upgrade is started. If the website is critical to the business, though, that probably isn’t enough.

One reason for that is that there is always a possibility of some issue that would make restoring a backup difficult or impossible. Let’s say for some reason the backup method being used isn’t actually fully backing things up (which can happen). Someone that knows what they are doing can usually ensure that the backup is complete without having to do a test run of a restoration, but one way to test a restoration also provides a better method for handling potential issues with an upgrade.
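As an illustration of that kind of completeness check, here is a sketch that compares the live files against a backup copy by relative path and size (the size-only comparison is a simplification; a stronger check would hash file contents, and the database needs checking separately):

```python
import os

def file_manifest(root):
    """Map relative path -> size for every file under root."""
    manifest = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            manifest[os.path.relpath(path, root)] = os.path.getsize(path)
    return manifest

def backup_is_complete(live_dir, backup_dir):
    """True if every live file exists in the backup with the same size."""
    live, backup = file_manifest(live_dir), file_manifest(backup_dir)
    return all(backup.get(path) == size for path, size in live.items())
```

A backup that fails even this crude check is one you don’t want to discover is incomplete in the middle of a failed upgrade.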

If a website is truly critical to a business, then the best thing to do is a test of the upgrade before the production website is upgraded. That will significantly lessen the chance of something going wrong when the production website is upgraded, which would otherwise require trying to quickly fix it or reverting to a backup. Instead, time can be taken to understand what is going wrong and how to ameliorate that before the time comes to upgrade the production website. Setting up the test copy of the website also allows testing out the restoration of a backup as well.

Usually a test copy of the website can simply be set up in a directory in the existing website’s hosting account/domain name. A more advanced way to handle it is to use a separate hosting environment, say a separate hosting account, though with that it is important to make sure the server environment is the same as the production website’s, so that some inconsistency between them doesn’t lead to an issue only appearing when the production website is upgraded.

Finally, there is testing the test copy of the upgrade. For some types of websites, a quick check over a few pages will be enough, but for others more extensive manual or automated testing may be needed. Another sometimes overlooked area of testing is becoming familiar with functionality changes in the backend of the website when significant upgrades are made. We have been involved in situations where that hadn’t happened and someone was then panicking because they needed to do something in a hurry but weren’t aware of how to do it in the new version, so trying out normal activities in the test copy when a significant upgrade is being done is a very good idea.

Looking at Recently Modified Files Isn’t a Good Way To Find Files Added or Modified by Hacker

We often find that companies that claim to have expertise (and often unique expertise) in dealing with hacked websites either don’t know what they are doing or are intentionally doing things improperly. That makes it hard to recommend to people in general that they should hire someone to clean up their hacked website (despite us actually doing that very type of work). But at the same time we often have people contact us that have tried to clean up their own website who clearly don’t know what they are doing and have gotten poor results. Those are not always unconnected issues as there is lots of content put out by security companies on how to clean up websites that is either intentionally poor and really intended to entice people to hire them to clean up the website or is poor because the companies really don’t know what they are doing.

An example of that we happened to run across recently involves a blog post from a company named WPHackedHelp that is supposed to tell you how to fix a “Japanese Keywords Hack” on a WordPress website, https://secure.wphackedhelp.com/blog/fix-wordpress-japanese-keywords-hack/. Considering that what we assume they are referring to by that actually encompasses a wide variety of different issues, trying to write an all-encompassing article would be difficult, if not impossible. Instead they have written one that is really of little use and could equally have been written about dealing with many other issues. But we wanted to focus on one obviously problematic piece of advice.

The post in part states you can find malicious files by checking for recently modified files:

Check Recently Modified Files

To search for the most recently modified files, use SSH to login to your web server account and then execute the following command:

find /path-of-www -type f -printf '%TY-%Tm-%Td %TT %p\n' | sort -r

Navigate through the files and see if you find any doubtful changes made to the code.  If so, replace the files with the clean backup version of it.
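For those unfamiliar with it, that command (on systems with GNU find) prints each file’s last modified timestamp followed by its path, with the sort putting the newest files first. A small self-contained demonstration, using a throwaway directory rather than a real web root:

```shell
# Create a throwaway directory with files "modified" at different times.
mkdir -p /tmp/find-demo
touch -t 201801010000 /tmp/find-demo/old.php
touch -t 201901010000 /tmp/find-demo/new.php

# The same kind of listing as in the quoted advice:
# timestamp then path, newest first.
find /tmp/find-demo -type f -printf '%TY-%Tm-%Td %TT %p\n' | sort -r
```

Here new.php is listed first, since its claimed modification date is the most recent. The key word, as discussed below, is "claimed".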

For anyone that has dealt with even a few hacked websites there should be an obvious problem with that advice, and for any company that claims to have expertise dealing with hacked websites there should be another obvious issue. WPHackedHelp certainly claims to have that level of expertise:

With over 15 years of experience, our WordPress security experts specialize in website malware removal & cleanup WordPress websites.

It’s worth noting though that WordPress itself is barely 15 years old, so we would assume that is referring to combined experience, though they are not upfront about that, which seems like a red flag.

The glaring problem with relying on the last modified date of files is that hackers frequently change the last modified date of files they have added or modified so that the dates match other files in the same directory. In some instances that is done with some of the files and not others, so someone might think they have gotten all of the malicious files when really they have missed a lot of them.
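To illustrate how trivially that can be done, on any system with standard Unix tools a file’s last modified date can be set to match another file’s with a single touch command (the file names here are just stand-ins for illustration):

```shell
# Stand-ins for a legitimate file and a file added by a hacker.
mkdir -p /tmp/touch-demo
touch -t 201601150000 /tmp/touch-demo/wp-load.php   # "legitimate" file
touch /tmp/touch-demo/backdoor.php                  # newly added file

# One command makes the new file's timestamp match the legitimate one.
touch -r /tmp/touch-demo/wp-load.php /tmp/touch-demo/backdoor.php

# Both files now show the same last modified date.
ls -l /tmp/touch-demo
```

After that, a listing sorted by modification date would show the added file alongside files from years earlier, not at the top.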

The other issue with this is that people often only become aware that their website has been hacked well after it happened; in some extreme instances the hackers originally got in years ago. So even if the hacker hasn’t changed the last modified dates, looking at recently modified files wouldn’t identify the files they added or modified.

At the end of WPHackedHelp’s post you get to the seeming insincerity of the whole thing as they write:

Having listed an array of methods requiring technical expertise, let’s consider an approach that is way smarter, consumes less time and takes the burden off your shoulders. WP Hacked Help deploys a systematic plan to clean up your WordPress website. The site is thoroughly scanned and the detected flaws are dealt by an expert team to provide you with a website free of malicious codes. Within a short span of time, your website will be live up again, running efficiently like before.

Why not be upfront about that, considering that it is supposed to be “way smarter, consumes less time and takes the burden off your shoulders”?

What is missing in that post, or anywhere else that we looked on this company’s website for that matter, was any mention of one of the three key components of a proper hack cleanup: trying to determine how the website was hacked. Not only is that important to make sure that the hacker can’t just get back in after things are cleaned, but we have found that the work involved with that is important to making sure the hack is fully cleaned up. In almost every instance when we are hired to re-clean a hacked website there had been no attempt to do that, so avoiding companies that don’t do that is something we would recommend.
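Much of that determination comes down to evidence gathering. As one simplified, hypothetical example: once a malicious file has been identified, searching the web server’s access log for requests to it can show when it was first used and what the same visitor did around that time (the log content and file name here are fabricated for illustration):

```shell
# A tiny, made-up access log standing in for a real one.
cat > /tmp/demo-access.log <<'EOF'
203.0.113.5 - - [01/Feb/2019:10:12:01 +0000] "POST /wp-content/uploads/x.php HTTP/1.1" 200 512
198.51.100.7 - - [01/Feb/2019:10:15:44 +0000] "GET /index.php HTTP/1.1" 200 2048
203.0.113.5 - - [01/Feb/2019:10:16:02 +0000] "POST /wp-content/uploads/x.php HTTP/1.1" 200 318
EOF

# Find every request to the malicious file; the earliest hit, and the
# other requests from the same IP around it, are starting points for
# working out how the file got there in the first place.
grep 'x.php' /tmp/demo-access.log
```

That is only a starting point, but it is the kind of step that is entirely absent from WPHackedHelp’s advice.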

If the focus of security companies was on figuring out how websites were being hacked and working to make sure that the instances of those things are lessened, security could be in much better shape than it is. That of course would mean less business for a lot of those security companies, so instead you have an arms race type situation where hackers figure out new ways to avoid detection (like changing the last modified date), which makes it harder to clean up hacked websites, leading to more business for security companies, but a worse situation for their customers since the root cause isn’t being dealt with properly.