Opened 3 years ago

Last modified 18 months ago

#11405 new Feature/Enhancement Request

auto upgrade all wordpress and drupal sites, including third party modules and plugins

Reported by: Owned by:
Priority: Medium Component: Tech
Keywords: drupal wordpress security web-app-security Cc: support-team@…
Sensitive: no


I'd like to open for discussion, first with the support team and then with the leadership committee, a potentially controversial suggestion: automatic, un-attended upgrades of all WordPress and Drupal sites on our servers (excluding major Drupal version upgrades).

Here's the problem: About once a month, sometimes more often, a member's web site (almost always running either Wordpress or Drupal) is compromised. The compromise results in:

  • Many hours of wasted MF/PL support time tracking it down and addressing the consequences (removal from email block lists, etc)
  • Many hours of MF/PL support time detecting and fixing the vulnerability on behalf of members without the resources to fix it themselves.
  • Days and sometimes weeks of downtime for the member with the affected sites.
  • Email delivery problems for groups on the same server (affecting users who forward their email; if the server's IP address has been black-listed, it can also trigger content-based spam blocks for users who include links to their web site)
  • Hours or days of server slow downs when we don't immediately detect the compromise as the attackers use up valuable CPU cycles sending spam
  • The potential for a privilege escalation that could result in a root compromise that would require days of work to rebuild the server in question
  • Reduced confidence in MF/PL by both the member with the compromised site and other members affected by it

To address this problem, I'd like to propose that we run a nightly cron job shortly after the nightly backup. It would detect all Drupal and WordPress sites on a server and, using the command line tools drush and wp, automatically update both core and third party plugins/modules on each site and run any pending database updates.

As part of this overhaul, we would completely eliminate the centralized Drupal installation approach. Instead, the control panel would install Drupal (and eventually WordPress) directly in each user's web directory.

For members who want to upgrade their own sites on their own schedule, we would respect the presence of the file .well-known/mfpl/no-auto-update. If we find that file, we would not run upgrades in that directory.
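
A minimal sketch of what that nightly job could look like, assuming the /home/members/*/sites/*/web layout used elsewhere in this ticket and the opt-out file named above. The actual drush/wp invocations are left commented out, and the detection heuristics (wp-config.php, sites/default/settings.php) are only a first approximation:

```shell
#!/bin/sh
# Hypothetical nightly auto-upgrade pass. Site layout and the
# .well-known/mfpl/no-auto-update opt-out file follow the proposal
# above; the real update commands are commented out.

upgrade_site() {
  site="$1"  # path to one site's web root
  # Respect the opt-out marker.
  if [ -e "$site/.well-known/mfpl/no-auto-update" ]; then
    echo "skip $site"
    return 0
  fi
  if [ -e "$site/wp-config.php" ]; then
    echo "wordpress $site"
    # wp --path="$site" core update
    # wp --path="$site" plugin update --all
    # wp --path="$site" core update-db
  elif [ -e "$site/sites/default/settings.php" ]; then
    echo "drupal $site"
    # drush --root="$site" pm-update -y
    # drush --root="$site" updatedb -y
  else
    echo "unknown $site"
  fi
}

# Run over every member site shortly after the nightly backup.
for site in /home/members/*/sites/*/web; do
  [ -d "$site" ] || continue
  upgrade_site "$site"
done
```
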

The main downside is that these upgrades may occasionally break sites. This possibility is particularly acute for web sites that have not been upgraded in a long time. However, in my experience running both drush and wp upgrades, this problem is rare, and I think the trade-off of potentially breaking some sites is worth the benefit of providing a more stable and secure system for everyone.

To help educate members about this process, we could take several steps:

  • When creating a Drupal or WordPress site in the control panel, there would be a checkbox, checked by default, that says: auto-upgrade my site. If unchecked, the installer would create the .well-known/mfpl/no-auto-update file.
  • Whenever a new web configuration is created in the control panel, a README file would be installed warning people that if they install Drupal or WordPress by hand, it will be auto-upgraded unless they include the file.

Thoughts? Concerns?

Attachments (1)

.gitignore (1.5 KB) - added by 3 years ago.
.gitignore for Wordpress


Change History (17)

comment:1 Changed 3 years ago by

See also #11336 (problems installing Drupal 8 using our central drupal installation method).

comment:2 Changed 3 years ago by

I've only helped with a few compromise mitigations (far fewer than Jamie, Ross, or Jaime), but I agree that they're a pain, and that they require a lot of effort to clean up.

Heck, I've even turned one of my wordpress mitigations into a presentation :)

I don't have a good sense of the number of compromises that result from easily-guessed passwords, vs. the number that result from running vulnerable code. Automating drupal and wordpress upgrades will likely help with the latter, but I'm not sure they'll help much with the former.

I've always understood that MF/PL's centralized drupal installation (aka /usr/local/share/drupal) was intended to help keep member sites up to date. Running drush pm-update in individual member sites should accomplish the same goal.

Overall, I think automated upgrades could be a win. ticket:11336 points out that our centralized approach doesn't play well with drupal 8. Eliminating the central install would avoid the need to develop a workaround.

The "fully embedded" approach should make it easier to support wordpress installations via the control panel (i.e., the wp utility we'd need for control panel installations is the same one we'd need for automating upgrades).

Jamie's plan provides a way for users to opt out of automated upgrades. That should meet the needs of members who want more control over the upgrade process.

I've done lots of drupal and wordpress upgrades (five tonight, as a matter of fact!). Only a very small number have ever gone bad, but that's still more than zero. My main concern is having a bad automated update take down a large number of member sites within a short period of time. File backups + mysqldump (or timing upgrades after nightly backups) could help mitigate this risk.

My own upgrades are usually preceded by mysqldump + git status (to ensure that all files are checked into source control). That makes it fairly easy to rewind things.
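
That pre-upgrade routine could be scripted roughly like this. This is only a sketch: the database name and site path are caller-supplied placeholders, and in practice the dump should land somewhere outside the web root:

```shell
#!/bin/sh
# Hypothetical pre-upgrade snapshot: dump the database, then refuse
# to proceed if the site's working tree has uncommitted changes.

snapshot_site() {
  site="$1"  # site web root, assumed to be a git working tree
  db="$2"    # mysql database name
  stamp=$(date +%Y%m%d-%H%M%S)
  # In practice the dump should go somewhere outside the web root.
  if ! mysqldump "$db" > "$site/db-$stamp.sql" 2>/dev/null; then
    echo "dump failed for $db"
  fi
  # Equivalent of the git status check: any uncommitted changes?
  if ! git -C "$site" diff --quiet 2>/dev/null; then
    echo "uncommitted changes in $site"
    return 1
  fi
  echo "snapshot ok: $site"
}
```

With both a database dump and a clean working tree on record, rewinding a bad upgrade is a git checkout plus a mysql restore.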

comment:3 Changed 3 years ago by

Some concerns sent to me by Palante (and posted with permission):

We completely understand the reasoning, but Drupal/WordPress frequently have breaking updates in contrib modules, particularly WordPress. You're going to be fielding a lot of questions from users about why their site's not working today. And every time someone reports a website not working, you're going to HAVE to get involved, because it's potentially your fault.

And in the case of Wordpress themes, a theme update is potentially a disaster. There are a LOT of WP "developers" who don't create child themes, but instead hack directly on a theme. So the first time you update the theme, the site gets badly broken. Yet WP themes are huge masses of executable (and often insecure) PHP; it's very common in the WP world for script kiddies to exploit a theme vulnerability, whereas with Drupal they focus on module ones. If you DO go this route, there are a couple of mitigating steps you can take:

  • Only install security updates. "drush up" has a "--security-only" argument it can take; you might need to script something for wp-cli.
  • Consider limiting what plugins/themes users can install, especially on Wordpress. That's how handles this situation.
  • Write tests that can automatically test for something distinctive about a properly running site, like a footer loads or a donate form can be submitted, etc.
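
The first mitigation above could be wrapped along these lines. A sketch only: `drush pm-update --security-only` is the flag mentioned above; wp-cli has no direct equivalent, so I've substituted `wp core update --minor` (minor core releases are security/maintenance releases) as the closest automated stand-in, and DRY_RUN is an invented knob:

```shell
#!/bin/sh
# Hypothetical security-focused update wrapper. Set DRY_RUN=1 to
# print the command that would run instead of executing it.

security_update() {
  kind="$1"   # "drupal" or "wordpress"
  root="$2"   # site web root
  case "$kind" in
    drupal)
      # drush can restrict itself to security releases.
      cmd="drush --root=$root pm-update --security-only -y" ;;
    wordpress)
      # wp-cli has no --security-only; minor core releases are the
      # safest automated target.
      cmd="wp --path=$root core update --minor" ;;
    *)
      echo "unknown site type: $kind" >&2
      return 1 ;;
  esac
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "$cmd"
  else
    $cmd
  fi
}
```
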

Most hosts handle this situation by simply disabling compromised websites, and informing the users that they must get it fixed before it will be re-enabled. I understand why you're choosing to try to fix it yourselves, but a) we're concerned about the sustainability of that approach with regard to the labor involved, and b) we're worried it promotes an attitude amongst users that they don't need to worry about such things. There's a strong argument for doing less, not more, and simply shutting down a website that's compromised.

comment:4 Changed 3 years ago by

Disabling compromised sites is something we often do - but it's not particularly effective for a couple reasons: often the damage is done (hours of slowness, blacklisted IP address) by the time we disable it OR the group with the site does an ineffectual job fixing the problem and the attack comes back.

It does suggest a different line of development: put resources into auto-detecting and auto-disabling compromised sites. However, that's an on-going project, and it leaves groups without the resources to fix their sites out in the cold. Anyway... this discussion is helpful toward finding a way to do this with the best results.

comment:5 Changed 3 years ago by

Not sure about drupal but upgrades to the wordpress software itself will rarely break things although upgrading a theme or plugins might.

Overall I would much rather debug a broken theme than a site that has already been compromised. The kind of compromises we've seen lately are really nasty, reproducing themselves throughout the site and creating backdoors for the attackers to return even after an initial cleanup and upgrade.

One thing I thought about is the possibility of keeping each site in a git repository so that upgrades could easily be rolled back if they caused problems. This might be complicated from a normal user perspective or create conflicts if the site developers are already using git to track files in the site.

Last edited 3 years ago by

comment:6 Changed 3 years ago by

I wrote most of the thoughts from comment 3, so I'll follow up here.

In response to Jaime: While I generally regard upgrading WP core as safe, there are many instances of plugins that break against a new version of core. In fact, wp-cli itself broke on upgrade to WP Core 4.4. WP plugin pages actually have a widget to indicate whether it works for you with a given point release. My concern is partly that a) while any one upgrade is probably safe, we're talking about a LOT of upgrades here, b) WP Core vulnerabilities are only a part of the problem (I just had a site on WP 4.4.2 pwned last week through an insecure theme), c) since a WP core upgrade MIGHT break critical functionality, it means that any WP ticket submitted has to be investigated in case it was an MFPL upgrade that broke it.

There are companies that provide "Wordpress hosting" instead of "shared hosting" that address these issues - but they charge several times what shared hosting costs. I would strongly suggest researching their technical models before going this route - I think this is still a vast commitment of resources.

On a more positive note, let me offer another alternative: Voluntary resource usage limits. MFPL is near-unique in that they tell members, "Use the resources you need," and so has never implemented, e.g., a "maximum outgoing emails per hour/second", or "maximum CPU cycles" limit. There's theoretically a disk usage limit, but I also know that it's unenforced - thankfully, in my case!

However, I think that the lack of limits has exacerbated this situation - one site can hog resources. Jamie notes that this is one of the reasons that "disabling compromised sites after the fact" isn't as effective. What if there were limits, with reasonable defaults, that a member could increase via a support ticket (or, down the road, the control panel)?

This feels like the equivalent of logging into my own computer as a non-root user and using sudo. I'm not limited in what I can do, but I have to request a bypass on those limits.

If members are bypassing those limits, it could generate warnings, either to the support team, the member, or both. Linode, e.g., does this very well.

This isn't necessarily instead of other approaches suggested here. However, it seems like it would mitigate some of the worst consequences of compromised sites with a minimum of effort.
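
Purely as illustration, a soft-limit check of that shape might look like the following. Everything here is invented (metric names, thresholds, wording); the point is only that the notice informs rather than disables:

```shell
#!/bin/sh
# Invented threshold check in the spirit of the proposal above: warn
# when a member exceeds a soft limit, without disabling anything.

check_threshold() {
  member="$1"; metric="$2"; value="$3"; limit="$4"
  if [ "$value" -gt "$limit" ]; then
    echo "notice: $member exceeded soft limit for $metric ($value > $limit)"
    echo "this is informational, not an accusation of misuse"
  fi
}
```
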

comment:7 Changed 3 years ago by

Also, here's a copy/paste of a sample Linode notification email, so folks see what I'm talking about. Obviously MFPL language would be more user-friendly, but the key sentence is the one that says, "This is not meant as a warning or a representation that you are misusing your resources":

Your Linode, linode440921, has exceeded the notification threshold (1000) for disk io rate by averaging 1107.67 for the last 2 hours. The dashboard for this specific Linode is located at: <>

This is an automated message, please do not respond to this email. If you have questions, please open a support ticket.

You can view or change your alert thresholds under the "Settings" tab of the Linode Manager.

This is not meant as a warning or a representation that you are misusing your resources. We encourage you to modify the thresholds based on your own individual needs.

You may access the members' site at <>.

comment:8 follow-up: Changed 3 years ago by

Given takethestreets' comments, let's spend some time exploring alternatives.

The goal is to better control compromises to our WordPress and Drupal sites. I think there are three strategies:

  • Prevention. The auto-upgrade proposal only addresses this strategy. Other options in this strategy include libapache2-mod-security2. Any others?
  • Detection. By detection I mean finding out as soon as possible when a site is compromised. We currently have some mechanisms in place (such as alerts when the mail queue fills up, when a user injects more than a certain number of email messages into the queue, or when a user exceeds their allocation of PHP processes). takethestreets suggests another one: alerting members when they go over limits.
  • Resolution. Unfortunately, turning off the web site isn't a great option. Perhaps if we had better detection and automated the disabling of sites, it's a possibility - but a very impersonal one. Under this category - I think Jaime's suggestion of using git is a potentially good one. I'd like to explore that a bit more. Here's what I had in mind:
    • As root, run git init for every directory in a web folder that appears to be a Drupal or WordPress site.
    • Use --git-dir=/home/members/<name>/sites/<site>/.red/git/web/path.git - so we don't pollute their directory with our .git repository (also by running as root we prevent the attacker from modifying the git repo)
    • Configure our repo with git config core.worktree /home/members/<name>/sites/<site>/web - that is necessary when git-dir is not the working directory.
    • We could create a bash wrapper called mf-git that will detect the directory we are in - and call git with --git-dir set appropriately.
    • Carefully craft a .gitignore file. We could really screw ourselves if we git add a bunch of huge media files.
    • Auto-detect if we are drupal or wordpress and then git add the directories that PHP files belong in
    • Every night, run git commit -a to capture any changes.
    • Now, when we have a site that is compromised, we can:
      • Use git diff to try to find if core files are compromised and how
      • Search for all files in other directories that end in .php
      • remove/clean compromised files
      • use wp-cli/drush to upgrade
      • run additional commands to check database, ~/.ssh, prompt to update db password and user passwd, etc.
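
The repository-setup steps above could look roughly like this. A sketch under assumptions: the paths are the ticket's placeholders, the committer identity is made up, and I've used `git add -A` plus a `.gitignore` (rather than adding PHP directories one by one) so new files get captured too:

```shell
#!/bin/sh
# Hypothetical out-of-tree git setup: the repository metadata lives
# under .red/ (owned by root) while the member's web directory is the
# work tree, so nothing named .git appears in their site and the
# attacker (running as the member, not root) can't rewrite history.

init_site_repo() {
  gitdir="$1"   # e.g. /home/members/<name>/sites/<site>/.red/git/web/path.git
  worktree="$2" # e.g. /home/members/<name>/sites/<site>/web
  mkdir -p "$gitdir"
  GIT_DIR="$gitdir" git init -q
  # Needed so plain `git --git-dir=...` calls (or an mf-git wrapper)
  # know where the work tree is.
  git --git-dir="$gitdir" config core.worktree "$worktree"
  git --git-dir="$gitdir" --work-tree="$worktree" add -A
  git --git-dir="$gitdir" --work-tree="$worktree" \
      -c user.email=root@localhost -c user.name=mfpl \
      commit -qm "initial snapshot"
}

nightly_commit() {
  gitdir="$1"; worktree="$2"
  git --git-dir="$gitdir" --work-tree="$worktree" add -A
  # commit exits nonzero when nothing changed; that's fine.
  git --git-dir="$gitdir" --work-tree="$worktree" \
      -c user.email=root@localhost -c user.name=mfpl \
      commit -qm "nightly snapshot" || true
}
```

The proposed mf-git wrapper would just detect which site the current directory belongs to and call git with the matching --git-dir.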

If cleaning up compromised sites weren't so tedious, that alone would make a huge difference.

The only parts this approach doesn't resolve are the window of vulnerability between compromise and detection, and the danger of undetected compromises. The steady crop of Linux kernel privilege escalation bugs makes both quite dangerous.

comment:9 in reply to: ↑ 8 Changed 3 years ago by

  • Prevention. The auto-upgrade proposal only addresses this strategy. Other options in this strategy include libapache2-mod-security2. Any others?

mod_security is very common on shared hosts. Some configurations break CiviCRM. Expect some support tickets about this, but overall it's worthwhile IMO. By comparison, the other options I can think of (reverse proxies, web application firewalls, etc.) feel like too much work for not enough additional gain.
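
The usual fix for the CiviCRM breakage is a per-path exclusion. A hedged sketch: the `/civicrm` path pattern and the conf-file location are assumptions, while `SecRuleEngine Off` is ModSecurity's standard per-scope kill switch:

```shell
#!/bin/sh
# Writes a hypothetical Apache/ModSecurity exclusion that turns the
# rule engine off for CiviCRM paths. The target path is supplied by
# the caller; on Debian it might be
# /etc/apache2/conf-available/modsec-civicrm.conf.

write_civicrm_exclusion() {
  cat > "$1" <<'EOF'
<LocationMatch "^/civicrm">
    SecRuleEngine Off
</LocationMatch>
EOF
}
```

A narrower alternative is SecRuleRemoveById for just the rules that misfire, which keeps the rest of the protection in place.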

  • Resolution. Unfortunately, turning off the web site isn't a great option.

I think there's a discussion to be had about what the expectation of users is. Education can help here.

Most shared hosts expect users to maintain their own site. However, shared hosts are getting their lunch eaten by Wix, SquareSpace, etc., because those services don't require that level of expertise.

While I'm against automatic updates in general, I could see an argument for offering a choice between a managed and an unmanaged implementation. Make people choose between a locked-down WordPress/Drupal that's auto-updated but limited (no custom themes, only whitelisted plugins), or the freedom to install plugins/themes, along with the understanding that failure to keep them updated WILL result in a disabled site.

Perhaps if we had better detection and automated the disabling of sites, it's a possibility - but a very impersonal one. Under this category - I think Jaime's suggestion of using git is a potentially good one. I'd like to explore that a bit more. Here's what I had in mind:

This is a very good suggestion IMO. I have an excellent .gitignore for WordPress that excludes media files, known backup locations, etc. It's the one WPEngine uses; I got it from a migration I did, and I assume it's covered by the GPL as a derivative work of WordPress. I'll attach it here.

However, note that many exploits will inject a PHP remote shell app into a node/page in the CMS - and it's not practical to version control the database.

Changed 3 years ago by

.gitignore for Wordpress

comment:10 Changed 3 years ago by

We are revisiting this topic again. I will be looking into modsecurity and experimenting with the proposed git strategy. We should also finally find a way to install wp-cli even though it is not in the Debian repos.

comment:11 Changed 3 years ago by

wp-cli seems easy to download and update using composer.
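
For the record, a composer-based install could look like this. Hedged: the package name (wp-cli/wp-cli) and composer's global bin path both vary between versions, so the sketch probes the two common locations:

```shell
#!/bin/sh
# Hypothetical wp-cli install via composer. The global bin directory
# moved between composer releases, so check both common locations.

composer_bin_dir() {
  if [ -d "$HOME/.config/composer/vendor/bin" ]; then
    echo "$HOME/.config/composer/vendor/bin"
  else
    echo "$HOME/.composer/vendor/bin"
  fi
}

install_wp_cli() {
  composer global require wp-cli/wp-cli
  PATH="$(composer_bin_dir):$PATH"
  export PATH
  wp --version   # sanity check
}
```
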

comment:12 Changed 3 years ago by

I updated #11601. Since I am using drush installed from composer (stable) rather than from Debian, I would prefer the auto-upgrade script to use the same one.

I know you will ask why I use a drush that is not in Debian for Drupal 7. That is because I tried the Debian version and it was not working; I don't remember precisely why, but I would guess it had to do with updating translations.

For Drupal 8 there is no choice: if I remember right, the drush in Debian 8 (jessie) is simply useless for it.

comment:13 Changed 3 years ago by

See #11878 - same problem with mediawiki sites installed via the control panel.

comment:14 Changed 3 years ago by

Now that drush and wp-cli are installed and up to date on moshes, I'm working on an opt-in approach to auto-upgrading sites. I haven't yet written any code, but I have written a help file that documents how I would expect it to work. Comments welcome!

comment:15 Changed 3 years ago by

I have a first go at this in puppet - it should go out to all moshes next time we sign a tag (currently it is installed on ossie).

It won't run for any site that doesn't have the right file in /home/members/*/sites/*/.red/web-app-security, and it currently only works for Drupal modules.


Last edited 18 months ago by

comment:16 Changed 18 months ago by

  • Keywords web-app-security added
