Dries Buytaert: Acquia retrospective 2023

At the beginning of every year, I publish a retrospective that looks back at the previous year at Acquia. I also discuss the changing dynamics in our industry, focusing on Content Management Systems (CMS) and Digital Experience Platforms (DXP).

If you'd like, you can read all of my retrospectives for the past 15 years: 2022, 2021, 2020, 2019, 2018, 2017, 2016, 2015, 2014, 2013, 2012, 2011, 2010, 2009.

Resilience and growth amid market turbulence

At the beginning of 2023, interest rates were 4.5%. Technology companies, investors, and PE firms were optimistic, anticipating modest growth. However, as inflation persisted and central banks raised rates more than expected, this optimism dwindled.

The first quarter also saw a regional bank crisis, notably the fall of Silicon Valley Bank, which many tech firms, including Acquia, relied on. Following these events, the market's pace slowed, and the early optimism shifted to a more cautious outlook.

Despite these challenges, Acquia thrived. We marked our 16th consecutive year of revenue growth, achieved record renewal rates, and continued our five-year trend of rising profitability. 2023 was another standout year for Acquia.

One of our main objectives for 2023 was to expand our platform through M&A. However, tighter credit lending, valuation discrepancies, and economic uncertainty complicated these efforts. By the end of 2023, with the public markets rebounding, the M&A landscape showed slight improvement.

In November, we announced Acquia's plan to acquire Monsido, a platform for improving website accessibility, content quality, SEO, privacy, and performance. The acquisition closed last Friday. I'm excited about expanding the value we offer to our customers and look forward to welcoming Monsido's employees to Acquia.

Working towards a safer, responsible and inclusive digital future


Looking ahead to 2024, I anticipate these will be the dominant trends in the CMS and DXP markets:

  • Converging technology ecosystems: MACH and Jamstack are evolving beyond their original approaches. As a result, we'll see their capabilities converge with one another, and with those of traditional CMSes. I wrote extensively about this in Jamstack and MACH's journey towards traditional CMS concepts.
  • Navigating the cookie-less future: Marketers will need to navigate a cookie-less future. This means organizations will depend more and more on data they collect from their own digital channels (websites, newsletters, video platforms, etc).
  • Digital decentralization: The deterioration of commercial social media platforms has been a positive development in my opinion. I anticipate users will continue to reduce their time on these commercial platforms. The steady shift towards open, decentralized alternatives like Mastodon, Nostr, and personal websites is a welcome trend.
  • Growth in digital accessibility: The importance of accessibility is growing and will become even more important in 2024 as organizations prepare for enforcement of the European Accessibility Act in 2025. This trend isn't just about responding to legislation; it's about making sure digital experiences are inclusive to everyone, including individuals with disabilities.
  • AI's impact on digital marketing and websites: As people start getting information directly from Artificial Intelligence (AI) tools, organic website traffic will decline. Just like with the cookie-less future, organizations will need to focus more on growing their own digital channels with exclusive and personalized content.
  • AI's impact on website building: We'll witness AI's evolution from assisting in content production to facilitating the building of applications. Instead of laboriously piecing together landing pages or campaigns with a hundred clicks, users will simply be able to guide the process with AI prompts. AI will evolve to become the new user interface for complex tasks.
  • Cybersecurity prioritization: As digital landscapes expand, so do vulnerabilities. People and organizations will become more protective of their personal and privacy data, and will demand greater control over the sharing and storage of their information. This means a growing focus on regulation, more strict compliance rules, automatic software updates, AI-driven monitoring and threat detection, passwordless authentication, and more.
  • Central content and data stores: Organizations are gravitating more and more towards all-in-one platforms that consolidate data and content. This centralization enables businesses to better understand and anticipate customer needs, and deliver better, personalized customer experiences.

While some of these trends suggest a decline in the importance of traditional websites, other trends point towards a positive future for them. On one hand, the rise of AI in information gathering will decrease the need for traditional websites. On the other hand, the decline of commercial social media and the shift to a cookie-less future suggest that websites will continue to be important, perhaps even more so.

What I like most about many of these trends is that they are shaping a more intuitive, inclusive, and secure digital future. Their impact on end-users will be profound, making every interaction more personal, accessible, and secure.

However, I suspect the ways in which we do digital marketing will need to change quite a bit. Marketing teams will need to evolve how they generate leads. They'll have to use privacy-friendly methods to develop strong customer relationships and offer more value than what AI tools provide.

This means getting closer to customers with content that is personal and relevant. The use of intent data, first-party data and predictive marketing for determining the "next best actions" will continue to grow in importance.

It also means that more content may transition into secure areas such as newsletters, members-only websites, or websites that tailor content dynamically for each user, where it can't be mined by AI tools.

All this bodes well for CMSes, Customer Data Platforms (CDPs), personalization software, Account-Based Marketing (ABM), etc. By utilizing these platforms, marketing teams can better engage with individuals and offer more meaningful experiences. Acquia is well-positioned based on these trends.

Reaffirming our DXP strategy, with a focus on openness

On a personal front, my title expanded from CTO to CTO & Chief Strategy Officer. Since Acquia's inception, I've always played a key role in shaping both our technology and business strategies. This title change reflects my ongoing responsibilities.

Until 2018, Acquia mainly focused on CMS. In 2018, we made a strategic shift from being a leader in CMS to becoming a leader in DXP. We have greatly expanded our product portfolio since then. Today, Acquia's DXP includes CMS, Digital Asset Management (DAM), Customer Data Platform (CDP), Marketing Automation, Digital Experience Optimization, and more. We've been recognized as leaders in DXP by analyst firms including Gartner, GigaOm, Aragon Research and Omdia.

As we entered 2023, we felt we had successfully executed our big 2018 strategy shift. With my updated title, I spearheaded an effort to revise our corporate strategy and figure out what comes next. The result was that we reaffirmed our commitment to our core DXP market, with the goal of creating the best "Open DXP" in the market.

We see "Open" as a key differentiator. As part of our updated strategy, we explicitly defined what "Open" means to us. While this topic deserves a blog post on its own, I will touch on it here.

Being "Open" means we actively promote integrations with third-party vendors. When you purchase an Acquia product, you're not just buying a tool; you're also buying into a technology ecosystem.

However, our definition of "Open" extends far beyond mere integrations. It's also about creating an inclusive environment where everyone is empowered to participate and contribute to meaningful digital experiences in a safe and secure manner. Our updated strategy, while still focused on the DXP ecosystem, champions empowerment, inclusivity, accessibility, and safety.

A slide from our strategy presentation, summarizing our definition of an Open DXP. The definition is the cornerstone of Acquia's "Why"-statement.

People who have followed me for a while know that I've long advocated for an Open Web, promoting inclusivity, accessibility, and safety. It's inspiring to see Acquia fully embrace these principles, a move I hope will inspire not just me, but our employees, customers, and partners too. It's not just a strategy; it's a reflection of our core values.

It probably doesn't come as a surprise that our updated strategy aligns with the trends I outlined above, many of which also point towards a safer, more responsible, and inclusive digital future. Our enthusiasm for the Monsido acquisition is also driven by these core principles.

Needless to say, our strategy update is about much more than a commitment to openness, but that commitment drives many of our strategic decisions. Here are a few key examples to illustrate our direction.

  • Expanding into the mid-market: Acquia has primarily catered to the enterprise and upper mid-market sectors. We're driven by the belief that an open platform, dedicated to inclusivity, accessibility, and safety, enhances the web for everyone. Our commitment to contributing to a better web is motivating us to broaden our reach, making expanding into the mid-market a logical strategic move.
  • Expanding partnerships, empowering co-creation: Our partnership program is well-established with Drupal, and we're actively expanding partnerships for Digital Asset Management (DAM), CDP, and marketing automation. We aim to go beyond a standard partner program by engaging more deeply in co-creation with our partners, similar to what we do in the Open Source community. The goal is to foster an open ecosystem where everyone can contribute to developing customer solutions, embodying our commitment to empowerment and collaboration. We've already launched a marketplace in 2023, Acquia Exchange, featuring more than 100 co-created solutions, with the goal of expanding to 500 by the end of 2024.
  • Becoming an AI-fueled organization: In 2023, we launched numerous AI features, and we anticipate introducing even more in 2024. Acquia already adheres to responsible AI principles. This aligns with our definition of "Open", emphasizing accountability and safety for the AI systems we develop. We want to continue to be a leader in this space.

Stronger together

We've always been very focused on our greatest asset: our people. This year, we welcomed exceptional talent across the organization, including two key additions to our Executive Leadership Team (ELT): Tarang Patel, leading Corporate Development, and Jennifer Griffin Smith, our Chief Market Officer. Their expertise has already made a significant impact.

Eight awards for "Best Places to Work 2023" in various categories such as IT, mid-size workplaces, female-friendly environments, wellbeing, and appeal for millennials, across India, the UK, and the USA.

In 2023, we dedicated ourselves to redefining and enhancing the Acquia employee experience, committing daily to its principles through all our programs. This focus, along with our strong emphasis on diversity, equity, and inclusion (DEI), has cultivated a culture of exceptional productivity and collaboration. As a result, we've seen not just record-high employee retention rates but also remarkable employee satisfaction and engagement. Our efforts have earned us various prestigious "Best Place to Work" awards.

Customer-centric excellence, growth, and renewals

Our commitment to delivering great customer experiences is evident in the awards and recognition we received, many of which are influenced by customer feedback. These accolades include recognition on platforms like TrustRadius and G2, as well as the prestigious 2023 CODiE Award.

Acquia received 32 awards for leadership in various categories across its products.

As mentioned earlier, we delivered consistently excellent, and historically high, renewal rates throughout 2023. It means our customers are voting with their feet (and wallets) to stay with Acquia.

Furthermore, we achieved remarkable growth within our customer base with record rates of expansion growth. Not only did customers choose to stay with Acquia, they chose to buy more from Acquia as well.

To top it all off, we experienced a substantial increase in the number of customers who were willing to serve as references for Acquia, endorsing our products and services to prospects.

Many of the notable customer stories for 2023 came from some of the world's most recognizable organizations, including:

  • Nestlé: With thousands of brand sites hosted on disparate technologies, Nestlé brand managers had difficulty maintaining, updating, and securing brand assets globally. Not only was it a challenge to standardize and govern the multiple brands, it was costly to maintain resources for each technology and inefficient with work being duplicated across sites. Today, Nestlé uses Drupal, Acquia Cloud Platform and Acquia Site Factory to face these challenges. Nearly all (90%) of Nestlé sites are built using Drupal. Across the brand's entire portfolio of sites, approximately 60% are built on a shared codebase – made possible by Acquia and Acquia Site Factory.
  • Novartis: The Novartis product sites in the U.S. were fragmented across multiple platforms, with different approaches and capabilities and varying levels of technical debt. This led to uncertainty in the level of effort and time to market for new properties. Today, the Novartis platform built with Acquia and EPAM has become a model within the larger Novartis organization for how a design system can seamlessly integrate with Drupal to build a decoupled front end. The new platform allows Novartis to create new or move existing websites in a standardized design framework, leading to more efficient development cycles and more functionality delivered in each sprint.
  • US Drug Enforcement Administration: The U.S. DEA wanted to create a campaign site to increase public awareness regarding the increasing danger of fake prescription pills laced with fentanyl. Developed with Tactis and Acquia, the campaign website One Pill Can Kill highlights the lethal nature of fentanyl. The campaign compares real and fake pills through videos featuring parents and teens who share their experiences with fentanyl. It also provides resources and support for teens, parents, and teachers and discusses the use of Naloxone in reversing the effects of drug overdose.
  • Cox Automotive: Cox Automotive uses first-party data through Acquia Campaign Studio for better targeted marketing. With their Automotive Marketing Platform (AMP) powered by Acquia, they access real-time data and insights, delivering personalized messages at the right time. The results? Dealers using AMP see consumers nine times more likely to purchase within 45 days and a 14-fold increase in sales gross profit ROI.

I'm proud of outcomes like these: they show how valuable our DXP is to our customers.

Product innovation

In 2023, we remained focused on solving problems for our current and future customers. We use both quantitative and qualitative data to assess areas of opportunities and run hypothesis-driven experiments with design prototypes, hackathons, and proofs-of-concept. This approach has led to hundreds of improvements across our products, both by our development teams and through partnerships. Below are some key innovations that have transformed the way our customers operate:

  • We released many AI features in 2023, including AI assistants and automations for Acquia DAM, Acquia CDP, Acquia Campaign Studio, and Drupal. This includes: AI assist during asset creation in Drupal and Campaign Studio, AI-generated descriptions for assets and products in DAM, auto-tagging in DAM with computer vision, next best action/channel predictions in CDP, ML-powered customer segmentation in CDP, and much more.
  • Our Drupal Acceleration Team (DAT) worked with the Drupal community on a major upgrade of Drupal's field UI, which makes content modeling significantly faster and more user-friendly. We also open sourced Acquia Migrate Accelerate as part of the run-up to the Drupal 7 community end-of-life in January 2025. Finally, DAT contributed to a number of major ongoing initiatives, including Project Browser, Automatic Updates, Page Building, Recipes, and more, which will appear in later versions of Drupal.
  • We launched a new trial experience for Acquia Cloud Platform, our Drupal platform. Organizations can now explore Acquia's hosting and developer tools to see how their Drupal applications perform on our platform.
  • Our Kubernetes-native Drupal hosting platform backed by AWS, Acquia Cloud Next, continued to roll out to more customers. Over two-thirds of our customers are now enjoying Acquia Cloud Next, which provides them with the highest levels of performance, self-healing, and dynamic scaling. We've seen a 50% decrease in critical support tickets since transitioning customers to Acquia Cloud Next, all while maintaining an impressive uptime record of 99.99% or higher.
  • Our open source marketing automation tool, Acquia Campaign Studio, is now running on Acquia Cloud Next as its core processing platform. This consolidation benefits everyone: it streamlines and accelerates innovation for us while enabling our customers to deliver targeted and personalized messages at a massive scale.
  • We decided to make Mautic a completely independent Open Source project, letting it grow and change freely. We've remained the top contributor ever since.
  • Marketers can now easily shape the Acquia CDP data model using low-code tools, custom attributes and custom calculations features. This empowers all Acquia CDP users, regardless of technical skill, to explore new use cases.
  • Acquia CDP's updated architecture enables nearly limitless elasticity, which allows the platform to scale automatically based on demand. We put this to the test during Black Friday, when our CDP efficiently handled billions of events. Our new architecture has led to faster, more consistent processing times, with speeds improving by over 70%.
  • With Snowflake as Acquia's data backbone, Acquia customers can now collaborate on their data within their organization and across business units. Customers can securely share and access governed data while preserving privacy, offering them advanced data strategies and solutions.
  • Our DAM innovation featured 47 updates and 13 new integrations. These updates included improved Product Information Management (PIM) functionality, increased accessibility, and a revamped search experience. Leveraging AI, we automated the generation of alt-text and product descriptions, which streamlines content management. Additionally, we established three partnerships to enhance content creation, selection, and distribution in DAM: Moovly for AI-driven video creation and translation, Vizit for optimizing content based on audience insights, and Syndic8 for distributing visual and product content across online commerce platforms.
  • With the acquisition of Monsido and new partnerships with VWO (tools for optimizing website engagement and conversions) and Conductor (SEO platform), Acquia DXP now offers an unparalleled suite of tools for experience optimization. Acquia already provided the best tools to build, manage and operate websites. With these additions, Acquia DXP also offers the best solution for experience optimization.
  • Acquia also launched Acquia TV, a one-stop destination for all things digital experience. It features video podcasts, event highlights, product breakdowns, and other content from a diverse collection of industry voices. This is a great example of how we use our own technology to connect more powerfully with our audiences. It's something our customers strive to do every day.

Conclusion

In spite of the economic uncertainties of 2023, Acquia had a remarkable year. We achieved our objectives, overcame challenges, and delivered outstanding results. I'm grateful to be in the position that we are in.

Our achievements in 2023 underscore the importance of putting our customers first and nurturing exceptional teams. Alongside effective management and financial responsibility, these elements fuel ongoing growth, irrespective of economic conditions.

Of course, none of our results would be possible without the support of our customers, our partners, and the Drupal and Mautic communities. Last but not least, I'm grateful for the dedication and hard work of all Acquians who made 2023 another exceptional year.

Dries Buytaert: The little HTTP Header Analyzer that could

HTTP headers play a crucial part in making your website fast and secure. For that reason, I often inspect HTTP headers to troubleshoot caching problems or review security settings.
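
If you want to peek at raw headers yourself before reaching for a tool, curl gets you most of the way there. A quick sketch (example.com is just a placeholder):

[code bash]# Fetch only the response headers of a page:
$ curl -sI https://example.com

# Follow redirects and print the headers of every hop:
$ curl -sIL https://example.com[/code]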

The complexity of the HTTP standard and the challenge of remembering all the best practices led me to develop an HTTP Header Analyzer four years ago.

It is pretty simple: enter a URL, and the tool will analyze the headers sent by your webserver, CMS or web application. It then explains these headers, offers a score, and suggests possible improvements.

For a demonstration, click https://dri.es/headers?url=https://www.reddit.com. As the URL suggests, it will analyze the HTTP headers of Reddit.com.

I began this as a weekend project in the early days of COVID, seeing it as just another addition to my toolkit. At the time, I simply added it to my projects page but never announced or mentioned it on my blog.

So why write about it now? Because I happened to check my log files and, lo and behold, the little scanner that could has clocked more than 5 million scans, averaging over 3,500 scans a day.

So four years and five million scans later, I'm finally announcing it to the world!

If you haven't tried my HTTP header analyzer, check it out. It's free, easy to use, requires no sign-up, and is built to help improve your website's performance and security.

The crawler works with all websites, but naturally, I added some special checks for Drupal sites.

I don't have any major plans for the crawler. At some point, I'd like to move it to its own domain, as it feels a bit out of place as part of my personal website. But for now, that isn't a priority.

For the time being, I'm open to any feedback or suggestions and will commit to making any necessary corrections or improvements.

It's rewarding to know that this tool has made thousands of websites faster and safer! It's also a great reminder to share your work, even in the simplest way – you never know the impact it could have.

Dries Buytaert: Satoshi Nakamoto's Drupal adventure

Martti Malmi, an early contributor to the Bitcoin project, recently shared a fascinating piece of internet history: an archive of private emails between himself and Satoshi Nakamoto, Bitcoin's mysterious founder.

The identity of Satoshi Nakamoto remains one of the biggest mysteries in the technology world. Despite extensive investigations, speculative reports, and numerous claims over the years, the true identity of Bitcoin's creator(s) is still unknown.

Martti Malmi released these private conversations in reaction to a court case focused on the true identity of Satoshi Nakamoto and the legal entitlements to the Bitcoin brand and technology.

The emails provide some interesting details into Bitcoin's early days, and might also provide some new clues about Satoshi's identity.

Satoshi and Martti worked together on a variety of different things, including the relaunch of the Bitcoin website. Their goal was to broaden public understanding and awareness of Bitcoin.

And to my surprise, the emails reveal they chose Drupal as their preferred CMS! (Thanks to Jeremy Andrews for making me aware.)


The emails detail Satoshi's hands-on involvement, from installing Drupal themes, to configuring Drupal's .htaccess file, to exploring Drupal's multilingual capabilities.


At some point in the conversation, Satoshi expressed reservations about Drupal's forum module.


For what it is worth, this proves that I'm not Satoshi Nakamoto. Had I been, I'd have picked Drupal right away, and I would never have questioned Drupal's forum module.

Jokes aside, as Drupal's Founder and Project Lead, learning about Satoshi's use of Drupal is a nice addition to Drupal's rich history. Almost every day, I'm inspired by the unexpected impact Drupal has.

Dries Buytaert: Acquia a Leader in the 2024 Gartner Magic Quadrant for Digital Experience Platforms


For the fifth year in a row, Acquia has been named a Leader in the Gartner Magic Quadrant for Digital Experience Platforms (DXP).

Acquia received this recognition from Gartner based on both the completeness of product vision and ability to execute.

Central to our vision and execution is a deep commitment to openness. Leveraging Drupal, Mautic and open APIs, we've built the most open DXP, empowering customers and partners to tailor our platform to their needs.

Our emphasis on openness extends to ensuring our solutions are accessible and inclusive, making them available to everyone. We also prioritize building trust through data security and compliance, integral to our philosophy of openness.

We're proud to be included in this report and thank our customers and partners for their support and collaboration.

Mandatory disclaimer from Gartner

Gartner, Magic Quadrant for Digital Experience Platforms, Irina Guseva, Jim Murphy, Mike Lowndes, John Field - February 21, 2024.

This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request from Acquia.

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Gartner is a registered trademark and service mark of Gartner and Magic Quadrant is a registered trademark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and are used herein with permission. All rights reserved.

Dries Buytaert: Two years later: is my Web3 website still standing?


Two years ago, I launched a simple Web3 website using IPFS (InterPlanetary File System) and ENS (Ethereum Name Service). Back then, Web3 tools were getting a lot of media attention and I wanted to try it out.

Since I set up my Web3 website two years ago, I basically forgot about it. I didn't update it or pay attention to it for two years. But now that we hit the two-year mark, I'm curious: is my Web3 website still online?

At that time, I also stated that Web3 was not fit for hosting modern web applications, except for a small niche: static sites requiring high resilience and infrequent content updates.

I was also curious to explore the evolution of Web3 technologies to see if they became more applicable for website hosting.

My original Web3 experiment

In my original blog post, I documented the process of setting up what could be called the "Hello World" of Web3 hosting. I stored an HTML file on IPFS, ensured its availability using "pinning services", and made it accessible using an ENS domain.

For those with a basic understanding of Web3, here is a summary of the steps I took to launch my first Web3 website two years ago:

  1. Purchased an ENS domain name: I used a crypto wallet with Ethereum to acquire dries.eth through the Ethereum Name Service, a decentralized alternative to the traditional DNS (Domain Name System).
  2. Uploaded an HTML File to IPFS: I uploaded a static HTML page to the InterPlanetary File System (IPFS), which involved running my own IPFS node and utilizing various pinning services like Infura, Fleek, and Pinata. These pinning services ensure that the content remains available online even when my own IPFS node is offline.
  3. Accessed the website: I confirmed that my website was accessible through IPFS-compatible browsers.
  4. Mapped my webpage to my domain name: As the last step, I linked my IPFS-hosted site to my ENS domain dries.eth, making the web page accessible under an easy domain name.

If the four steps above are confusing to you, I recommend reading my original post. It is over 2,000 words, complete with screenshots and detailed explanations of the steps above.

Checking the pulse of various Web3 services

As the first step in my check-up, I wanted to verify if the various services I referenced in my original blog post are still operational.

The results, displayed in the table below, are really encouraging: Ethereum, ENS, IPFS, Filecoin, Infura, Fleek, Pinata, and web3.storage are all operational.

The two main technologies – ENS and IPFS – are both actively maintained and developed. This indicates that Web3 technology has built a robust foundation.

| Service | Description | Still around in February 2024? |
| --- | --- | --- |
| ENS | A blockchain-based naming protocol offering DNS for Web3, mapping domain names to Ethereum addresses. | Yes |
| IPFS | A peer-to-peer protocol for storing and sharing data in a distributed file system. | Yes |
| Filecoin | A blockchain-based storage network and cryptocurrency that incentivizes data storage and replication. | Yes |
| Infura | Provides tools and infrastructure to manage content on IPFS and other tools for developers to connect their applications to blockchain networks and deploy smart contracts. | Yes |
| Fleek | A platform for building websites using IPFS and ENS. | Yes |
| Pinata | Provides tools and infrastructure to manage content on IPFS, and more recently Farcaster applications. | Yes |
| web3.storage | Provides tools and infrastructure to manage content on IPFS with support for Filecoin. | Yes |

Is my Web3 website still up?

Seeing all these Web3 services operational is encouraging, but the ultimate test is to check if my Web3 webpage, dries.eth, remained live. It's one thing for these services to work, but another for my site to function properly. Here is what I found in a detailed examination:

  1. Domain ownership verification: A quick check on etherscan.io confirmed that dries.eth is still registered to me. Relief!
  2. ENS registrar access: Using my crypto wallet, I could easily log into the ENS registrar and manage my domains. I even successfully renewed dries.eth as a test.
  3. IPFS content availability: My webpage is still available on IPFS, thanks to having pinned it two years ago. Logging into Fleek and Pinata, I found my content on their admin dashboards.
  4. Web3 and ENS gateway access: I can visit dries.eth using a Web3 browser, and also via an IPFS-compatible ENS gateway like https://dries.eth.limo/ – a privacy-centric service, new since my initial blog post.

The verdict? Not only are these Web3 services still operational, but my webpage also continues to work!

This is particularly noteworthy given that for two years I haven't logged in to these services, performed any maintenance, or paid any hosting fees (the pinning services I'm using have a free tier).

Visit my Web3 page yourself

For anyone interested in visiting my Web3 page (perhaps your first Web3 visit?), there are several methods to choose from, each with a different level of Web3-ness.

  • Use a Web3-enabled browser: Browsers such as Brave and Opera offer built-in ENS and IPFS support. They can resolve ENS addresses and interpret IPFS addresses, making it as easy to navigate IPFS content as if it were traditional web content served over HTTP or HTTPS.
  • Install a Web3 browser extension: If your favorite browser does not support Web3 out of the box, adding a browser extension like MetaMask can help you access Web3 applications. MetaMask works with Chrome, Firefox, and Edge. It enables you to use .eth domains for doing Ethereum transactions or for accessing content on IPFS.
  • Access through an ENS gateway: For those looking for the simplest way to access Web3 content without installing anything new, using an ENS gateway, such as eth.limo, is the easiest method. This gateway maps ENS domains to DNS, offering direct navigation to Web3 sites like mine at https://dries.eth.limo/. It serves as a simple bridge between Web2 (the conventional web) and Web3.

Streamlining content updates with IPNS

In my original post, I highlighted various challenges, such as the limitations for hosting dynamic applications, the cost of updates, and the slow speed of these updates. Although these issues still exist, my initial analysis was conducted with an incomplete understanding of the available technology. I want to delve deeper into these limitations, and refine my previous statements.

Some of these challenges stem from the fact that IPFS operates as a "content-addressed network". Unlike traditional systems that use URLs or file paths to locate content, IPFS uses a unique hash of the content itself. This hash is used to locate and verify the content, but also to facilitate decentralized storage.

While the principle of addressing content by a hash is super interesting, it also introduces some complications: whenever content is updated, its hash changes, making it tricky to link to the updated content. Specifically, every time I updated my Web3 site's content, I had to update my ENS record and pay a transaction fee on the Ethereum network.
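
To make this concrete, here is roughly what that looks like with the IPFS command line tools. The file name is arbitrary and the hashes are placeholders, not real CIDs:

[code bash]# Adding a file to IPFS returns a hash (CID) derived from its content.
$ ipfs add index.html
added <CID of version 1> index.html

# Change even a single character in the file and add it again:
$ ipfs add index.html
added <CID of version 2> index.html

# The CID is different, so anything that linked to version 1 keeps
# pointing at the old content until the link itself is updated.[/code]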

At the time, I wasn't familiar with the InterPlanetary Name System (IPNS). IPNS, not to be confused with IPFS, addresses this challenge by assigning a mutable name to content on IPFS. You can think of IPNS as providing an "alias" or "redirect" for IPFS addresses: the IPNS address always stays the same and points to the latest IPFS address. It effectively eliminates the necessity of updating ENS records with each content change, cutting down on expenses and making the update process more automated and efficient.

To leverage IPNS, you have to take the following steps:

  1. Upload your HTML file to IPFS and receive an IPFS hash.
  2. Publish this hash to IPNS, creating an IPNS hash that directs to the latest IPFS hash.
  3. Link your ENS domain to this IPNS hash. Since the IPNS hash remains constant, you only need to update your ENS record once.

Without IPNS, updating content involved:

  1. Update the HTML file.
  2. Upload the revised file to IPFS, generating a new IPFS hash.
  3. Update the ENS record with the new IPFS hash, which costs some Ether and can take a few minutes.

With IPNS, updating content involves:

  1. Update the HTML file.
  2. Upload the revised file to IPFS, generating a new IPFS hash.
  3. Update the IPNS record to reference this new hash, which is free and almost instant.

Although IPNS is a faster and more cost-effective approach compared to the original method, it still carries a level of complexity. There is also a minor runtime delay due to the extra redirection step. However, I believe this tradeoff is worth it.

Updating my Web3 site to use IPNS

With this newfound knowledge, I decided to use IPNS for my own site. I generated an IPNS hash using both the IPFS desktop application (see screenshot) and IPFS' command line tools:

[code bash]$ ipfs name publish /ipfs/bafybeibbkhmln7o4ud6an4qk6bukcpri7nhiwv6pz6ygslgtsrey2c3o3q
> Published to k51qzi5uqu5dgy8mzjtcqvgr388xjc58fwprededbb1fisq1kvl34sy4h2qu1a: /ipfs/bafybeibbkhmln7o4ud6an4qk6bukcpri7nhiwv6pz6ygslgtsrey2c3o3q[/code]

The IPFS Desktop application showing my index.html file with an option to 'Publish to IPNS'.

After generating the IPNS hash, I was able to visit my site in Brave using the IPFS protocol at ipfs://bafybeibbkhmln7o4ud6an4qk6bukcpri7nhiwv6pz6ygslgtsrey2c3o3q, or via the IPNS protocol at ipns://k51qzi5uqu5dgy8mzjtcqvgr388xjc58fwprededbb1fisq1kvl34sy4h2qu1a.

My Web3 site in Brave using IPNS.

Next, I updated the ENS record for dries.eth to link to my IPNS hash. This change cost me 0.0011 ETH (currently $4.08 USD), as shown in the Etherscan transaction. Once the transaction was processed, dries.eth began directing to the new IPNS address.

A transaction confirmation on the ENS website, showing a successful update for dries.eth.

Rolling back my IPNS record in ENS

Unfortunately, my excitement was short-lived. A day later, dries.eth stopped working. IPNS records, it turns out, need to be kept alive – a lesson learned the hard way.

While IPFS content can be persisted through "pinning", IPNS records require periodic "republishing" to remain active. Essentially, the network's Distributed Hash Table (DHT) may drop IPNS records after a certain amount of time, typically 24 hours. To prevent an IPNS record from being dropped, the owner must "republish" it before the DHT forgets it.

I found out that the pinning services I use – Dolphin, Fleek and Pinata – don't support IPNS republishing. Looking into it further, it turns out few IPFS providers do.

During my research, I discovered Filebase, a small Boston-based company with fewer than five employees that I hadn't come across before. Interestingly, they provide both IPFS pinning and IPNS republishing. However, to pin my existing HTML file and republish its IPNS hash, I had to subscribe to their service at a cost of $20 per month.

Faced with the challenge of keeping my IPNS hash active, I found myself at a crossroads: either fork out $20 a month for a service like Filebase that handles IPNS republishing for me, or take on the responsibility of running my own IPFS node.

Of course, the whole point of decentralized storage is that people run their own nodes. However, considering the scope of my project – a single HTML file – the effort of running a dedicated node seemed disproportionate. I'm also running my IPFS node on my personal laptop, which is not always online. Maybe one day I'll try setting up a dedicated IPFS node on a Raspberry Pi or similar setup.
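
If I ever do, keeping the IPNS record alive would come down to republishing it periodically from that always-on node. A rough sketch of the idea (the hash is a placeholder, and it assumes the ipfs daemon is running):

[code bash]# Republish the IPNS record before the DHT drops it; records default
# to a 24-hour lifetime.
$ ipfs name publish --lifetime 24h /ipfs/<your IPFS hash>

# A daily cron entry on the node could automate the refresh, e.g.:
# 0 3 * * * /usr/local/bin/ipfs name publish /ipfs/<your IPFS hash>[/code]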

Ultimately, I decided to switch my ENS record back to the original IPFS link. This change, documented in the Etherscan transaction, cost me 0.002 ETH (currently $6.88 USD).

Although IPNS works, or can work, it just didn't work for me. Despite the setback, the whole experience was a great learning journey.

(Update: A couple of days after publishing this blog post, someone kindly recommended https://dwebservices.xyz/, claiming their free tier includes IPNS republishing. Although I haven't personally tested it yet, a quick look at their about page suggests they might be a promising solution.)

Web3 remains too complex for most people

Over the past two years, Web3 hosting hasn't disrupted the mainstream website hosting market. Despite the allure of Web3, mainstream website hosting is simple, reliable, and meets the needs of nearly all users.

Although a significant upgrade of the Ethereum network – its transition to a Proof of Stake (PoS) consensus mechanism – reduced energy consumption by over 99%, environmental considerations, especially the carbon footprint associated with blockchain technologies, continue to create challenges for the widespread adoption of Web3. (Note: ENS operates on the blockchain but IPFS does not.)

As I went through the check-up, I discovered islands of innovation and progress. Wallets and ENS domains got easier to use. However, the overall process of creating a basic website with IPFS and ENS remains relatively complex compared to the simplicity of Web2 hosting.

The need for a SQL-compatible Web3 database

Modern web applications like those built with Drupal and WordPress rely on a technology stack that includes a file system, a domain name system (e.g. DNS), a database (e.g. MariaDB or MySQL), and a server-side runtime environment (e.g. PHP).

While IPFS and ENS offer decentralized alternatives for the first two, the equivalents for databases and runtime environments are less mature. This limits the types of applications that can easily move from Web2 to Web3.

A major breakthrough would be the development of a decentralized database that is compatible with SQL, but currently, this does not seem to exist. Ensuring data integrity and confidentiality across multiple nodes without a central authority, while meeting the throughput demands of modern web applications, may simply be too hard a problem to solve.

After all, blockchains, as decentralized databases, have been in development for over a decade, yet they lack support for the SQL language and fall short of the speed and efficiency required for dynamic websites.

The need for a distributed runtime

Another critical component for modern websites is the runtime environment, which executes the server-side logic of web applications. Traditionally, this has been the domain of PHP, Python, Node.js, Java, etc.

WebAssembly (WASM) could emerge as a potential solution. It makes for an interesting decentralized option, as WASM binaries can be hosted on IPFS.

However, when WASM runs on the client-side – i.e. in the browser – it can't deliver the full capabilities of a server-side environment. This limitation makes it challenging to fully replicate traditional web applications.

So for now, Web3's applications are quite limited. While it's possible to host static websites on IPFS, dynamic applications requiring database interactions and server-side processing are difficult to transition to Web3.

Bridging the gap between Web2 and Web3

In the short term, the most likely path forward is blending decentralized and traditional technologies. For example, a website could store its static files on IPFS while relying on traditional Web2 solutions for its dynamic features.

Looking to the future, initiatives like OrbitDB's peer-to-peer database, which integrates with IPFS, show promise. However, OrbitDB lacks compatibility with SQL, meaning applications would need to be redesigned rather than simply transferred.

Web3 site hosting remains niche

Even the task of hosting static websites, which don't need a database or server-side processing, is relatively niche within the Web3 ecosystem.

As I wrote in my original post: "In its current state, IPFS and ENS offer limited value to most website owners, but tremendous value to a very narrow subset of all website owners." This observation remains accurate today.

IPFS and ENS stand out for their strengths in censorship resistance and reliability. However, for the majority of users, the convenience and adequacy of Web2 for hosting static sites often outweigh these benefits.

Broader acceptance of new technologies like Web3 hinges on either discovering new mass-market use cases or significantly enhancing the user experience for existing ones. Web3 has not found a universal application or surpassed Web2 in user experience.

The popularity of SaaS platforms underscores this point. They dominate not because they're the most resilient or robust options, but because they're the most convenient. Despite the benefits of resilience and autonomy offered by Web3, most individuals opt for less resilient but more convenient SaaS solutions.

Conclusion

Despite the billions invested in Web3 and notable progress, its use for website hosting still has significant limitations.

The main challenge for the Web3 community is to either develop new, broadly appealing applications or significantly improve the usability of existing technologies.

Website hosting falls into the category of existing use cases.

Unfortunately, Web3 remains mostly limited to static websites, as it does not yet offer robust alternatives to SQL databases and server-side runtime.

Even within the limited scope of static websites, improvements to the user experience have been marginal, focused on individual parts of the technology stack. The overall end-to-end experience remains complex.

Nonetheless, the fact that my Web3 page is still up and running after two years is encouraging, showing the robustness of the underlying technology, even if its current use remains limited. I've grown quite fond of IPFS, and I hope to do more useful experiments with it in the future.

All things considered, I don't see Web3 taking the website hosting world by storm any time soon. That said, over time, Web3 could become significantly more attractive and functional. All in all, keeping an eye on this space is definitely fun and worthwhile.

Dries Buytaert: Drupal adventures in Japan and Australia

Next week, I'm traveling to Japan and Australia. I've been to both countries before and can't wait to return – they're among my favorite places in the world.

My goal is to connect with the local Drupal community in each country, discussing the future of Drupal, learning from each other, and collaborating.

I'll also be connecting with Acquia's customers and partners in both countries, sharing our vision, strategy and product roadmap. As part of that, I look forward to spending some time with the Acquia teams as well – about 20 employees in Japan and 35 in Australia.

I'll present at a Drupal event in Tokyo the evening of March 14th at Yahoo! Japan.

While in Australia, I'll be attending Drupal South, held at the Sydney Masonic Centre from March 20-22. I'm excited to deliver the opening keynote on the morning of March 20th, where I'll delve into Drupal's past, present, and future.

I look forward to being back in Australia and Japan, reconnecting with old friends and the local communities.

Dries Buytaert: Building my own temperature and humidity monitor

Last fall, we toured the Champagne region in France, famous for its sparkling wines. We explored the ancient, underground cellars where Champagne undergoes its magical transformation from grape juice to sparkling wine. These cellars, often 30 meters deep and kilometers long, maintain a constant temperature of around 10-12°C, providing the perfect conditions for aging and storing Champagne.

25 meters underground in a champagne tunnel, which often stretches for miles/kilometers.

After sampling various Champagnes, we returned home with eight cases to store in our home's basement. However, unlike those deep cellars, our basement is just a few meters deep, prompting a simple question that sent me down a rabbit hole: how does our basement's temperature compare?

Rather than just buying a thermometer, I decided to build my own "temperature monitoring system" using open hardware and custom-built software. After all, who needs a simple solution when you can spend evenings tinkering with hardware, sensors, wires and writing your own software? Sometimes, more is more!

The basic idea is this: track the temperature and humidity of our basement every 15 minutes and send this information to a web service. This web service analyzes the data and alerts us if our basement becomes too cold or warm.

I launched this monitoring system around Christmas last year, so it's been running for nearly three months now. You can view the live temperature and historical data trends at https://dri.es/sensors. Yes, publishing our basement's temperature online is a bit quirky, but it's all in good fun.

A screenshot of my basement temperature dashboard.

So far, the temperature in our basement has been ideal for storing wine. However, I expect it will change during the summer months.

In the rest of this blog post, I'll share how I built the client that collects and sends the data, as well as the web service backend that processes and visualizes that data.

Hardware used

For this project, I bought:

  1. Adafruit ESP32-S3 Feather: A microcontroller board with Wi-Fi and Bluetooth capabilities, serving as the central processing unit of my project.
  2. Adafruit SHT4x sensor: A high-accuracy temperature and humidity sensor.
  3. 3.7v 500mAh battery: A small and portable power source.
  4. STEMMA QT / Qwiic JST SH 4-pin cable: To connect the sensor to the board without soldering.

The total hardware cost was $32.35 USD. I like Adafruit a lot, but it's worth noting that their products often come at a higher cost. You can find comparable hardware for as little as $10-15 elsewhere. Adafruit's premium cost is understandable, considering how much valuable content they create for the maker community.

An ESP32-S3 development board (middle) linked to a Sensirion SHT41 temperature and humidity sensor (left) and powered by a battery pack (right).

Client code for Adafruit ESP32-S3 Feather

I developed the client code for the Adafruit ESP32-S3 Feather using the Arduino IDE, a widely used platform for developing and uploading code to Arduino-compatible boards.

The code measures temperature and humidity every 15 minutes, connects to WiFi, and sends this data to https://dri.es/sensors, my web service endpoint.

One of my goals was to create a system that could operate for a long time without needing to recharge the battery. The ESP32-S3 supports a "deep sleep" mode in which it powers down almost all of its functions, except for the clock and memory. By placing the ESP32-S3 into deep sleep mode between measurements, I was able to significantly reduce power consumption.

Now that you understand the high-level design goals, including deep sleep mode, I'll share the complete client code below. It includes detailed code comments, making it self-explanatory.

[code c]#include "Adafruit_SHT4x.h"
#include "Adafruit_MAX1704X.h"
#include "WiFiManager.h"
#include "ArduinoJson.h"
#include "HTTPClient.h"

// The Adafruit_SHT4x sensor is a high-precision, temperature and humidity
// sensor with an I2C interface.
Adafruit_SHT4x sht4 = Adafruit_SHT4x();

// The Adafruit ESP32-S3 Feather comes with a built-in MAX17048 LiPoly / LiIon
// battery monitor. The MAX17048 provides accurate monitoring of the battery's
// voltage. Utilizing the Adafruit library, not only helps us obtain the raw
// voltage data from the battery cell, but also converts this data into a more
// intuitive battery percentage or charge level. We will pass on the battery
// percentage to the web service endpoint, which can visualize it or use it to
// send notifications when the battery needs recharging.
Adafruit_MAX17048 maxlipo;

// The setup() function is used to initialize the device's hardware and
// communications. It's executed once at startup. Here, we begin serial
// communication, initialize sensors, connect to Wi-Fi, and send initial
// data.
void setup() {
  Serial.begin(115200);

  // Wait for the serial connection to establish before proceeding further.
  // This is crucial for boards with native USB interfaces. Without this loop,
  // initial output sent to the serial monitor is lost. This code is not
  // needed when running on battery.
  //delay(1000);

  // Generates a unique device ID from a segment of the MAC address.
  // Since the MAC address is permanent and unchanged after reboots,
  // this guarantees the device ID remains consistent. To achieve a
  // compact ID, only a specific portion of the MAC address is used,
  // specifically the range between 0x10000 and 0xFFFFF. This range
  // translates to a hexadecimal string of a fixed 5-character length,
  // giving us roughly 1 million unique IDs. This approach balances
  // uniqueness with compactness.
  uint64_t chipid = ESP.getEfuseMac();
  uint32_t deviceValue = ((uint32_t)(chipid >> 16) & 0x0FFFFF) | 0x10000;
  char device[6]; // 5 characters for the hex representation + the null terminator.
  sprintf(device, "%x", deviceValue); // Use '%x' for lowercase hex letters

  // Initialize the SHT4x sensor:
  if (sht4.begin()) {
    Serial.println(F("SHT4 temperature and humidity sensor initialized."));
    sht4.setPrecision(SHT4X_HIGH_PRECISION);
    sht4.setHeater(SHT4X_NO_HEATER);
  }
  else {
    Serial.println(F("Could not find SHT4 sensor."));
  }

  // Initialize the MAX17048 sensor:
  if (maxlipo.begin()) {
    Serial.println(F("MAX17048 battery monitor initialized."));
  }
  else {
    Serial.println(F("Could not find MAX17048 battery monitor!"));
  }

  // Insert a short delay to ensure the sensors are ready and their data is stable:
  delay(200);

  // Retrieve temperature and humidity data from SHT4 sensor:
  sensors_event_t humidity, temp;
  sht4.getEvent(&humidity, &temp);

  // Get the battery percentage and calibrate if it's over 100%:
  float batteryPercent = maxlipo.cellPercent();
  batteryPercent = (batteryPercent > 100) ? 100 : batteryPercent;

  WiFiManager wifiManager;

  // Uncomment the following line to erase all saved WiFi credentials.
  // This can be useful for debugging or reconfiguration purposes.
  // wifiManager.resetSettings();

  // This WiFi manager attempts to establish a WiFi connection using known
  // credentials, stored in RAM. If it fails, the device will switch to Access
  // Point mode, creating a network named "Temperature Monitor". In this mode,
  // connect to this network, navigate to the device's IP address (default IP
  // is 192.168.4.1) using a web browser, and a configuration portal will be
  // presented, allowing you to enter new WiFi credentials. Upon submission,
  // the device will reboot and try connecting to the specified network with
  // these new credentials.
  if (!wifiManager.autoConnect("Temperature Monitor")) {
    Serial.println(F("Failed to connect to WiFi ..."));

    // If the device fails to connect to WiFi, it will restart to try again.
    // This approach is useful for handling temporary network issues. However,
    // in scenarios where the network is persistently unavailable (e.g. router
    // down for more than an hour, consistently poor signal), the repeated
    // restarts and WiFi connection attempts can quickly drain the battery.
    ESP.restart();

    // Mandatory delay to allow the restart process to initiate properly:
    delay(1000);
  }

  // Send collected data as JSON to the specified URL:
  sendJsonData("https://dri.es/sensors", device, temp.temperature, humidity.relative_humidity, batteryPercent);

  // WiFi consumes significant power so turn it off when done:
  WiFi.disconnect(true);

  // Enter deep sleep for 15 minutes. The ESP32-S3's deep sleep mode minimizes
  // power consumption by powering down most components, except the RTC. This
  // mode is efficient for battery-powered projects where constant operation
  // isn't needed. When the device wakes up after the set period, it runs
  // setup() again, as the state isn't preserved.
  Serial.println(F("Going to sleep for 15 minutes ..."));
  ESP.deepSleep(15 * 60 * 1000000); // 15 mins * 60 secs/min * 1,000,000 μs/sec.
}

bool sendJsonData(const char* url, const char* device, float temperature, float humidity, float battery) {
  StaticJsonDocument<200> doc;

  // Round floating-point values to one decimal place for efficient data
  // transmission. This approach reduces the JSON payload size, which is
  // important for IoT applications running on batteries.
  doc["device"] = device;
  doc["temperature"] = String(temperature, 1);
  doc["humidity"] = String(humidity, 1);
  doc["battery"] = String(battery, 1);

  // Serialize JSON to a string:
  String jsonData;
  serializeJson(doc, jsonData);

  // Initialize an HTTP client with the provided URL:
  HTTPClient httpClient;
  httpClient.begin(url);
  httpClient.addHeader("Content-Type", "application/json");

  // Send a HTTP POST request:
  int httpCode = httpClient.POST(jsonData);

  // Close the HTTP connection:
  httpClient.end();

  // Print debug information to the serial console:
  Serial.println("Sent '" + jsonData + "' to " + String(url) + ", return code " + httpCode);

  return (httpCode == 200);
}

void loop() {
  // The ESP32-S3 resets and runs setup() after waking up from deep sleep,
  // making this continuous loop unnecessary.
}[/code]

Further optimizing battery usage

When I launched my thermometer around Christmas 2023, the battery was at 88%. Today, it is at 52%. Some quick math suggests it's using approximately 12% of its battery per month. Given its current rate of usage, it needs recharging about every 8 months.

Connecting to the WiFi and sending data are by far the main power drains. To extend the battery life, I could send updates less frequently than every 15 minutes, only send them when there is a change in temperature (which is often unchanged or only different by 0.1°C), or send batches of data points together. Any of these methods would work for my needs, but I haven't implemented them yet.

Alternatively, I could hook the microcontroller up to a 5V power adapter, but where is the fun in that? It goes against the project's "more is more" principle.

Handling web service requests

With the client code running on the ESP32-S3 and sending sensor data to https://dri.es/sensors, the next step is to set up a web service endpoint to receive this incoming data.

As I use Drupal for my website, I implemented the web service endpoint in Drupal. Drupal uses Symfony, a popular PHP framework, for large parts of its architecture. This combination offers an easy but powerful way to implement web services, similar to what you'd find in other modern server-side web frameworks like Laravel or Django.

Here is what my Drupal routing configuration looks like:

[code yaml]sensors.sensor_data:
  path: '/sensors'
  methods: [POST]
  defaults:
    _controller: '\Drupal\sensors\Controller\SensorMonitorController::postSensorData'
  requirements:
    _access: 'TRUE'[/code]

The above configuration directs Drupal to send POST requests made to https://dri.es/sensors to the postSensorData() method of the SensorMonitorController class.

The implementation of this method handles request authentication, validates the JSON payload, and saves the data to a MariaDB database table. Pseudo-code:

[code php]public function postSensorData(Request $request): JsonResponse {
  $content = $request->getContent();
  $data = json_decode($content, TRUE);

  // Validate the JSON payload:
  …

  // Authenticate the request:
  …

  $device = DeviceFactory::getDevice($data['device']);
  if ($device) {
    $device->recordSensorEvent($data);
  }

  return new JsonResponse(['message' => 'Thank you!']);
}[/code]
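
The validation and authentication steps are intentionally elided above. Purely as a hypothetical illustration (this is not the actual implementation), the payload check could be factored into a small helper on the same controller that returns an error response when something is off and NULL otherwise:

[code php]// Hypothetical helper, not the code behind dri.es/sensors: returns an error
// response for an invalid payload, or NULL when the payload looks usable.
private function validatePayload(?array $data): ?JsonResponse {
  if (!is_array($data)) {
    return new JsonResponse(['message' => 'Invalid JSON.'], 400);
  }
  foreach (['device', 'temperature', 'humidity', 'battery'] as $key) {
    if (!isset($data[$key])) {
      return new JsonResponse(['message' => "Missing field: $key."], 400);
    }
  }
  foreach (['temperature', 'humidity', 'battery'] as $key) {
    if (!is_numeric($data[$key])) {
      return new JsonResponse(['message' => "Field '$key' must be numeric."], 400);
    }
  }
  return NULL;
}[/code]

Authentication could similarly check a shared secret sent by the ESP32-S3 in a request header, but how you do that depends entirely on your own setup.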

For testing your web service, you can use tools like cURL:

[code bash]$ curl -X POST -H "Content-Type: application/json" -d '{"device":"0xdb123", "temperature":21.5, "humidity":42.5, "battery":90.0}' https://localhost/sensors[/code]

While cURL is great for quick tests, I use PHPUnit tests for automated testing in my CI/CD workflow. This ensures that everything keeps working, even when upgrading Drupal, Symfony, or other components of my stack.
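
As a rough starting point, here is what such a test could look like. This is a sketch only: it assumes a custom module named "sensors" (matching the routing configuration above), the class name is made up, and the assertion will depend on how your controller handles authentication.

[code php]<?php

namespace Drupal\Tests\sensors\Functional;

use Drupal\Tests\BrowserTestBase;

/**
 * Hypothetical functional test for the /sensors endpoint.
 */
class SensorEndpointTest extends BrowserTestBase {

  protected $defaultTheme = 'stark';

  protected static $modules = ['sensors'];

  public function testPostSensorData(): void {
    // POST a JSON payload similar to what the ESP32-S3 sends.
    $response = $this->getHttpClient()->request('POST', $this->baseUrl . '/sensors', [
      'json' => [
        'device' => '0xdb123',
        'temperature' => 21.5,
        'humidity' => 42.5,
        'battery' => 90.0,
      ],
      'http_errors' => FALSE,
    ]);

    $this->assertEquals(200, $response->getStatusCode());
  }

}[/code]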

Storing sensor data in a database

The primary purpose of $device->recordSensorEvent() in SensorMonitorController::postSensorData() is to store sensor data into a SQL database. So, let's delve into the database design.

My main design goals for the database backend were:

  1. Instead of storing every data point indefinitely, only keep the daily average, minimum, maximum, and the latest readings for each sensor type across all devices.
  2. Make it easy to add new devices and new sensors in the future. For instance, if I decide to add a CO2 sensor for our bedroom one day (a decision made in my head but not yet pitched to my better half), I want that to be easy.

To this end, I created the following MariaDB table:

[code sql]CREATE TABLE sensor_data (
  date DATE,
  device VARCHAR(255),
  sensor VARCHAR(255),
  avg_value DECIMAL(5,1),
  min_value DECIMAL(5,1),
  max_value DECIMAL(5,1),
  min_timestamp DATETIME,
  max_timestamp DATETIME,
  readings SMALLINT NOT NULL,
  UNIQUE KEY unique_stat (date, device, sensor)
);[/code]

A brief explanation for each field:

  • date: The date for each sensor reading. It doesn't include a time component as we aggregate data on a daily basis.
  • device: The ID of the device providing the sensor data, such as 'basement' or 'bedroom'.
  • sensor: The type of sensor, such as 'temperature', 'humidity' or 'co2'.
  • avg_value: The average value of the sensor readings for the day. Since individual readings are not stored, a rolling average is calculated and updated with each new reading using the formula: avg_value = avg_value + (new_value - avg_value) / new_total_readings. For example, if the current average is 21.0 after 2 readings and a new reading of 22.5 arrives, the new average becomes 21.0 + (22.5 - 21.0) / 3 = 21.5. This method can accumulate minor rounding errors, but simulations show these are negligible for this use case.
  • min_value and max_value: The daily minimum and maximum sensor readings.
  • min_timestamp and max_timestamp: The exact moments when the minimum and maximum values for that day were recorded.
  • readings: The number of readings (or measurements) taken throughout the day, which is used for calculating the rolling average.

In essence, the recordSensorEvent() method needs to determine if a record already exists for the current date. Depending on this determination, it will either insert a new record or update the existing one.

In Drupal this process is streamlined with the merge() function in Drupal's database layer. This function handles both inserting new data and updating existing data in one step.

[code php]private function updateDailySensorEvent(string $sensor, float $value): void {
  $timestamp = \Drupal::time()->getRequestTime();
  $date = date('Y-m-d', $timestamp);
  $datetime = date('Y-m-d H:i:s', $timestamp);

  $connection = Database::getConnection();

  $result = $connection->merge('sensor_data')
    ->keys([
      'device' => $this->id,
      'sensor' => $sensor,
      'date' => $date,
    ])
    ->fields([
      'avg_value' => $value,
      'min_value' => $value,
      'max_value' => $value,
      'min_timestamp' => $datetime,
      'max_timestamp' => $datetime,
      'readings' => 1,
    ])
    ->expression('avg_value', 'avg_value + ((:new_value - avg_value) / (readings + 1))', [':new_value' => $value])
    ->expression('min_value', 'LEAST(min_value, :value)', [':value' => $value])
    ->expression('max_value', 'GREATEST(max_value, :value)', [':value' => $value])
    ->expression('min_timestamp', 'IF(LEAST(min_value, :value) = :value, :timestamp, min_timestamp)', [':value' => $value, ':timestamp' => $datetime])
    ->expression('max_timestamp', 'IF(GREATEST(max_value, :value) = :value, :timestamp, max_timestamp)', [':value' => $value, ':timestamp' => $datetime])
    ->expression('readings', 'readings + 1')
    ->execute();
}[/code]

Here is what the query does:

  • It checks if a record for the current sensor and date exists.
  • If not, it creates a new record with the sensor data, including the initial average, minimum, maximum, and latest value readings, along with the timestamp for these values.
  • If a record does exist, it updates the record with the new sensor data, adjusting the average value, and updating minimum and maximum values and their timestamps if the new reading is a new minimum or maximum.
  • The function also increments the count of readings.

For those not using Drupal, similar functionality can be achieved with MariaDB's INSERT ... ON DUPLICATE KEY UPDATE command, which allows for the same conditional insert or update logic based on whether the specified unique key already exists in the table.

Here are example queries, extracted from MariaDB's General Query Log to help you get started:

[code sql]INSERT INTO sensor_data
  (device, sensor, date, min_value, min_timestamp, max_value, max_timestamp, readings)
VALUES
  ('0xdb123', 'temperature', '2024-01-01', 21, '2024-01-01 00:00:00', 21, '2024-01-01 00:00:00', 1);

UPDATE sensor_data
SET
  min_value = LEAST(min_value, 21),
  min_timestamp = IF(LEAST(min_value, 21) = 21, '2024-01-01 00:00:00', min_timestamp),
  max_value = GREATEST(max_value, 21),
  max_timestamp = IF(GREATEST(max_value, 21) = 21, '2024-01-01 00:00:00', max_timestamp),
  readings = readings + 1
WHERE device = '0xdb123' AND sensor = 'temperature' AND date = '2024-01-01';[/code]
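
For illustration, here is a rough sketch of that single-statement approach using plain PHP and PDO instead of Drupal's database layer. The table and column names follow the schema above, but the connection details, placeholder names, and assignment order are my own assumptions rather than code from this project:

[code php]<?php

// Hypothetical, non-Drupal sketch of the same upsert with PDO and MariaDB's
// INSERT ... ON DUPLICATE KEY UPDATE. Connection details are placeholders.
$pdo = new PDO('mysql:host=localhost;dbname=sensors', 'user', 'password');

$sql = "
  INSERT INTO sensor_data
    (date, device, sensor, avg_value, min_value, max_value,
     min_timestamp, max_timestamp, readings)
  VALUES
    (:date, :device, :sensor, :avg_value, :min_value, :max_value,
     :min_timestamp, :max_timestamp, 1)
  ON DUPLICATE KEY UPDATE
    avg_value = avg_value + ((VALUES(avg_value) - avg_value) / (readings + 1)),
    -- Update the timestamps before min_value and max_value, so the
    -- comparisons below still see the previous minimum and maximum.
    min_timestamp = IF(VALUES(min_value) < min_value, VALUES(min_timestamp), min_timestamp),
    max_timestamp = IF(VALUES(max_value) > max_value, VALUES(max_timestamp), max_timestamp),
    min_value = LEAST(min_value, VALUES(min_value)),
    max_value = GREATEST(max_value, VALUES(max_value)),
    readings = readings + 1";

$now = date('Y-m-d H:i:s');
$pdo->prepare($sql)->execute([
  ':date' => date('Y-m-d'),
  ':device' => '0xdb123',
  ':sensor' => 'temperature',
  ':avg_value' => 21.5,
  ':min_value' => 21.5,
  ':max_value' => 21.5,
  ':min_timestamp' => $now,
  ':max_timestamp' => $now,
]);[/code]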

Generating graphs

With the data securely stored in the database, the next step involved generating the graphs. To accomplish this, I wrote some custom PHP code that generates Scalable Vector Graphics (SVGs).

Given that this blog post is already quite long, I'll spare you the details. For now, those curious can use the 'View source' feature in their web browser to examine the SVGs on the thermometer page.
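
If you are curious about the general approach, here is a deliberately tiny, hypothetical sketch of turning a series of daily averages into an SVG polyline. This is not the code behind the graphs on this site; axes, labels, and styling are all left out.

[code php]<?php

// Hypothetical sketch only: render daily averages as a bare SVG polyline.
function renderTemperatureSvg(array $dailyAverages, int $width = 600, int $height = 200): string {
  $min = min($dailyAverages);
  $max = max($dailyAverages);
  $range = max($max - $min, 0.1); // Avoid division by zero on flat data.
  $step = $width / max(count($dailyAverages) - 1, 1);

  $points = [];
  foreach (array_values($dailyAverages) as $i => $value) {
    $x = round($i * $step, 1);
    // Invert the y-axis: SVG's origin is the top-left corner.
    $y = round($height - (($value - $min) / $range) * $height, 1);
    $points[] = "$x,$y";
  }

  return '<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 ' . $width . ' ' . $height . '">'
    . '<polyline fill="none" stroke="currentColor" points="' . implode(' ', $points) . '"/>'
    . '</svg>';
}

// Example usage with made-up daily averages:
echo renderTemperatureSvg(['2024-01-01' => 20.5, '2024-01-02' => 21.0, '2024-01-03' => 19.8]);[/code]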

Conclusion

It's funny how a visit to the Champagne cellars in France sparked an unexpected project. Choosing to build a thermometer rather than buying one allowed me to dive back into an old passion for hardware and low-level software.

I also like taking ownership of my own data and software. It gives me a sense of control and creativity.

As Drupal's project lead, using Drupal for an Internet-of-Things (IoT) backend brought me unexpected joy. I just love the power and flexibility of open-source platforms like Drupal.

As a next step, I hope to design and 3D print a case for my thermometer, something I've never done before. And as mentioned, I'm also considering integrating additional sensors. Stay tuned for updates!

PreviousNext: How can free open source CMSes remain competitive with enterprise clients?

With Drupal now heavily used in the enterprise market by very large organisations, much of its direct competition is from well-funded proprietary products. From the perspective of my role on the Drupal Association board, I gave a talk at FOSDEM in February 2024 on the strategies and initiatives the Drupal community is starting to put in place to remain competitive in the enterprise market and how these approaches can be shared by other open source projects. 

by Owen Lansbury / 12 March 2024

This video recording was originally published on the FOSDEM website.

Drupal has historically had no centralised product management or marketing, let alone ANY coordinated budget! For comparison, Adobe spends around USD$2.7bn annually on product development, sales and marketing for its Experience Cloud product suite. 

In the talk, I discuss Drupal's recent recognition as a Digital Public Good and the way that the Drupal community is highly motivated by providing world-class software for free to anyone who wants to use it, promoting values of freedom, inclusion, participation and empowerment. The Drupal Association recently released a manifesto that defines the Drupal project's commitment to the Open Web, but in order to fulfil this mission, Drupal needs to be successful as a product in the open market.

Since Drupal 8 was released in 2015, it has been specifically targeted at building "ambitious digital experiences." While this has resulted in an overall drop in Drupal installs as smaller sites move to SaaS platforms, the Drupal economy is robust, with an estimated USD$3 billion spent on Drupal-related projects each year.

Unlike other open source projects, Drupal doesn’t have a single company doing the majority of the code contribution. The Drupal Association has run on a budget of around $3.5m, or roughly 1/1000th of the revenue being spent on Drupal projects each year.

This was brought into focus for the Drupal Association during COVID, when the primary source of income - running DrupalCon events - required an abrupt rethink. We had to refocus on how Drupal would be both successful and sustainable in the future. This has led us to recently embark on a new strategy, in which the Drupal Association plays a more direct role in both Drupal product innovation and marketing.

Enterprise customers are key to maintaining a healthy ecosystem for a CMS. Their investment flows through to the agencies building, maintaining, supporting, and hosting large-scale projects, providing consistent, repeat income that ultimately benefits our open source community in the form of stable jobs, community funding, and sponsored code contribution. 

Looking more closely at the challenges of succeeding in the enterprise market, how do you get access to, and awareness with, the key decision makers in large organisations, such as the CIO, the CTO and, increasingly, the CMO (Chief Marketing Officer)? They are the people likely to read analyst reports from Gartner and Forrester. While Acquia features as a leader in these reports and relies heavily on Drupal for its platform, Drupal's name recognition is largely absent from them.

Acquia has also had great success with their Engage events that target key decision makers, but it's been a challenge to attract a similar audience to the more community and developer-focused DrupalCon events. 

While the Drupal Association itself has historically had limited relationships with Drupal's large end users, partner agencies who rely on Drupal's open source software for their clients absolutely do have these relationships.

The Drupal Association is in a strong position to provide our agency partners with as much assistance as possible to either retain or win new enterprise clients through any playbook-style information we can provide. For example, do we have a pitch deck on hand to help an agency argue why Drupal is superior to Adobe or Sitecore? Are there pre-packaged product demos that can be consistently updated to highlight new features?

This is an area where we currently fall short in the Drupal community, with most agencies replicating efforts for every new client engagement. It's something we're starting to address with the Drupal Certified Partner program, so that we can harness the strength of hundreds of agency salespeople pitching Drupal to their clients every day. New agencies joining a partner program need to see a clear pathway to building their teams' expertise and being able to sell Drupal to their clients to grow their businesses. The largest global digital agencies have tended to struggle with engaging with open source software communities, so bridging that gap is critical.

The other group of people we need to convince in any large organisation are the people who’ll be using our product - the developers, content editors and systems engineers. C-level decision-makers lean heavily on this group to evaluate and make recommendations about the platforms they should be considering. To influence this group, our product needs to look and function like a modern piece of software, fulfil contemporary requirements, and be quick to download for a working demo.

In terms of where we already clearly win, rapid innovation is the thing that we do very well in the open source world. Maintaining the speed of innovation, though, is an area that has been harder for Drupal as both the software and community have matured. A big philosophical hurdle we’ve faced is the notion of the Drupal Association directing budget to innovation projects when people often have an expectation that contribution is “free”. But contribution has never been free! An individual or company has always borne the cost in personal time or wages. Other big open source projects have absolutely no stigma about funding work with actual money; the Linux Foundation, for example, puts around $160m of funding towards projects each year.

The Drupal community dipped its toe into this model last year with the Pitchburgh contest, which saw $98,000 worth of projects completed in a relatively short amount of time because they had the budget. We’re also in the process of hiring people at the Drupal Association who can facilitate innovation and remove roadblocks to contribution.

Now, all we need is the funding to scale this model up. Imagine if just 1% of the $3bn spent on Drupal-related projects each year went towards funding strategic innovation - that would be a $30m budget to work with!

Similarly, “marketing” Drupal as a product has never been a core competency of the Drupal Association. This is the legacy of being structured as a 501(c)(3) not-for-profit in the USA, where funds are for the “advancement of a charitable cause”. Our charitable cause is ensuring Drupal remains a Digital Public Good that supports the United Nations’ Sustainable Development Goals. But if there isn't positive product awareness of Drupal in the broader market, then market share will slip, and our ability to support the goals around being a Digital Public Good will suffer as a result.

Whether we call it marketing or advocacy, we need to ensure Drupal as a product is commercially successful. We’ve had a Promote Drupal working group within the Drupal community for a number of years that has driven a range of broader marketing initiatives. The Drupal Association has now taken on an active role in this by commissioning a go-to-market strategy targeting the enterprise sector. This will be rolling out in 2024 as funding for specific marketing initiatives becomes available. 

At the cheaper end of the scale, this might include coordinating speakers at non-Drupal tech events or managing positive media coverage. At a higher budget scale, it might include Drupal-branded booths at major tech conferences, like the one we recently built for Web Summit in Lisbon, or running global campaigns to build Drupal product awareness. 

Our other huge advantage as an open source community is the strength and depth of our developer pool. We do encounter a perception issue when it comes to attracting younger developers to our platforms because there are so many shiny new things to play with. Building robust outreach, training, mentoring, certification and professional pathways is the key to maintaining a sustainable developer pool as those of us with 20+ years of experience head towards the other side of middle age.

So, where can you start to help with all of this? 

  1. If you're a professional services company that relies on Drupal for your business, get involved with the Drupal Certified Partner program. This is the fastest way to both contribute to Drupal's innovation as a product and play a direct role in driving adoption.

  2. If you rely on Drupal as your organization's CMS software, become a Supporting Partner and help fund Drupal's sustainability. 

  3. If you're passionate about maintaining the Open Web, the Drupal Association can accept your philanthropic donation.

  4. Send your team members to DrupalCon or a regional DrupalCamp to connect with the community.

This level of engagement will help Drupal maintain its status as the platform of choice for large-scale projects.

Four Kitchens: Custom Drush commands with Drush Generate


Marc Berger

Senior Backend Engineer

Always looking for a challenge, Marc tries to add something new to his toolbox for every project and build — be it a new CSS technology, creating custom APIs, or testing out new processes for development.


Recently, one of our clients had to retrieve some information from their Drupal site during a CI build. They needed to know the internal Drupal path from a known path alias. Common Drush commands don’t provide this information directly, so we decided to write our own custom Drush command. It was a lot easier than we thought it would be! Let’s get started.

Note: This post is based on commands and structure for Drush 12.

While we can write our own Drush command from scratch, let's discuss a tool that Drush already provides us: the drush generate command. Drush 9 added support for generating scaffolding and boilerplate code for many common Drupal development tasks, such as custom modules, themes, services, and plugins. The nice thing about using the drush generate command is that the code it generates conforms to best practices and Drupal coding standards — and some generators even come with examples. You can see all available generators by simply running drush generate without any arguments.

Step 1: Create a custom module

To get started, creating a new custom Drush command in this way requires an existing custom module in the codebase. If one exists, great: you can skip to Step 2 below. If you need a custom module, let's use Drush to generate one:

drush generate module

Drush will ask a series of questions, such as the module name, the package, any dependencies, and whether you want to generate a .module file, README.md, etc. Once the module has been created, enable it. This will help with autocompletion when generating the custom Drush command.

drush en <machine_name_of_custom_module>

Step 2: Create custom Drush command boilerplate

First, make sure you have a custom module where your new custom Drush command will live and that the module is enabled. Next, run the following command to generate some boilerplate code:

drush generate drush:command-file

This command will also ask some questions, the first of which is the machine name of the custom module. If that module is enabled, it will autocomplete the name in the terminal. You can also tell the generator to use dependency injection if you know what services you need to use. In our case, we need to inject the path_alias.manager service. Once generated, the new command class will live here under your custom module:

<machine_name_of_custom_module>/src/Drush/Commands

Let’s take a look at this newly generated code. We will see the standard class structure and our dependency injection at the top of the file:

[code php]<?php

namespace Drupal\custom_drush\Drush\Commands;

use Consolidation\OutputFormatters\StructuredData\RowsOfFields;
use Drupal\Core\StringTranslation\StringTranslationTrait;
use Drupal\Core\Utility\Token;
use Drupal\path_alias\AliasManagerInterface;
use Drush\Attributes as CLI;
use Drush\Commands\DrushCommands;
use Symfony\Component\DependencyInjection\ContainerInterface;

/**
 * A Drush commandfile.
 *
 * In addition to this file, you need a drush.services.yml
 * in root of your module, and a composer.json file that provides the name
 * of the services file to use.
 */
final class CustomDrushCommands extends DrushCommands {

  use StringTranslationTrait;

  /**
   * Constructs a CustomDrushCommands object.
   */
  public function __construct(
    private readonly Token $token,
    private readonly AliasManagerInterface $pathAliasManager,
  ) {
    parent::__construct();
  }

  /**
   * {@inheritdoc}
   */
  public static function create(ContainerInterface $container) {
    return new static(
      $container->get('token'),
      $container->get('path_alias.manager'),
    );
  }[/code]

Note: The generator adds a comment about needing a drush.services.yml file. This requirement is deprecated and will be removed in Drush 13, so you can ignore it if you are using Drush 12. In our testing, this file does not need to be present.

Further down in the new class, we will see some boilerplate example code. This is where the magic happens:

[code php]/**
 * Command description here.
 */
#[CLI\Command(name: 'custom_drush:command-name', aliases: ['foo'])]
#[CLI\Argument(name: 'arg1', description: 'Argument description.')]
#[CLI\Option(name: 'option-name', description: 'Option description')]
#[CLI\Usage(name: 'custom_drush:command-name foo', description: 'Usage description')]
public function commandName($arg1, $options = ['option-name' => 'default']) {
  $this->logger()->success(dt('Achievement unlocked.'));
}[/code]

This new Drush command doesn’t do very much at the moment, but it provides a great jumping-off point. The first thing to note at the top of the function is the set of new PHP 8 attributes that begin with #. These replace the PHP annotations commonly seen when writing custom plugins in Drupal. You can read more about the new PHP attributes.

The attributes tell Drush our custom command's name and description, what arguments it takes (if any), and any aliases it may have.

Step 3: Create our custom command

For our custom command, let’s modify the code so we can get the internal path from a path alias:

[code php]/**
 * Command description here.
 */
#[CLI\Command(name: 'custom_drush:internal-path', aliases: ['intpath'])]
#[CLI\Argument(name: 'pathAlias', description: 'The path alias, must begin with /')]
#[CLI\Usage(name: 'custom_drush:internal-path /path-alias', description: 'Supply the path alias and the internal path will be retrieved.')]
public function getInternalPath($pathAlias) {
  if (!str_starts_with($pathAlias, "/")) {
    $this->logger()->error(dt('The alias must start with a /'));
  }
  else {
    $path = $this->pathAliasManager->getPathByAlias($pathAlias);
    if ($path == $pathAlias) {
      $this->logger()->error(dt('There was no internal path found that uses that alias.'));
    }
    else {
      $this->output()->writeln($path);
    }
  }
  // $this->logger()->success(dt('Achievement unlocked.'));
}[/code]

What we’re doing here is changing the name of the command so it can be called like so:

drush custom_drush:internal-path <path> or via the alias: drush intpath <path>

The <path> is a required argument (such as /my-amazing-page) because of how it is declared in the getInternalPath() method. Given a path alias, the method first checks whether it starts with /. If it does, it performs an additional check to see whether an internal path exists for that alias. If one does, it prints the internal path, e.g., /node/1234. The output and error messages are provided by the output() and logger() methods that come from the inherited DrushCommands class. It's a simple command, but one that helped us automatically set config during a CI job.

Table output

Note the boilerplate code also generated another example below the first — one that will provide output in a table format:

[code php]/**
 * An example of the table output format.
 */
#[CLI\Command(name: 'custom_drush:token', aliases: ['token'])]
#[CLI\FieldLabels(labels: [
  'group' => 'Group',
  'token' => 'Token',
  'name' => 'Name',
])]
#[CLI\DefaultTableFields(fields: ['group', 'token', 'name'])]
#[CLI\FilterDefaultField(field: 'name')]
public function token($options = ['format' => 'table']): RowsOfFields {
  $all = $this->token->getInfo();
  foreach ($all['tokens'] as $group => $tokens) {
    foreach ($tokens as $key => $token) {
      $rows[] = [
        'group' => $group,
        'token' => $key,
        'name' => $token['name'],
      ];
    }
  }
  return new RowsOfFields($rows);
}[/code]

In this example, no argument is required, and it will simply print out the list of tokens in a nice table:

 ------------ ------------------ -----------------------
  Group        Token              Name
 ------------ ------------------ -----------------------
  file         fid                File ID
  node         nid                Content ID
  site         name               Name
  ...          ...                ...

Final thoughts

Drush is a powerful tool, and like many parts of Drupal, it’s expandable to meet different needs. While I shared a relatively simple example to solve a small challenge, the possibilities are open to retrieve all kinds of information from your Drupal site to use in scripting, CI/CD jobs, reporting, and more. And by using the drush generate command, creating these custom solutions is easy, follows best practices, and helps keep code consistent.

Further reading

The post Custom Drush commands with Drush Generate appeared first on Four Kitchens.

The Drop Times: Zoocha: The Drupal Development Specialists in the UK

Discover the journey of Zoocha, a leading Drupal Development Agency renowned for its commitment to innovation, sustainability, and client success. Since its inception in 2009, Zoocha has demonstrated a profound dedication to open-source technology and agile methodologies, earning notable certifications and accolades. With a portfolio boasting collaborations with high-profile clients across various sectors, Zoocha excels in delivering customized Drupal solutions that meet unique client needs.

This article delves into Zoocha's strategies for engaging talent, promoting Drupal among younger audiences, and its ambitious goal to achieve carbon neutrality by 2025. Explore how Zoocha's commitment to quality, security, and environmental sustainability positions it as a trusted partner in the dynamic world of web development.