Aten Design Group: Syncing Drupal and React for a Custom Interactive Map for Tampa International Airport


A new website as easy to navigate as the airport itself

Tampa International Airport consistently ranks #1 in customer satisfaction, thanks in part to how easy it is to navigate. Our goal for the redesign of tampaairport.com was to make their Drupal 10 website just as easy to maneuver. Their team wanted to level up their website’s airport map from simple and static to interactive and dynamic.

React application with Drupal content management

The application's goal was to fetch data from the newly implemented site, providing users with a tool to easily navigate the airport, search for specific locations, and even order food in advance at airport restaurants and cafes. The decision to build a React application stemmed from its performance and ability to integrate with the Drupal site, enabling site editors to consolidate, manage, and edit all content through the Drupal UI. This approach ensured that content updates could occur dynamically without additional development effort, setting the team up to sustain the custom map.

Recognizing that the primary users of this application would be on mobile devices, we worked hard to ensure that all JavaScript libraries and npm packages were lightweight and optimized for mobile performance.
We significantly reduced the overall footprint of the React application before integrating it into the Drupal site by leveraging techniques like code splitting, which separates components that can be loaded independently, and tree shaking, which removes unused code when bundling multiple JavaScript files. This reduced package size ensures a more efficient experience for users accessing the app on their mobile devices.

React Leaflet library

From a technology standpoint, the application is built using React Leaflet, a library that establishes connections between React and the open-source Leaflet JavaScript library. Incorporating Leaflet enables us to display custom images in place of traditional geographical maps within the interface. To dynamically display our SVG maps, we use the ImageOverlay component with map information from a custom API endpoint provided by Drupal. We use memoization at the location level so that when users switch between maps, the application swiftly recalls previously viewed layers, ensuring a seamless and responsive experience.

const mapImage = useMemo(() => {
  // Re-create the overlay only when the SVG source changes.
  return <ImageOverlay url={props.mapSVG} bounds={bounds} />;
}, [props.mapSVG]);

Leaflet provides us with the flexibility to set our own boundaries for the map layer. In this application, we opted for a 1000 by 1000 base grid. Within the Drupal UI, editors have the ability to effortlessly add new map locations by specifying X and Y coordinates, effectively placing locations on the map within the established grid.

Furthermore, editors can easily customize the displayed map icons and manage the inclusion of locations within specific icon groupings, all through the user-friendly provided UI. This streamlined process empowers editors to make dynamic adjustments to the map without the need for additional development support.


Interactive user experience

The application also includes a list view of the map in which users can select specific locations, triggering a seamless navigation experience on the visual map. Each location in the list displays details such as its name, description, business hours, current status, and proximity to airport gates. Upon choosing a location, the application responds by centering the corresponding point on the visual map.

This interactive feature enhances the user experience, allowing travelers to effortlessly explore and locate points of interest without manually searching the entire map. The linkage between the list view and the visual map means users can move seamlessly from browsing a curated list of locations to identifying those places spatially on the map interface.

Notably, the list view was designed with accessibility in mind, ensuring better usability for keyboard and screen reader users. This thoughtful implementation caters to a diverse range of users, making the experience inclusive and user-friendly for all. Check it out for yourself at tampaairport.com.


The collaboration between the Drupal and React technologies allowed us to build a solution that seamlessly meets the needs of both users navigating the maps and the editors on the Tampa International Airport team. Creating this map from scratch was a blast, and the custom application I built could easily be adapted for another client’s map or wayfinding tool.

 

Jennifer Dust

Wim Leers: XB week 26: ComponentSource plugins

Harumi “hooroomoo” Jang and Jesse Baker delivered a strong start to the week by removing the “insert” panel that appeared over the left sidebar, in favor of listing the components inside the left sidebar:

Experience Builder’s (XB) component library now appears in the left sidebar, making components easier to reach. (Previously, a blue “plus” button opened a panel that covered the left sidebar.)
Issue #3482394, image by Harumi.

The XB UI is once again leaping ahead of the back end: the UI for saving compositions of components as “sections” landed, well ahead of the necessary config entity, let alone the needed HTTP API for actually saving those! 👏

Missed a prior week? See all posts tagged Experience Builder.

Goal: make it possible to follow high-level progress by reading ~5 minutes/week. I hope this empowers more people to contribute when their unique skills can best be put to use!

For more detail, join the #experience-builder Slack channel. Check out the pinned items at the top!

Blocks

After a lot of iteration, Dave “longwave” Long, Felix “f.mazeikis” Mazeikis, Ted “tedbow” Bowman, Lee “larowlan” Rowlands and I landed the initial MR to add Blocks support to XB!

Until now, XB was tightly coupled to Single Directory Components (SDC). That’s no longer the case, thanks to the introduction of:

  • ComponentSource plugins: the existing SDC support was refactored into an sdc component source, and the new Block support lives in a block component source
  • XB’s Component config entities now have a source property, referring to one of those source plugins

There are still lots of loose ends, from the pragmatic low-level choice of using hook_block_alter() to automatically create Component config entities, to the rather limiting fact that block plugins’ settings forms do not yet work (the default settings are always used). But the basic infrastructure is there!

This was a huge refactor: 35 files, +1861, −878 is no small diffstat 😅

Grab bag

Week 26 was November 4–November 10, 2024.

Droptica: How to Find and Hire the Best Drupal Company


Drupal is a complex framework, but it delivers exceptional results that make the investment worthwhile. To fully leverage its potential, it's crucial to follow correct architecture and coding standards, ensuring the project’s long-term success. Working with a knowledgeable partner will get you much further than starting with someone who has no Drupal experience. In this article, we’ll guide you through the process of finding a solid Drupal development company.

Droptica: Drupal CMS vs. Drupal Core – Key Differences and How to Choose a System


At the beginning of 2025, a new CMS hit the market that could revolutionize how you manage content online. We mean Drupal CMS, a platform designed primarily for marketers that offers intuitive tools for creating websites without coding. What sets this project apart? What capabilities does it offer, and how does it differ from Drupal Core? We encourage you to read the article or watch an episode of the “Nowoczesny Drupal” series.

Tag1 Consulting: Migrating Your Data from D7 to D10: Migrating nodes - Part 1

Take control of your Drupal 7 to 10 node migration with our latest technical guide. Learn to extend core plugins, manage entity relationships, and implement custom filtering solutions. We’ve included practical code examples and step-by-step instructions for handling basic pages and articles so you can migrate your next project with confidence.


ComputerMinds.co.uk: Views Data Export: Sprint 3 Summary

I've started working on maintaining Views Data Export again.

I've decided to document my work in two-week 'sprints', and so this article is about what I did in Sprint 3.

The sprint ended up being a lot longer than I'd planned for various reasons, mostly illness. I'm starting another sprint today, and so wanted to post an update and draw a line under 'Sprint 3'.

Sprint progress

At the start of the sprint in the Drupal.org issue queue there were:

  • 48 open bugs
  • 4 fixed issues
  • 63 other open issues

That's a total of 115 open issues.

By the end it looked like this:

  • 45 open bugs
  • 1 fixed issue
  • 63 other open issues

So that's a total of 109 open issues, only a 5% reduction from before.

Key goals

In this sprint I wanted to:

  • Go through the remaining bug reports

Bug reports

  • I've still not managed to get through the remaining bug reports, though some have been closed/fixed in the sprint.

Future roadmap/goals

I'm not committing myself to doing these exactly, or in any particular order, but this is my high-level list of hopes, dreams, and desires. I'll copy this list into the next sprint summary article and adjust it as required.

  • Update the documentation on Drupal.org
  • Not have any duplicate issues on Drupal.org

The Drop Times: Meet the Trainers Taking the Stage at Florida DrupalCamp 2025

Florida DrupalCamp 2025 features expert-led training sessions covering Drupal CMS, Agile workflows, Drupal Forge, Laravel, and beginner site-building. Michael Anello explores Drupal CMS and its evolving tools, April Sides breaks down Agile and Git workflows, Salim Lakhani introduces cloud-based Drupal Forge, Rod Martin guides absolute beginners through Drupal 11, and Lee Walker helps Drupal developers transition into Laravel. Trainers share insights on technical advancements, best practices, and community collaboration.

Dries Buytaert: Automating alt-text generation with AI

Billions of images on the web lack proper alt-text, making them inaccessible to millions of users who rely on screen readers.

My own website is no exception, so a few weeks ago, I set out to add missing alt-text to about 9,000 images on this website.

What seemed like a simple fix became a multi-step challenge. I needed to evaluate different AI models and decide between local and cloud processing.

To make the web better, a lot of websites need to add alt-text to their images. So I decided to document my progress here on my blog so others can learn from it – or offer suggestions. This third post dives into the technical details of how I built an automated pipeline to generate alt-text at scale.


High-level architecture overview

My automation process follows three steps for each image:

  1. Check if alt-text exists for a given image
  2. Generate new alt-text using AI when missing
  3. Update the database record for the image with the new alt-text

The rest of this post goes into more detail on each of these steps. If you're interested in the implementation, you can find most of the source code on GitHub.

Retrieving image metadata

To systematically process 9,000 images, I needed a structured way to identify which ones were missing alt-text.

Since my site runs on Drupal, I built two REST API endpoints to interact with the image metadata:

  • GET /album/{album-name}/{image-name}/get – Retrieves metadata for an image, including title, alt-text, and caption.
  • PATCH /album/{album-name}/{image-name}/patch – Updates specific fields, such as adding or modifying alt-text.

I've built similar APIs before, including one for my basement's temperature and humidity monitor. That post provides a more detailed breakdown of how I built those endpoints.

This API uses separate URL paths (/get and /patch) for different operations, rather than using a single resource URL. I'd prefer to follow RESTful principles, but this approach avoids caching problems, including content negotiation issues in CDNs.

Anyway, with the new endpoints in place, fetching metadata for an image is simple:

[code bash]
curl -H "Authorization: test-token" \
  "https://dri.es/album/isle-of-skye-2024/journey-to-skye/get"
[/code]

Every request requires an authorization token. And no, test-token isn't the real one. Without it, anyone could edit my images. While crowdsourced alt-text might be an interesting experiment, it's not one I'm looking to run today.

This request returns a JSON object with image metadata:

[code bash]
{
  "title": "Journey to Skye",
  "alt": "",
  "caption": "Each year, Klaas and I pick a new destination for our outdoor adventure. In 2024, we set off for the Isle of Skye in Scotland. This stop was near Glencoe, about halfway between Glasgow and Skye."
}
[/code]

Because the alt field is empty, the next step is to generate a description using AI.

Generating and refining alt-text with AI


In my first post on AI-generated alt-text, I wrote a Python script to compare 10 different local Large Language Models (LLMs). The script uses PyTorch, a widely used machine learning framework for AI research and deep learning. This implementation was a great learning experience. I really enjoyed building it.

The original script takes an image as input and generates alt-text using multiple LLMs:

[code bash]
./caption.py journey-to-skye.jpg
{
  "image": "journey-to-skye.jpg",
  "captions": {
    "vit-gpt2": "A man standing on top of a lush green field next to a body of water with a bird perched on top of it.",
    "git": "A man stands in a field next to a body of water with mountains in the background and a mountain in the background.",
    "blip": "This is an image of a person standing in the middle of a field next to a body of water with a mountain in the background.",
    "blip2-opt": "A man standing in the middle of a field with mountains in the background.",
    "blip2-flan": "A man is standing in the middle of a field with a river and mountains behind him on a cloudy day.",
    "minicpm-v": "A person standing alone amidst nature, with mountains and cloudy skies as backdrop.",
    "llava-13b": "A person standing alone in a misty, overgrown field with heather and trees, possibly during autumn or early spring due to the presence of red berries on the trees and the foggy atmosphere.",
    "llava-34b": "A person standing alone on a grassy hillside with a body of water and mountains in the background, under a cloudy sky.",
    "llama32-vision-11b": "A person standing in a field with mountains and water in the background, surrounded by overgrown grass and trees."
  }
}
[/code]

My original plan was to run everything locally for full control, no subscription costs, and optimal privacy. But after testing 10 local LLMs, I changed my mind.

I always knew cloud-based models would be better, but wanted to see if local models were good enough for alt-texts specifically. Turns out, they're not quite there. You can read the full comparison, but I gave the best local models a B, while cloud models earned an A.

While local processing aligned with my principles, it compromised the primary goal: creating the best possible descriptions for screen reader users. So I abandoned my local-only approach and decided to use cloud-based LLMs.

To automate alt-text generation for 9,000 images, I needed programmatic access to cloud models rather than relying on their browser-based interfaces — though browser-based AI can be tons of fun.

Instead of expanding my script with cloud LLM support, I switched to Simon Willison's llm tool (see https://llm.datasette.io/). llm is a command-line tool and Python library that supports both local and cloud-based models. It takes care of installation, dependencies, API key management, and uploading images. Basically, all the things I didn't want to spend time maintaining myself.
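
To give a sense of why that appealed to me: generating a caption for a single image with llm's Python API takes just a few lines. This is an illustrative sketch rather than my actual script; it assumes a recent llm release with attachment support and an API key already configured, and the model name and prompt are just examples.

[code python]
import llm

# Illustrative sketch: model name, prompt, and file name are examples.
model = llm.get_model("gpt-4o")
response = model.prompt(
    "Write concise, descriptive alt-text for this image.",
    attachments=[llm.Attachment(path="journey-to-skye.jpg")],
)
print(response.text())
[/code]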

Despite enjoying my PyTorch explorations with vision language models and multimodal encoders, I needed to focus on results. My weekly progress goal meant prioritizing working alt-text over building homegrown inference pipelines.

I also considered you, my readers. If this project inspires you to make your own website more accessible, you're better off with a script built on a well-maintained tool like llm rather than trying to adapt my custom implementation.

Scrapping my PyTorch implementation stung at first, but building on a more mature and active open-source project was far better for me and for you. So I rewrote my script, now in the v2 branch, with the original PyTorch version preserved in v1.

The new version of my script keeps the same simple interface but now supports cloud models like ChatGPT and Claude:

[code bash]
./caption.py journey-to-skye.jpg --model chatgpt-4o-latest claude-3-sonnet \
  --context "Location: Glencoe, Scotland"
{
  "image": "journey-to-skye.jpg",
  "captions": {
    "chatgpt-4o-latest": "A person in a red jacket stands near a small body of water, looking at distant mountains in Glencoe, Scotland.",
    "claude-3-sonnet": "A person stands by a small lake surrounded by grassy hills and mountains under a cloudy sky in the Scottish Highlands."
  }
}
[/code]

The --context parameter improves alt-text quality by adding details the LLM can't determine from the image alone. This might include GPS coordinates, album titles, or even a blog post about the trip.

In this example, I added "Location: Glencoe, Scotland". Notice how ChatGPT-4o mentions Glencoe directly while Claude-3 Sonnet references the Scottish Highlands. This contextual information makes descriptions more accurate and valuable for users. For maximum accuracy, use all available information!

Updating image metadata

With alt-text generated, the final step is updating each image. The PATCH endpoint accepts only the fields that need changing, preserving other metadata:

[code bash]
curl -X PATCH \
  -H "Authorization: test-token" \
  "https://dri.es/album/isle-of-skye-2024/journey-to-skye/patch" \
  -d '{"alt": "A person stands by a small lake surrounded by grassy hills and mountains under a cloudy sky in the Scottish Highlands."}'
[/code]

That's it. This completes the automation loop for one image. It checks if alt-text is needed, creates a description using a cloud-based LLM, and updates the image if necessary. Now, I just need to do this about 9,000 times.
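
Put together, the per-image pipeline can be sketched in a couple dozen lines of Python. To be clear, this is a simplified illustration rather than the script in my repository: the model, prompt, and the assumption that the local file is named after the image slug are examples, and error handling is omitted.

[code python]
import llm
import requests

BASE_URL = "https://dri.es/album"
HEADERS = {"Authorization": "test-token"}  # placeholder token, as above


def generate_alt_text(path):
    # Stand-in for the captioning step; the real script adds context
    # and can compare multiple models.
    model = llm.get_model("gpt-4o")
    response = model.prompt(
        "Write concise, descriptive alt-text for this image.",
        attachments=[llm.Attachment(path=path)],
    )
    return response.text()


def process_image(album, image):
    base = f"{BASE_URL}/{album}/{image}"

    # Step 1: skip images that already have alt-text.
    metadata = requests.get(f"{base}/get", headers=HEADERS).json()
    if metadata.get("alt"):
        return

    # Step 2: generate a new description with a cloud-based LLM.
    # Assumes the local file is named after the image slug.
    alt_text = generate_alt_text(f"{image}.jpg")

    # Step 3: write only the changed field back via PATCH.
    requests.patch(f"{base}/patch", headers=HEADERS, json={"alt": alt_text})


process_image("isle-of-skye-2024", "journey-to-skye")
[/code]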

Tracking AI-generated alt-text

Before running the script on all 9,000 images, I added a label to the database that marks each alt-text as either human-written or AI-generated. This makes it easy to:

  • Re-run AI-generated descriptions without overwriting human-written ones
  • Upgrade AI-generated alt-text as better models become available

This approach allows me to re-generate descriptions as models improve. For example, I could update the AI-generated alt-text when ChatGPT 5 is released. And eventually, it might allow me to return to my original principles: to use a high-quality local LLM trained on public domain data. In the meantime, it helps me make the web more accessible today while building toward a better long-term solution tomorrow.
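
In code, that label turns the step 1 check from "is the alt-text empty?" into "is it safe to overwrite?". A minimal sketch, assuming the metadata API exposes the label under a hypothetical alt_source field:

[code python]
def needs_alt_text(metadata):
    # No alt-text at all: always generate one.
    if not metadata.get("alt"):
        return True
    # "alt_source" is a hypothetical name for the human/AI label described
    # above; only AI-generated descriptions are safe to overwrite.
    return metadata.get("alt_source") == "ai"
[/code]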

Next steps

Now that the process is automated for a single image, the last step is to run the script on all 9,000. And honestly, it makes me nervous. The perfectionist in me wants to review every single AI-generated alt-text, but that is just not feasible. So, I have to trust AI. I'll probably write one more post to share the results and what I learned from this final step.

Stay tuned.