The Developer Productivity Engineer | Monthly Newsletter | April 2026

KotlinConf 2026 heads to Munich | AI writes code faster — but your pipeline still can't keep up | Harness engineering: the structured approach to AI-assisted development | Spotify's lessons from 30% more code per developer | GitHub Actions for Gradle v6 drops | DPE and AI jobs worldwide!

KotlinConf 2026 — Munich
Featured Upcoming Event

🌐 May 20–22: KotlinConf 2026 — Meet the Gradle Technologies team in Munich

The JetBrains flagship Kotlin conference is heading to Munich's ICM Messe, and the Gradle Technologies team will be there to discuss everything from Gradle Build Tool and Develocity to Developer Productivity Engineering and AI's evolving role in software delivery.

With Kotlin Multiplatform now production-ready across Android, backend, and beyond, the pressure on build infrastructure has never been higher. This is the year to talk about what it actually takes to maintain fast, reliable feedback loops when your codebase is growing across every platform simultaneously.

Whether you want to deep-dive on build tooling, swap war stories about CI pipeline optimization, or explore how Develocity's context engine gives coding agents the causal context they need to reason about builds and test results, come find us in Munich before sessions fill up.

Register for KotlinConf 2026
Harness Engineering — structured AI workflows
Expert Takes

Harness engineering: why your AI assistant writes better code when you design the environment

Published this week on Red Hat Developer, Marco Rizzi's "Harness Engineering" is a short, practical read that engineering leaders should share widely.

Rizzi's core finding: AI coding tools perform inconsistently not because the model is bad, but because the context it receives is unstructured. When he stopped pasting Jira tickets and started building structured harnesses — defined input/output contracts, explicit codebase context, constrained scope — the AI's output became reliably useful rather than unpredictably correct.
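Rizzi's article doesn't prescribe a single format, but the idea of a structured harness can be sketched as a typed contract the assistant must work within rather than a free-form prompt. The sketch below is illustrative only; the class and field names are ours, not Rizzi's:

```python
from dataclasses import dataclass, field

@dataclass
class TaskHarness:
    """A structured contract for an AI coding task (illustrative, not Rizzi's exact format)."""
    goal: str                     # one-sentence intent, not a pasted ticket
    inputs: dict[str, str]        # file path -> why it matters (explicit codebase context)
    output_contract: str          # what the change must produce (signature, behavior)
    out_of_scope: list[str] = field(default_factory=list)  # constrained scope

    def render(self) -> str:
        """Render the harness as a prompt with clearly delimited sections."""
        ctx = "\n".join(f"- {path}: {why}" for path, why in self.inputs.items())
        scope = "\n".join(f"- do NOT touch: {item}" for item in self.out_of_scope)
        return (
            f"## Goal\n{self.goal}\n\n"
            f"## Relevant context\n{ctx}\n\n"
            f"## Output contract\n{self.output_contract}\n\n"
            f"## Out of scope\n{scope}\n"
        )

harness = TaskHarness(
    goal="Add retry with backoff to the HTTP client wrapper.",
    inputs={"net/client.py": "contains HttpClient.get, the only call site to change"},
    output_contract="HttpClient.get(url, retries=3) retries on 5xx; existing tests pass.",
    out_of_scope=["net/auth.py", "public API signatures other than get()"],
)
print(harness.render())
```

The point is not the class itself but the discipline: every field forces the author to supply context the model would otherwise guess at.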

For DPE teams, the implication is significant. The build system is the harness. CI/CD pipelines are the harness. Structured, observable toolchains are the harness. Organizations that invest in toolchain discipline before scaling AI adoption will see compounding returns; those that skip this step will spend an increasing share of engineering time reviewing and reverting AI output that "looked right but wasn't."

Read the article
AI developer productivity measurement
Ideas & Insights

The honest numbers on AI and developer productivity — and what they actually mean

METR, the AI safety research organization, released an updated analysis of its developer productivity experiment in February — and the results are more nuanced than most headlines suggest.

Their earlier study found AI-assisted tasks took 19% longer on average. The follow-up, run with a more diverse cohort and task set, now shows a speedup of around 18%, but with wide confidence intervals and a significant caveat: the developers who gained the most from AI were increasingly opting out of the experiment because they didn't want to work without AI tools. The best data points are missing from the dataset.

This is exactly the measurement challenge that engineering leaders face. The teams that most need productivity insights are the ones where AI adoption is highest, and traditional experiment designs break down when the technology changes developers' expectations mid-study.

The takeaway for leaders: trust your internal DORA metrics and build analytics over vendor claims and academic studies. If you can observe your pipeline with real data — build times, test cycle times, failure rates, cache hit rates — you have a more honest picture of AI's impact on your team than any published study can provide.
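The signals listed above can be summarized from plain build records without any special tooling. A minimal sketch follows; the record shape is hypothetical, not a Develocity or DORA API:

```python
from dataclasses import dataclass
from statistics import median

@dataclass
class BuildRecord:
    duration_s: float     # wall-clock build time in seconds
    failed: bool
    cache_hits: int
    cache_requests: int

def pipeline_metrics(builds: list[BuildRecord]) -> dict[str, float]:
    """Summarize build time, failure rate, and cache hit rate from raw records."""
    requests = sum(b.cache_requests for b in builds)
    return {
        "median_build_s": median(b.duration_s for b in builds),
        "failure_rate": sum(b.failed for b in builds) / len(builds),
        "cache_hit_rate": sum(b.cache_hits for b in builds) / requests if requests else 0.0,
    }

# One week of (made-up) CI builds
week = [
    BuildRecord(412.0, False, 180, 200),
    BuildRecord(388.5, True, 150, 200),
    BuildRecord(420.0, False, 190, 200),
]
print(pipeline_metrics(week))
```

Tracked week over week, a drop in cache hit rate or a rise in failure rate as AI-generated commit volume grows is exactly the kind of honest, internal signal the article argues for.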

Read the research
AI code volume flooding the pipeline
Expert Takes

Spotify saw 30% more code per developer from AI. Then the pipeline broke.

LeadDev's annual engineering leadership survey published in January includes a striking account from Niklas Gustavsson, VP of Engineering at Spotify: with 90% of developers using AI every day, Spotify saw code changes per developer rise 30%. What followed was a spike in code review time, quality concerns, and pipeline strain that leadership didn't anticipate.

Gustavsson's team responded not by pulling back on AI — but by treating the problem as a systems design challenge, not a tooling choice. They deployed background coding agents to manage fleet maintenance, applied guardrails around test automation and verification, and invested in developer experience improvements alongside the AI tools themselves.

Their conclusion, echoed by engineering leaders across the survey: "If you want durable productivity gains from AI, invest as much in reliability, review, and developer experience as you do in the tools themselves." The pipeline is not an afterthought. It is the leverage point.

Read the full analysis
Develocity IntelliJ plugin — AI-powered troubleshooting
Best Practices

Fix AI-written code faster: Trisha Gee on the Develocity IntelliJ plugin + agentic AI

Java Champion Trisha Gee's February post on the Gradle Technologies blog has become required reading in DPE circles — and for good reason. She puts her finger on a paradox every senior engineer recognizes: AI accelerates code creation, but it also creates code that developers don't fully understand, making failures harder to diagnose.

Gee's practical answer is the Develocity IntelliJ IDEA plugin combined with agentic AI troubleshooting. Instead of context-switching to a browser or CI dashboard when a build fails, developers stay in the IDE and ask the AI agent direct questions: "Why did this fail in CI but not locally?" The agent queries Develocity build data to answer with real context rather than generic suggestions.

For engineering leaders, the key metric here is the ratio of time spent creating code to time spent debugging it. AI tools dramatically accelerate creation, but if your debugging infrastructure doesn't keep pace, you simply trade one bottleneck for another. Combining deep build observability with in-IDE troubleshooting keeps debugging from swallowing the time that faster creation saved.

Read Trisha's post
GitHub Actions for Gradle v6
Velocity | Product Update

GitHub Actions for Gradle v6: what's changing, and what it means for your CI

Gradle Technologies shipped v6 of GitHub Actions for Gradle builds this month — the collection of actions used by over 45,000 open source repositories, including GitHub's own official starter workflows.

The headline change: the enhanced caching component in setup-gradle is moving from a fully open-source MIT license to commercial Terms of Use. For public repositories, enhanced caching remains free. For organizations running large-scale builds on self-hosted runners, the new licensing opens the path to production-ready Configuration Cache support and more advanced caching capabilities arriving later in 2026.
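For teams evaluating the upgrade, the workflow step itself keeps its familiar shape. A typical job looks like the sketch below, assuming v6 preserves the current setup-gradle invocation; check the release notes for the exact cache-related inputs:

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-java@v4
        with:
          distribution: temurin
          java-version: 21
      # setup-gradle provisions Gradle and wires up build caching
      - uses: gradle/actions/setup-gradle@v6
      - run: ./gradlew build
```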

If you are running Gradle builds at scale — especially as AI-driven development floods your CI with more commits and test cycles — this is also a good moment to evaluate Develocity Universal Cache, which provides a globally distributed caching layer with full observability across every build, regardless of runner location.

Read the announcement
Career Opportunities

DPE (and AI) job openings

The industry needs you! You might find your dream role among these job openings related to DPE, AI developer productivity, and engineering leadership.

NOTE: These postings are active at the time of sending but are subject to change.

🏢 Gradle Technologies' customers are hiring

🌍 Other open positions around the world

Gradle Technologies

Gradle Technologies | 2261 Market Street | San Francisco, CA 94114

Privacy Policy | Unsubscribe