AI in Software Engineering: Lessons from the Dot-Com Era

TL;DR: AI in software engineering is not an existential threat. It is simply the latest disruption. Adaptability, engineering discipline, and an unwavering focus on outcomes are still what matter most. The real lesson from the dot-com years, and from every wave since, is straightforward: build for change, measure what matters, and never lose sight of business value.

The wave is familiar: AI in software engineering is not new

Ignore the hype for a moment. Every decade or so, a new technology arrives and upends the status quo. Headlines declare the death of the old guard. What persists, in reality, is the need for teams who adapt. I have seen this before. In the late nineties, we shifted from monoliths to the web almost overnight; later came agile, then the move from data centres to the cloud. The job titles changed: web developer, cloud engineer, platform lead. But the people who endure are those who read the room, learn the tools, and keep their focus on what delivers real value. Enterprise adoption cycles for foundational technologies remain measured and deliberate. In my experience and across industry discussions, it is not unusual for large organisations to require 18 to 36 months to shift even core workflows to an AI-augmented model. For leaders seeking practical examples of enterprise-scale transformation, McKinsey offers an up-to-date overview of how AI is reshaping software development at scale.

Patterns repeat, but AI in software engineering is about impact

The dot-com boom created new jobs overnight. Some roles faded; others emerged stronger. What has not changed: engineering roles that focus on real outcomes persist. Today, AI in software engineering brings its own skills race, but the winning approach is unchanged: adapt, deliver measurable results, and do not mistake novelty for value.

Commercial realities: timelines and challenges for enterprise AI adoption

Despite the hype cycles, enterprise adoption is neither instant nor frictionless. In practice, most mature organisations move at the speed of risk management, integration effort, and clear ROI. Current industry patterns suggest that meaningful AI augmentation of software delivery plays out over 18-to-36-month cycles, not weeks or quarters. Early pilots often struggle to show measurable ROI, especially where teams lack clear measurement or change-management discipline. Successful adoption demands executive sponsorship, robust frameworks, and patience with the inevitable setbacks.

AI in software engineering: leverage, not replacement

AI in software engineering is not about erasing the need for engineers. It changes where we spend our attention. Routine work moves faster; new questions surface. But deciding what to build, why, and how to do it well remains stubbornly human. The teams that succeed are those that use AI to amplify discipline, not shortcut it.

Risks and fundamentals: why the basics matter more than ever

Every wave brings illusions. The dot-com years made speed a virtue, sometimes at the expense of quality. Today, it is tempting to believe AI means we can automate away discipline. In practice, AI speeds up both solutions and mistakes. The difference comes down to fundamentals: modularity, observability, and robust review practices. These do not go out of fashion.

Regulation and responsible scale

Regulation will always lag innovation. But with AI, the stakes are higher. Traceability, auditability, and transparency are non-negotiable in regulated industries. Build for scrutiny, using established compliance frameworks, with practical implementation steps available in this AI Compliance and Implementation Guide. Assume that governance and privacy requirements will only get tougher.
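
As a concrete illustration, here is a minimal sketch of what a traceability record for an AI-assisted change might look like. The field names and structure are illustrative assumptions for this example, not a standard schema or any particular framework's API:

```typescript
// Minimal sketch of an audit record for an AI-assisted code change.
// Field names are illustrative assumptions, not a standard schema.
interface AiChangeAuditRecord {
  changeId: string;           // e.g. a commit SHA or pull request identifier
  timestamp: string;          // ISO 8601: when the change was produced
  model: string;              // which model or tool assisted the change
  promptSummary: string;      // what was asked of the tool (redact if sensitive)
  humanReviewer: string;      // who reviewed and approved the output
  reviewOutcome: "approved" | "amended" | "rejected";
  linkedRequirement?: string; // traceability back to a ticket or requirement
}

// In production this would append to an immutable store;
// console output stands in for that here.
function recordAiChange(record: AiChangeAuditRecord): void {
  console.log(JSON.stringify(record));
}

recordAiChange({
  changeId: "abc1234",
  timestamp: new Date().toISOString(),
  model: "example-code-model-v1",
  promptSummary: "Generate input validation for the payment form",
  humanReviewer: "j.doe",
  reviewOutcome: "amended",
  linkedRequirement: "PAY-142",
});
```

The point is not the schema itself but the habit: every AI-assisted change should answer who asked for what, which tool produced it, and which human signed it off.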

Hype versus practice: what actually works

Three rules for this wave:

  1. Adopt with intention. Use AI to remove toil and automate the repeatable, but never abdicate understanding.
  2. Double down on fundamentals. AI will reveal the strengths and weaknesses of your codebase and processes. Make them explicit, modular, and observable. For practical frameworks, see the DORA Value Stream Mapping Guide.
  3. Measure what matters. DORA, SPACE, and business KPIs tell you if you are building the right things. If you do not measure, you are not engineering. For forward-looking trends in measurement, see Future of Software Development: DORA, SPACE & AI Agents.

This is not a start-up story. The biggest shifts are happening inside established companies with deep pockets, mature delivery organisations, and global reach. Success will not go to the fastest adopters, but to those who align platforms, governance, and skills, often over multi-year cycles. The best engineers now pair hands-on experience with systems thinking and prompt literacy. Prompt literacy means more than asking ‘write code’ or ‘generate tests’. For instance, instead of a vague request like ‘write unit tests’, an effective engineer will specify: ‘write unit tests for this payment processing function using Jest, covering edge cases for invalid card numbers and network timeouts, following our established testing patterns.’ For more, see these Best Practices for Prompt Engineering. This practical skill is increasingly a differentiator in high-performing teams. The strongest organisations run regular prompt-literacy workshops and track improvements in both output and defect rates.
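
To show how a team might turn that habit into shared infrastructure, here is a minimal sketch of a structured prompt builder. The interface and field names are conventions invented for this example, not any particular tool's API:

```typescript
// Sketch of a structured prompt builder for test generation.
// The interface and field names are team conventions invented
// for this example, not a library API.
interface TestPromptSpec {
  targetFunction: string; // the function under test
  framework: string;      // e.g. "Jest"
  edgeCases: string[];    // explicit scenarios the tests must cover
  styleReference: string; // pointer to the team's established patterns
}

function buildTestPrompt(spec: TestPromptSpec): string {
  return [
    `Write unit tests for ${spec.targetFunction} using ${spec.framework}.`,
    `Cover these edge cases: ${spec.edgeCases.join("; ")}.`,
    `Follow the testing patterns described in ${spec.styleReference}.`,
  ].join("\n");
}

// Usage mirrors the payment example above.
console.log(
  buildTestPrompt({
    targetFunction: "processPayment",
    framework: "Jest",
    edgeCases: ["invalid card numbers", "network timeouts"],
    styleReference: "docs/testing-guidelines.md",
  })
);
```

Encoding the team's conventions this way makes prompt quality visible and reviewable, just like code.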

It is not only engineers who must adapt. Product managers, designers, and compliance leads need to understand AI’s constraints and risks. Integration, governance, and trust are now everyone’s business.

Measurement in practice: what DORA and SPACE mean for AI

The test for AI is not output, but value. DORA metrics such as lead time, deployment frequency, change failure rate, and MTTR must be interpreted in context. When AI generates code or tests, “lead time” must include the time to review, correct, and productionise those outputs, not just the initial generation. “Change failure rate” must account for issues unique to AI-augmented changes, such as misunderstood requirements or model hallucinations. For SPACE, measuring developer satisfaction means capturing how well teams integrate AI tools into their workflows, not just raw productivity. Organisations should supplement standard metrics with qualitative feedback on AI friction, upskilling needs, and code quality.

In short, track three layers of measurement:

  • DORA: Lead time, deployment frequency, change failure rate, MTTR
  • SPACE: Satisfaction, performance, activity, communication, efficiency
  • Business KPIs: Customer adoption, churn, cost to serve

Do not just count deployments. Track outcomes and adjust.
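
As a minimal sketch of what AI-aware lead time and change failure rate could look like in code (the event fields and sample data are assumptions for illustration, not a reporting standard):

```typescript
// Sketch: AI-aware DORA-style metrics from hypothetical change records.
// Field names and sample data are assumptions for illustration.
interface ChangeRecord {
  generatedAt: Date;       // when the AI produced the first draft
  mergedAt: Date;          // after human review and correction
  deployedAt: Date;        // when the change reached production
  causedIncident: boolean; // includes failures from hallucinated logic
}

const MS_PER_HOUR = 36e5;

// Lead time runs from first AI generation to production, so review
// and correction time is counted, not just the initial generation.
function leadTimeHours(c: ChangeRecord): number {
  return (c.deployedAt.getTime() - c.generatedAt.getTime()) / MS_PER_HOUR;
}

// The share of lead time spent reviewing and correcting the AI draft.
function reviewShare(c: ChangeRecord): number {
  return (c.mergedAt.getTime() - c.generatedAt.getTime()) /
         (c.deployedAt.getTime() - c.generatedAt.getTime());
}

function changeFailureRate(changes: ChangeRecord[]): number {
  if (changes.length === 0) return 0;
  return changes.filter((c) => c.causedIncident).length / changes.length;
}

// Example: drafted Monday morning, merged Tuesday, deployed Wednesday.
const sample: ChangeRecord[] = [
  {
    generatedAt: new Date("2024-06-03T09:00:00Z"),
    mergedAt: new Date("2024-06-04T15:00:00Z"),
    deployedAt: new Date("2024-06-05T11:00:00Z"),
    causedIncident: false,
  },
];
console.log(leadTimeHours(sample[0]).toFixed(1), // 50.0 hours
            reviewShare(sample[0]).toFixed(2),   // 0.60
            changeFailureRate(sample));          // 0
```

A review share that dominates lead time is not a failure; it is a signal about where the real bottleneck, and the real human work, now sits.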

Value stream mapping in practice

Generic guidance to “map your value stream” is not enough. For example, identify repetitive code review or QA tasks that consume 15–20% of senior developer time. Pilot an AI-assisted code review tool for these scenarios, and measure both the reduction in manual review effort and any changes in defect detection rates. The point is to identify where AI can genuinely free up expertise for higher-value work, and to track the impact with both hard data and direct feedback.
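
A before-and-after comparison for such a pilot can be tracked with something as simple as the sketch below; the metric names and sample numbers are illustrative assumptions, not a standard:

```typescript
// Sketch: before/after comparison for an AI-assisted code review pilot.
// Metric names and the sample numbers are illustrative only.
interface ReviewMetrics {
  seniorReviewHoursPerWeek: number;   // manual review effort
  defectsCaughtPer100Changes: number; // detection rate, not just speed
}

function comparePilot(before: ReviewMetrics, after: ReviewMetrics) {
  const effortSaved =
    1 - after.seniorReviewHoursPerWeek / before.seniorReviewHoursPerWeek;
  const detectionDelta =
    after.defectsCaughtPer100Changes - before.defectsCaughtPer100Changes;
  return {
    effortSavedPct: Math.round(effortSaved * 100),
    detectionDelta, // negative means the tool misses defects humans caught
  };
}

// Example: review effort falls from 12 to 8 hours/week; detection holds.
console.log(
  comparePilot(
    { seniorReviewHoursPerWeek: 12, defectsCaughtPer100Changes: 9 },
    { seniorReviewHoursPerWeek: 8, defectsCaughtPer100Changes: 9 }
  )
);
// => { effortSavedPct: 33, detectionDelta: 0 }
```

Tracking detection alongside effort keeps the pilot honest: saved hours mean little if the tool quietly lets more defects through.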

Building for what lasts

Adaptability and business value do not go out of date. Yet the shape of disruption is always shifting. For example:

| Need | Dot-Com Disruptions | AI Disruptions |
| --- | --- | --- |
| Adaptability | Delivery channels, LAMP stack, open source, web rise | Platformisation, automation, AI-augmented workflows, compliance demands |
| Focus on business value | E-commerce, digital marketing, first-gen SaaS, global reach | AI literacy, prompt engineering, new measures of value, data as product |
| Engineering discipline | Surviving rapid growth and crashes, DevOps, cloud | New risks from automation, continuous learning, tighter governance and measurement |

After the dot-com crash, the survivors rebuilt with better systems, stronger habits, and a clearer sense of purpose. AI offers a similar invitation, but only for teams willing to measure, adapt, and invest in their own foundations.

Let’s not chase acceleration for its own sake. Build for resilience, transparency, and impact, because the hardest problems in software are not technical; they are human, and they always have been.

Immediate next action: pick one high-friction workflow (such as code review or deployment approvals) and run a focused pilot of AI augmentation, with clear before-and-after measurement of effort and outcomes. Use this as your quarterly learning and improvement cycle. For those who want to go deeper, the guides linked throughout this article are a good starting point.
