VELOCITY GOVERNANCE RESEARCH

AI tools accelerated development.
Did governance keep up?

We're investigating whether AI-accelerated software development is degrading engineering governance practices — and building the instruments to empirically measure it.

Concordance Labs is a software engineering research company studying the gap between what velocity metrics capture and what actually determines whether a software organization is well-governed.

The Thesis

Velocity governance: the missing measurement layer

Traditional SDLC governance evolved around an implicit constraint: the pace of manual development created natural checkpoints. Writing code took days, which gave requirements time to be specified. Reviews were substantive because each pull request represented significant human effort. Architecture decisions were documented because the cost of rework was high.

AI coding assistants — GitHub Copilot, Cursor, Windsurf, Claude Code — have dramatically reduced that friction. PR volume increases. Commit frequency accelerates. Features ship faster. By multiple measures, development velocity has improved significantly.

But a growing body of evidence suggests that governance practices may not be scaling with this new velocity. If reviewers face higher PR volumes, does review depth decline? If code is faster to write than to specify, do requirements atrophy? If AI can regenerate code cheaply, does the incentive to document architectural decisions diminish? These are empirical questions — and they are largely unanswered.

Existing research measures code-level quality (churn rates, defect rates, test pass rates) and developer productivity (speed, throughput, satisfaction). What remains unmeasured is the effect on governance-level practices — the standards across the full SDLC that determine whether software is maintainable, auditable, and secure over time.

— The velocity governance gap

We call this missing measurement layer velocity governance: the empirical measurement of whether foundational engineering practices keep pace with development speed, or whether they silently degrade as AI tools accelerate the parts of development that governance was designed to oversee.

Prior Work & the Research Gap

What existing research shows — and what it doesn't

Multiple studies have examined AI tools' impact on developer productivity and code quality. The findings are mixed — and notably, they focus on code-level metrics rather than governance practices.

GitClear (2024, 2025)
Code churn projected to double under AI adoption

Analysis of 153M+ changed lines found that the share of code reverted or updated within two weeks of being written was projected to double in 2024 versus pre-AI baselines. Copy/paste code increased while refactoring decreased.

gitclear.com — "Coding on Copilot" (2024)
Google DORA (2024)
Defect rates correlate with AI adoption

The 2024 Accelerate State of DevOps Report found that increased AI tool adoption correlated with higher defect rates, and quantified the estimated change in delivery stability for each 25% increase in AI usage.

DORA — 2024 Accelerate State of DevOps Report
Uplevel Data Labs (2024)
Higher bug rates with consistent throughput

Developers with Copilot access saw significantly higher bug rates while issue throughput remained consistent — suggesting speed gains may come at a quality cost.

Uplevel — "The Hidden Costs of AI-Assisted Coding"
GitHub (2024)
Code quality improvements in controlled study

A randomized controlled trial with 202 developers found that Copilot users produced more functional code (53% more likely to pass unit tests). GitHub hypothesizes that speed gains freed time for quality refinement.

github.blog — "Does GitHub Copilot improve code quality?" (Nov 2024)
The gap this research addresses

These studies measure code-level outcomes: churn, defects, readability, test pass rates. None measure the governance-level practices that determine long-term software health: review depth and substantiveness, requirements quality and traceability, architectural documentation currency, or the relationship between development velocity and practice adherence across the full SDLC. That is the scope of this research.

Research Hypotheses

The core research question: Does AI-accelerated development systematically degrade SDLC governance practices, and can this degradation be empirically measured using toolchain data?

H1
Review depth erosion

Code review thoroughness (comment density, review time relative to change volume, substantive vs. rubber-stamp approvals) declines as AI-generated PR volume increases within a team.

Measure → review comment density, time-to-approval / lines changed ratio, approval-without-comment rate
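As an illustration, H1's proxies are straightforward to compute from pull-request metadata. The sketch below is a minimal illustration; the record shape and metric names are hypothetical, not Concordance's actual schema.

```python
from dataclasses import dataclass

@dataclass
class PullRequest:
    lines_changed: int
    review_comments: int          # substantive review comments on the PR
    hours_to_approval: float
    approved_without_comment: bool

def h1_metrics(prs: list[PullRequest]) -> dict[str, float]:
    """Aggregate the three H1 review-depth proxies over a set of PRs."""
    total_lines = sum(p.lines_changed for p in prs)
    return {
        # review comments per 100 changed lines
        "comment_density": 100 * sum(p.review_comments for p in prs) / total_lines,
        # mean approval latency normalized by change size (hours per 100 lines)
        "approval_hours_per_100_lines": sum(
            100 * p.hours_to_approval / p.lines_changed for p in prs
        ) / len(prs),
        # share of PRs approved with no comments at all (rubber stamps)
        "approval_without_comment_rate": sum(
            p.approved_without_comment for p in prs
        ) / len(prs),
    }

prs = [
    PullRequest(200, 4, 6.0, False),
    PullRequest(50, 0, 0.5, True),
]
print(h1_metrics(prs))
```

Tracking these ratios over time within a team, rather than as absolute values, is what lets H1 test for erosion as AI-generated PR volume grows.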
H2
Requirements atrophy

Issue and ticket quality (acceptance criteria presence, requirement specificity, estimation rigor) degrades in teams with high AI tool adoption, as the cost of implementation drops below the cost of specification.

Measure → acceptance criteria rate, requirement word count, estimation presence, issue-to-PR linkage
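The acceptance-criteria-rate proxy for H2 can be approximated with simple pattern matching over issue bodies. A sketch, assuming a marker list that is illustrative rather than an exhaustive taxonomy:

```python
import re

# Phrases and structures that suggest an issue carries real acceptance
# criteria. This marker list is illustrative, not exhaustive.
CRITERIA_MARKERS = re.compile(
    r"acceptance criteria|given\s.+when\s.+then\s|definition of done|- \[ \]",
    re.IGNORECASE | re.DOTALL,
)

def acceptance_criteria_rate(issue_bodies: list[str]) -> float:
    """Share of issues whose body contains an acceptance-criteria marker."""
    if not issue_bodies:
        return 0.0
    hits = sum(bool(CRITERIA_MARKERS.search(body)) for body in issue_bodies)
    return hits / len(issue_bodies)

issues = [
    "Acceptance criteria:\n- [ ] user can reset password",
    "Fix the login bug",
]
print(acceptance_criteria_rate(issues))  # 1 of 2 issues → 0.5
```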
H3
Architectural documentation decay

ADR creation frequency, design discussion depth, and technical documentation currency decline when the perceived cost of code rework drops — reducing the incentive to plan before building.

Measure → ADR frequency, design-tagged issue ratio, docs-to-code commit ratio
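H3's docs-to-code commit ratio can be derived from the file paths each commit touches. A sketch, where the path conventions (`docs/`, `adr/`, `.md`) are assumptions for illustration:

```python
def docs_to_code_ratio(commit_paths: list[list[str]]) -> float:
    """Ratio of commits touching documentation to commits touching code.
    Path conventions ('docs/', 'adr/', '.md') are illustrative."""
    def is_doc(path: str) -> bool:
        return path.startswith(("docs/", "adr/")) or path.endswith(".md")

    doc_commits = sum(any(is_doc(p) for p in paths) for paths in commit_paths)
    code_commits = sum(any(not is_doc(p) for p in paths) for paths in commit_paths)
    return doc_commits / code_commits if code_commits else 0.0

commits = [
    ["src/auth.py", "tests/test_auth.py"],
    ["docs/adr/0042-session-storage.md"],
    ["src/api.py"],
]
print(docs_to_code_ratio(commits))  # 1 doc commit / 2 code commits = 0.5
```

A declining ratio over time, holding commit volume constant, would be evidence for the decay H3 predicts.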
H4
Cosmetic test coverage

Test suites achieve higher line coverage but lower meaningful assertion density when tests are AI-generated, creating a false quality signal that satisfies CI gates without providing genuine safety.

Measure → coverage %, assertion density, test-to-code ratio, test churn rate
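H4's assertion-density proxy can be approximated by static analysis of test code. A sketch using Python's `ast` module — a simplification that counts bare `assert` statements and ignores framework-specific assertion helpers:

```python
import ast

def assertion_density(test_source: str) -> float:
    """Assertions per test function. A low ratio despite high line
    coverage is the 'cosmetic coverage' signal H4 predicts."""
    tree = ast.parse(test_source)
    test_funcs = [
        node for node in ast.walk(tree)
        if isinstance(node, ast.FunctionDef) and node.name.startswith("test_")
    ]
    if not test_funcs:
        return 0.0
    asserts = sum(
        isinstance(stmt, ast.Assert)
        for func in test_funcs
        for stmt in ast.walk(func)
    )
    return asserts / len(test_funcs)

sample = '''
def test_addition():
    assert 1 + 1 == 2
    assert 2 + 2 == 4

def test_smoke():
    run_everything()   # executes code (raises coverage) but asserts nothing
'''
print(assertion_density(sample))  # 2 asserts / 2 tests = 1.0
```

The `test_smoke` case illustrates the failure mode: it raises line coverage while contributing nothing to assertion density.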

The Instrument

A working assessment platform across 50 SDLC standards

We have built Concordance — an assessment platform that connects to engineering toolchains (GitHub, Linear, Jira) and scores teams against 50 practitioner-defined standards across 6 SDLC phases. Each standard is scored 1–5 with evidence drawn from actual toolchain data.

50 Standards · 6 SDLC Phases · 5 Maturity Levels · 3 Integrations

The current instrument uses metadata-level proxies for scoring — presence of CI workflows, branch protection configuration, PR approval patterns, issue linkage rates, commit convention adherence, and release cadence regularity. This provides a structural governance baseline.
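To make the proxy idea concrete, here is a hypothetical sketch of how one structural signal — issue-to-PR linkage rate — might map onto a 1–5 maturity score. The threshold bands are illustrative, not Concordance's actual rubric:

```python
def linkage_score(linked_prs: int, total_prs: int) -> int:
    """Map the share of PRs linked to a tracked issue onto a 1-5 scale.
    Threshold bands are illustrative, not the production rubric."""
    if total_prs == 0:
        return 1  # no evidence at all: lowest maturity
    rate = linked_prs / total_prs
    bands = [(0.9, 5), (0.75, 4), (0.5, 3), (0.25, 2)]
    for threshold, score in bands:
        if rate >= threshold:
            return score
    return 1

print(linkage_score(46, 50))  # 0.92 linkage rate → 5
print(linkage_score(20, 50))  # 0.40 → 2
```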

The innovation in the proposed research is extending this instrument with an NLP/AI analysis layer capable of assessing quality and depth — not just whether a review exists, but how substantive the review comments are; not just whether requirements are linked, but how specific and complete they are; not just whether tests exist, but how meaningful the assertions are. This transforms the instrument from a governance structure measurement tool into a governance quality measurement tool.
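As a naive baseline for what that analysis layer replaces, review-comment substantiveness can be approximated with surface heuristics. The proposed NLP models would go well beyond this, but the sketch shows the kind of signal involved; the phrase list and thresholds are illustrative assumptions:

```python
# Common rubber-stamp phrases; illustrative, not an exhaustive list.
LOW_SIGNAL = {"lgtm", "lgtm!", "+1", "approved", "ship it", "nice"}

def is_substantive(comment: str) -> bool:
    """Crude heuristic: a comment is substantive if it is not a
    rubber-stamp phrase and either asks a question, references code
    (backticks), or is long enough to carry reasoning."""
    text = comment.strip().lower()
    if text in LOW_SIGNAL:
        return False
    return "?" in text or "`" in text or len(text.split()) >= 8

comments = ["LGTM", "Why not reuse `SessionStore` here instead of a new cache?"]
print([is_substantive(c) for c in comments])  # [False, True]
```

The gap between what this heuristic catches and what a human reviewer would call substantive is precisely where the NLP layer adds value.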

Federal & Policy Relevance

Active federal initiatives this research supports

Multiple active federal initiatives address AI risk in software systems but lack empirical tools for measuring SDLC governance degradation. This research directly supports these policy objectives.

NIST SP 800-218 & 218A
Secure Software Development Framework

NIST's SSDF defines secure development practices. SP 800-218A (2024) extends these to generative AI. Concordance's 50 standards map to SSDF practices and can provide empirical evidence of adherence or degradation.

FY2026 NDAA §1512–1513
DoD AI/ML Cybersecurity Policy & Framework

The FY2026 National Defense Authorization Act directs DoD to establish AI/ML cybersecurity policy and a security framework for contractors. This research provides measurement methodology for the SDLC governance dimension.

NIST AI RMF 1.0
AI Risk Management Framework

The NIST AI RMF establishes GOVERN, MAP, MEASURE, and MANAGE functions for AI risk. Velocity governance research extends the MEASURE function to cover development practice degradation — a risk category not yet addressed.

CMMC & EO 14028
Defense Industrial Base Software Assurance

CMMC imposes cybersecurity maturity requirements on defense contractors. Executive Order 14028 mandates secure development practices. Both require evidence of SDLC governance — the type of evidence Concordance generates.

Research Program

Phase I · Active

Instrument, Measure, Analyze

Extend the Concordance instrument with NLP-based quality analysis for review comments, requirements text, and test assertions. Deploy to engineering teams with varying levels of AI tool adoption. Establish baseline governance scores and measure whether statistically significant differences exist between high-adoption and low-adoption cohorts.

Deliverables → NLP scoring models, baseline governance dataset, preliminary correlation analysis, feasibility assessment
Phase II · Planned

Scale & Validate

Expand measurement across diverse engineering organizations and technology stacks. Conduct longitudinal studies to track governance scores over time as AI tool adoption increases. Validate the 50-standard framework against real-world outcomes (defect rates, security incidents, audit findings) to establish predictive validity.

Deliverables → peer-reviewed publication, validated framework, expanded instrument, longitudinal dataset
Phase III · Future

Establish the Standard

Position the Concordance Framework as an industry reference for SDLC governance measurement — informing NIST guidance, referenced in federal software acquisition policy, and adopted by enterprises as a complement to DORA metrics and existing compliance frameworks.

Deliverables → published research, federal reference, enterprise and defense adoption

Commercial Platform

Our research is operationalized through Concordance, an SDLC intelligence platform that scores engineering organizations against 50 standards with real toolchain evidence. The platform is live and available for use — demonstrating technical feasibility and providing the foundation instrument for research activities.

Concordance — The SDLC Intelligence Platform
50 standards · 6 phases · GitHub, Linear, Jira integration · Free tier available
getconcordance.com →

Principal Investigator

This research is led by a practitioner who has spent over a decade governing SDLC practices at enterprise scale in financial services, investigating the problem from direct operational experience rather than theoretical models, and who built the Concordance platform as a solo technical founder.

The founder and principal investigator brings 15+ years of experience leading engineering governance across major financial services, insurance, and technology organizations — including roles directing agile transformation and SDLC standardization efforts for organizations with hundreds of engineering teams.

Agile Center of Excellence leadership · Fortune 500 financial services · Fortune 500 insurance · Consumer technology (hardware & services) · Global advertising & media
M.Sc. Computing · B.Sc. Applied Computing · 8 Agile certifications (SAFe, Scrum, Kanban, ICAgile) · U.S. citizen · Solo Technical Founder

Company

Legal entity
Concordance Labs LLC
Incorporated
Pennsylvania, 2026
Structure
U.S. small business concern
Focus
SDLC governance measurement & velocity governance research