FAQ & Glossary

Common questions about GitHub Copilot metrics, followed by a glossary of key terms used throughout this toolkit.


Frequently Asked Questions

1. What's included in Copilot usage metrics?

IDE telemetry for completions, chat, agent mode, and PR lifecycle events. This requires telemetry to be enabled in the developer's IDE. Metrics capture how developers interact with Copilot features during their coding workflow.

2. What's NOT included?

The following are not included in the Usage Metrics API:

  • GitHub.com Chat (web-based Copilot Chat)
  • GitHub Mobile interactions
  • License and seat assignment data

For seat information

Use the Copilot User Management API for license and seat data.

3. How fresh is the data?

Data is available within 3 full UTC days. For example, Monday's data will be visible by Thursday end-of-day UTC.

  • Monday activity → available Thursday EOD UTC
  • Tuesday activity → available Friday EOD UTC
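The lag can be expressed as a small date helper; a minimal sketch in Python, where the 3-day window is the figure quoted above:

```python
from datetime import date, timedelta

# The API exposes activity after 3 full UTC days (figure quoted above).
LAG_DAYS = 3

def earliest_available(activity_date: date) -> date:
    """Earliest UTC date by whose end of day the activity should be visible."""
    return activity_date + timedelta(days=LAG_DAYS)

# Monday 2025-01-13 -> Thursday 2025-01-16
print(earliest_available(date(2025, 1, 13)))
```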

4. Can I get user-level data?

Yes. Use the Users 28-day and Users 1-day API endpoints at the enterprise level. These provide per-user breakdowns of Copilot usage across all features.
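A hedged sketch of calling one of these endpoints with a PAT, using only the Python standard library. The endpoint path and token below are placeholders, not the documented route — take the exact path from the official API reference:

```python
import urllib.request

API_ROOT = "https://api.github.com"

def build_enterprise_request(endpoint_path: str, token: str) -> urllib.request.Request:
    """Build an authenticated request for an enterprise-level metrics endpoint.
    `endpoint_path` must be the path the Usage Metrics API documents for the
    Users 28-day or Users 1-day endpoint — the value used below is a placeholder."""
    req = urllib.request.Request(f"{API_ROOT}{endpoint_path}")
    req.add_header("Authorization", f"Bearer {token}")
    req.add_header("Accept", "application/vnd.github+json")
    return req

# Placeholder path and token — substitute the documented route and a real PAT.
req = build_enterprise_request("/enterprises/acme/copilot/<users-28d-path>", "ghp_example")
```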

5. What about organization-level data?

Organization-level data is available from December 12, 2025 onward. It is based on org membership, not seat assignment — meaning a user's activity is attributed to the org they belong to.

6. Does agent mode show in metrics?

Yes — agent mode activity appears in:

  • Current Usage Metrics API
  • Code Generation Dashboard
  • ~~Legacy Copilot Metrics API~~ (not included)

7. What's happening to the legacy APIs?

Legacy API                          Sunset Date
User-Level Feature Engagement API   March 2, 2026
Copilot Metrics API                 April 2, 2026

Action required

Migrate to the Copilot Usage Metrics API before these dates. See the Dashboards & Data Sources page for migration guidance.

8. What PAT scopes do I need?

  • manage_billing:copilot
  • read:enterprise
  • Enterprise Copilot metrics (read)

9. How is acceptance rate calculated?

Accepted suggestions ÷ total suggestions shown.

Important context

Developers use Copilot in many ways — research, verification, confirmation, and learning — so acceptance rate should not be used as a sole productivity metric. A low acceptance rate does not necessarily mean low value.
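The formula is simple enough to encode directly; a sketch, where the zero-shown guard is a defensive assumption and the counts are illustrative:

```python
def acceptance_rate(accepted: int, shown: int) -> float:
    """Accepted suggestions ÷ total suggestions shown (0.0 when nothing shown)."""
    return accepted / shown if shown else 0.0

rate = acceptance_rate(42, 150)  # illustrative counts
print(f"{rate:.1%}")  # 28.0%
```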

10. Can I compare metrics across organizations?

Organization-level metrics are not deduplicated across orgs. A user belonging to multiple orgs will appear in each org's metrics independently.

Tip

Enterprise-level endpoints deduplicate users, making them the correct choice for cross-org comparisons.
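The double-counting effect is easy to see with sets; the org names and users below are hypothetical:

```python
# Illustrative per-org active-user sets; names are hypothetical.
org_users = {
    "org-a": {"alice", "bob"},
    "org-b": {"bob", "carol"},
}

naive_sum = sum(len(users) for users in org_users.values())  # 4 — bob counted twice
deduplicated = len(set().union(*org_users.values()))         # 3 — enterprise-style count
```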

11. What IDEs are supported?

IDE             Minimum Version
VS Code         1.101+
JetBrains       2024.2.6+
Visual Studio   17.14.13+
Eclipse         4.31+
Xcode           13.2.1+

12. How do I measure ROI?

See the Impact & ROI page. The key approach:

  1. Baseline — capture pre-Copilot metrics
  2. Measure deltas — compare post-adoption changes
  3. Translate to business value — quantify time savings, quality improvements
  4. Compare vs license cost — calculate net ROI
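The four steps above reduce to simple arithmetic once you have estimates; a sketch in which every input (hours saved, hourly rate, seat price) is an assumption you supply, not a figure from this toolkit:

```python
def net_roi(hours_saved_per_dev: float, devs: int, hourly_rate: float,
            seat_cost_per_dev: float) -> float:
    """(value of time saved - license spend) / license spend, same period."""
    value = hours_saved_per_dev * devs * hourly_rate
    cost = seat_cost_per_dev * devs
    return (value - cost) / cost

# Hypothetical monthly inputs: 4 h saved/dev, 100 devs, $75/h, $19/seat.
roi = net_roi(4, 100, 75, 19)
print(round(roi, 2))  # 14.79
```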

Use Apache DevLake if you want a prebuilt open-source path for DORA-style correlation, or combine the same data sources in your existing analytics stack.

13. What is NDJSON?

Newline-Delimited JSON — one JSON object per line. It is a standard format for streaming data and is used by the Usage Metrics API download responses.

{"user":"alice","date":"2025-01-15","completions":42}
{"user":"bob","date":"2025-01-15","completions":37}
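Parsing it needs nothing beyond one `json.loads` per line; a sketch using the sample records above:

```python
import json

ndjson = (
    '{"user":"alice","date":"2025-01-15","completions":42}\n'
    '{"user":"bob","date":"2025-01-15","completions":37}\n'
)

# One json.loads per non-empty line.
records = [json.loads(line) for line in ndjson.splitlines() if line.strip()]
total_completions = sum(r["completions"] for r in records)
print(total_completions)  # 79
```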

14. How do I handle Premium Request costs?

Use copilot-metrics-tools to analyze premium request consumption by user and model. Monitor the ratio of included vs. billed requests to stay within budget.
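A sketch of the included-vs-billed ratio check; the field names and counts below are illustrative, not the actual copilot-metrics-tools output schema:

```python
# Hypothetical per-user counts — field names are illustrative, not the
# actual copilot-metrics-tools output schema.
usage = [
    {"user": "alice", "included": 280, "billed": 20},
    {"user": "bob",   "included": 300, "billed": 0},
]

billed = sum(u["billed"] for u in usage)
total = sum(u["included"] + u["billed"] for u in usage)
print(f"billed share: {billed / total:.1%}")  # billed share: 3.3%
```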

15. Can I correlate Copilot usage with delivery outcomes?

Yes. One option is Apache DevLake, which can ingest Copilot, GitHub, and delivery data into a common model for dashboards and adoption-tier analysis. You can also build the same views in your existing analytics stack.

Common metrics to compare by adoption tier include:

  • PR cycle time
  • Deployment frequency
  • DORA metrics (lead time, change failure rate, MTTR)
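Grouping a delivery metric by adoption tier is a plain group-by; a sketch with hypothetical PR cycle times:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical PRs: (Copilot adoption tier, cycle time in hours).
prs = [
    ("heavy", 18.0), ("heavy", 22.0),
    ("light", 30.0), ("light", 26.0),
    ("none", 40.0),
]

by_tier = defaultdict(list)
for tier, hours in prs:
    by_tier[tier].append(hours)

avg_cycle_time = {tier: mean(hours) for tier, hours in by_tier.items()}
# {'heavy': 20.0, 'light': 28.0, 'none': 40.0}
```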

Glossary

Alphabetical reference of key terms used throughout this toolkit.

Acceptance Rate
Ratio of accepted code suggestions to total suggestions shown. Should not be used as the sole productivity metric — developers derive value from Copilot beyond accepted completions.
CFR (Change Failure Rate)
Percentage of deployments causing failures in production. One of the four DORA metrics.
DAU (Daily Active Users)
Unique users interacting with Copilot on a given day.
DevLake
Apache DevLake — an open-source dev data platform for normalizing and correlating DevOps metrics from multiple sources (GitHub, Jira, Jenkins, etc.).
DORA
DevOps Research and Assessment — a framework measuring four key metrics: deployment frequency, lead time for changes, change failure rate, and MTTR.
Grafana
Open-source analytics and visualization platform used with DevLake to build dashboards for DORA metrics and Copilot adoption correlation.
LoC (Lines of Code)
In this context, lines suggested, added, or deleted by AI. Used in code generation dashboards to quantify Copilot's contribution.
MAU (Monthly Active Users)
Unique users interacting with Copilot in a 28-day window.
MTTR (Mean Time to Recovery)
Average time from incident detection to resolution. One of the four DORA metrics.
NDJSON (Newline-Delimited JSON)
One JSON object per line, used for streaming data. The Usage Metrics API returns data in this format.
PR Cycle Time
Duration from pull request creation to merge. A key delivery velocity indicator that can be correlated with Copilot adoption.
Premium Requests
Copilot interactions that consume premium model quota beyond the included allocation. Tracked per user and per model.
SPACE
Satisfaction and well-being, Performance, Activity, Communication and collaboration, Efficiency and flow — a developer productivity framework from GitHub and Microsoft Research. Provides a holistic lens for measuring developer experience beyond raw output.
WAU (Weekly Active Users)
Unique users interacting with Copilot in a 7-day window.