To resolve, at the end of the year I will take a sample of projects with 1000+ stars that allow AI contributions. I will divide the number of human-approved commits that had an AI author (where the AI generated the original changeset; reviews don't count) by the total number of commits. The year will resolve to this percentage.
NOTE: The AI may be any of a number of asynchronous AI agents (Copilot, Google Jules, etc.). I'll count all clearly tool-generated code if it's marked as such in git/GitHub; user-submitted code will not count.
If I can't accurately calculate the percentage, or can't find enough projects that meet these criteria, I'll resolve N/A. Feedback is welcome on this resolution process; it may be adjusted to resolve ambiguity.
I won't bet on this market.
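For concreteness, a rough sketch of how the commit share could be tallied from a repository's history is below. It's only illustrative: the marker strings and the repo path are placeholder assumptions, the real process would depend on how each project and tool actually labels agent-generated commits, and the "human approved" filter would still have to be applied on the GitHub side.

```python
import subprocess

# Illustrative sketch only. These marker strings are ASSUMPTIONS about how
# agent-generated commits might be labelled in author names or commit
# messages; the actual resolution would use whatever labels the sampled
# projects and tools really apply.
AI_MARKERS = [
    "copilot-swe-agent",   # assumed label for the Copilot coding agent
    "google-labs-jules",   # assumed label for Google Jules
]

def iter_commits(repo_path, since, until):
    """Yield (sha, author, full message) for commits in the date range."""
    out = subprocess.run(
        ["git", "-C", repo_path, "log",
         f"--since={since}", f"--until={until}",
         "--pretty=format:%H%x1f%an%x1f%B%x1e"],
        capture_output=True, text=True, check=True,
    ).stdout
    for record in out.split("\x1e"):
        record = record.strip()
        if record:
            sha, author, message = record.split("\x1f", 2)
            yield sha, author, message

def ai_commit_share(repo_path, since="2025-01-01", until="2026-01-01"):
    """Fraction of commits whose author or message carries an AI-agent marker."""
    total = ai_authored = 0
    for _sha, author, message in iter_commits(repo_path, since, until):
        total += 1
        haystack = f"{author}\n{message}".lower()
        if any(marker in haystack for marker in AI_MARKERS):
            ai_authored += 1
    return ai_authored / total if total else 0.0

if __name__ == "__main__":
    share = ai_commit_share("path/to/sampled/repo")  # hypothetical path
    print(f"{share:.1%} of commits marked as AI-authored")
```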
Normally I use Copilot to autocomplete as I code, and I don't bother to note that anywhere. I think this is normal, so the question as written looks like it's really about how often people mention using it, which is probably a much smaller percentage than its actual use.
@LiamZ I'm intending to only count asynchronous AI agents here (like the Copilot agent), not any AI-assisted code. Updated accordingly.