Why The “Ghost Engineers” Paper Should Probably Not Be Getting Media Coverage (Yet)
The paper has not yet been peer-reviewed. In fact, it is not even available to read.
A researcher named Yegor published an X post making the following claims:
- Based on data compiled from over 50,000 engineers and hundreds of companies, 9.5% of software engineers do virtually nothing
- 14% of those who work remotely do virtually nothing, compared to 9% of those in hybrid roles and 6% of those who work in the office
- 58% make fewer than three commits a month, and 42% make only trivial changes, such as changing a single line of code, while pretending to work (a rough sketch of what such a commit-count metric could look like follows this list)
- If just 12 companies laid off all of them, their market cap would increase by $465 billion with absolutely no decrease in performance
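Since the paper itself is unavailable, we can only guess at how these numbers were computed. Below is a minimal, speculative sketch of a naive commits-per-author-per-month metric run against a local git repository; the threshold, the function names, and the whole approach are assumptions of mine, not anything taken from the actual research.

```python
# A minimal, speculative sketch of a commits-per-author-per-month
# metric; the real methodology behind the X post's numbers is unknown.
import subprocess
from collections import Counter

def commits_per_author_month(repo_path: str) -> Counter:
    """Count commits per (author email, YYYY-MM) in a local git repo."""
    log = subprocess.run(
        ["git", "-C", repo_path, "log",
         "--pretty=format:%ae|%ad", "--date=format:%Y-%m"],
        capture_output=True, text=True, check=True,
    ).stdout
    counts = Counter()
    for line in log.splitlines():
        author, month = line.split("|", 1)
        counts[(author, month)] += 1
    return counts

def low_activity_authors(counts: Counter, threshold: int = 3) -> set:
    """Flag authors who fall under the threshold in any month, the kind
    of crude cutoff implied by "fewer than three commits a month".
    Note: a month with zero commits never shows up in the log at all,
    one of several ways a raw count like this can mislead."""
    return {author for (author, month), n in counts.items() if n < threshold}
```

Even this toy version hints at the methodological problem: raw commit counts say nothing about squash-and-merge workflows, code review, pairing, design work, or repositories the tool cannot see.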
The X post has already received media coverage from Business Insider, Yahoo! Finance, and now (as of yesterday) The Washington Post. Every day it seems to get picked up by a new outlet, which leads with an eye-catching headline (for example, "Tech Companies Are Awash In Ghost Engineers, Research Shows"), followed by what is often a more balanced, nuanced article and a disclaimer that the research has not been peer-reviewed.
If you find what some people are citing as the actual paper, you may notice the following details:
- The paper’s title is “Predicting Expert Evaluations In Software Code Reviews”
- They only evaluated Java
- They used a panel of ten “Java experts” consisting of three managers, three senior engineers, two executives, one director, and one vice president
- Their paper does not seem to be about ghost engineers at all, but rather about how an AI model's code-review evaluations compare with those of a panel of human Java experts (a comparison that, in broad strokes, might look like the sketch below)
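Without access to the paper, I can only guess at what that comparison involves, but studies of this shape typically correlate the model's scores with the expert panel's ratings per reviewed sample. The data and numbers below are invented purely for illustration.

```python
# A speculative sketch of "model vs. expert panel" scoring: correlate
# the AI's review scores with the mean expert rating per code sample.
# All data here is made up; the paper's actual metric is unknown.
from statistics import correlation, mean  # correlation() needs Python 3.10+

# Hypothetical data: each inner list holds the ten experts' ratings
# for one Java code review; model_scores holds the AI's ratings.
expert_ratings = [
    [4, 5, 4, 3, 4, 5, 4, 4, 3, 4],
    [2, 1, 2, 2, 3, 2, 1, 2, 2, 2],
    [5, 5, 4, 5, 5, 4, 5, 5, 4, 5],
]
model_scores = [4.2, 1.8, 4.9]

panel_means = [mean(r) for r in expert_ratings]
print(f"Pearson r = {correlation(panel_means, model_scores):.3f}")
```

If the correlation is high, the model "agrees" with the panel; whether that tells us anything about ghost engineers is another question entirely.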
I was not the only one confused by this; posters on LinkedIn are asking the same questions. Where is the paper? What research was done? The aforementioned paper on predicting expert evaluations looks relevant, but for the time being an X post is the only source we have to cite.
Closing Thoughts
Most of the news outlets have already shared contrarian opinions about the flawed research methodology (a bias toward companies that volunteered their data; an expert panel that was not made up entirely of programmers). The implications, if the claims prove to be well-founded, are huge.
We should probably read the actual paper once it has been peer-reviewed, and only then see it covered by The Washington Post.