Microsoft is detailing how it uses machine learning models to triage bugs in its software and services. “47,000 developers generate 30,000 bugs a month,” explains Scott Christiansen, a senior security program manager at Microsoft. The software maker tracks these bugs across GitHub and Azure DevOps repositories, but that’s a lot of issues to track with traditional labeling and prioritization alone.
Microsoft is now using almost 20 years of historical data across 13 million work items and bugs to develop a machine learning model that can separate security from non-security bugs 99 percent of the time. The model is designed to help developers accurately identify and prioritize critical security issues that need fixing.
Security specialists and data scientists worked together at Microsoft to create the model, ensuring that it could be monitored in production and that a random sample of bugs is manually reviewed.
The model is also continually retrained with new data reviewed by Microsoft’s security experts. The result is that Microsoft now accurately identifies security bugs 99 percent of the time, and labels them correctly 97 percent of the time.
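What Microsoft describes is, at its core, supervised text classification over bug reports. Microsoft hasn’t published its implementation, but the idea can be sketched with a minimal naive Bayes classifier over bug titles; everything here, including the toy titles and labels, is invented for illustration:

```python
import math
from collections import Counter

# Toy training data: (bug title, label). These examples are invented;
# Microsoft's real model trains on millions of labeled work items.
EXAMPLES = [
    ("buffer overflow in image parser", "security"),
    ("sql injection in login form", "security"),
    ("xss vulnerability in comment field", "security"),
    ("typo in settings dialog", "non-security"),
    ("toolbar button misaligned", "non-security"),
    ("dark mode colors wrong in sidebar", "non-security"),
]

def tokenize(title):
    return title.lower().split()

def train(examples):
    """Count token frequencies per class, and how many titles each class has."""
    token_counts = {"security": Counter(), "non-security": Counter()}
    class_counts = Counter()
    for title, label in examples:
        token_counts[label].update(tokenize(title))
        class_counts[label] += 1
    return token_counts, class_counts

def classify(token_counts, class_counts, title):
    """Return the class with the highest posterior log-probability."""
    vocab = set(token_counts["security"]) | set(token_counts["non-security"])
    total_docs = sum(class_counts.values())
    best_label, best_score = None, float("-inf")
    for label in token_counts:
        score = math.log(class_counts[label] / total_docs)  # log prior
        # Add-one smoothing so unseen tokens don't zero out the probability.
        denom = sum(token_counts[label].values()) + len(vocab)
        for tok in tokenize(title):
            score += math.log((token_counts[label][tok] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

token_counts, class_counts = train(EXAMPLES)
```

A new bug title such as “possible sql injection in search” would then be scored against both classes and routed to security reviewers, which mirrors the triage-and-prioritize workflow the article describes, albeit at a toy scale.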
It’s unusual for a company the size of Microsoft to disclose how many bugs its developers generate each month, let alone how it tackles them.
Microsoft is now planning to open-source its methodology on GitHub, allowing other companies with similar datasets to implement a similar model.