A recent survey conducted by GitLab provides valuable insights into developers’ perspectives on the use of Artificial Intelligence (AI) in software development. Titled ‘The State of AI in Software Development,’ the report compiled responses from more than 1,000 senior technology executives, developers, and security and operations professionals worldwide.
Findings from the survey reveal a nuanced relationship between eagerness to adopt AI and concerns about data privacy, security, and intellectual property. Many enterprises are actively searching for AI platforms that balance harnessing the technology’s power with addressing those risks.
According to Alexander Johnston, a Research Analyst in the Data, AI & Analytics Channel at 451 Research (part of S&P Global Market Intelligence), the overwhelming majority of respondents, 83 percent, regard implementing AI as essential to staying competitive.
However, those same respondents also harbored reservations: 79 percent expressed concern about AI tools gaining access to sensitive information and intellectual property. Improved developer productivity emerged as the key benefit expected from AI adoption, cited by 51 percent of all respondents.
Nonetheless, security professionals remain cautious about the possibility of AI-generated code leading to an increase in security vulnerabilities, potentially exacerbating their workload.
The survey also examined how developers currently spend their time: only seven percent goes to identifying and mitigating security vulnerabilities, and just 11 percent to testing code. These small shares raise concerns that the gap between developers and security professionals will widen in the AI era.
The survey underscored the importance of data privacy and intellectual property protection when selecting AI tools, with 95 percent of senior technology executives saying they prioritize these aspects in their decisions. Moreover, 32 percent of respondents admitted to being ‘very’ or ‘extremely’ concerned about introducing AI into the software development lifecycle. Their worries centered on the possibility that AI-generated code could introduce security vulnerabilities and might not carry the same copyright protection as human-written code.
Despite the optimism surrounding AI’s potential, the report exposed a gap between the training resources organizations provide and practitioners’ satisfaction with them. While 75 percent of respondents said their organizations offer training and resources for using AI, a roughly equal share reported having to seek out additional resources on their own, suggesting that the training on offer may be inadequate.
Astonishingly, 81 percent of respondents said they need further training to use AI effectively in their daily work. Furthermore, 65 percent of those planning to use AI for software development said their organizations intend to hire new talent to manage its implementation.
GitLab’s Chief Product Officer, David DeSanto, stressed that although only 25 percent of developers’ time is spent on code generation, the data reveals that AI can enhance productivity and collaboration in nearly 60 percent of their day-to-day tasks.
To fully unlock AI’s potential, it must be seamlessly integrated across the entire software development lifecycle, enabling everyone involved—beyond just developers—to benefit from its efficiency boost.
While AI undoubtedly holds tremendous promise for the software development industry, the GitLab report underscores the need to address cybersecurity and privacy concerns, bridge the skills gap, and foster collaboration between developers and security professionals to ensure successful AI adoption.