GENERATIVE ARTIFICIAL INTELLIGENCE IN THE TRANSFORMATION OF SOFTWARE DEVELOPMENT PROCESSES: OPPORTUNITIES, CHALLENGES, AND IMPACTS ON PRODUCTIVITY

Authors

  • José Henrique Salles Pinheiro

DOI:

https://doi.org/10.63330/armv1n5-016

Keywords:

Generative Artificial Intelligence, Software Development, Productivity, Code Automation, Digital Transformation

Abstract

Generative artificial intelligence has emerged as a disruptive technology in software development, reshaping traditional programming practices and opening new possibilities for increasing developer productivity. This article investigates the impact of generative AI tools such as GitHub Copilot, ChatGPT, and CodeT5 on software development processes, analyzing their contributions to automating code generation, documentation, and testing. Through a systematic literature review and analysis of practical cases, the study examines opportunities for optimizing workflows, the technical and ethical challenges associated with adopting these technologies, and their effects on the quality of the software produced. The results indicate that, although generative AI shows significant potential to increase developer productivity by up to 55% on specific tasks, its implementation raises challenges related to technological dependence, intellectual property, and the need to maintain fundamental technical skills. The study concludes that effectively integrating generative AI into software development requires a balanced approach that maximizes the technology's benefits while preserving practitioners' essential skills and ensuring the quality and security of the code produced.

References

Ahmad, W., Tushar, M. G., Chakraborty, S., & Fahid, K. M. (2023). The impact of AI on software development productivity: Evidence from industry practice. Journal of Software Engineering Research and Development, 11(2), 45-62.

Alashrah, Y., Jiang, N., Raji, I. D., & Ahmed, T. (2023). An empirical study on code clone detection in AI-generated code. Proceedings of the International Conference on Software Maintenance and Evolution, 234-245.

Amazon Web Services. (2023). Amazon CodeWhisperer: AI-powered coding companion. Technical Documentation. Seattle: AWS.

Austin, J., Odena, A., Nye, M., Bosma, M., Michalewski, H., Dohan, D., ... & Sutton, C. (2021). Program synthesis with large language models. arXiv preprint arXiv:2108.07732.

Bai, Y., Jones, A., Ndousse, K., Askell, A., Chen, A., DasSarma, N., ... & Kaplan, J. (2022). Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862.

Becker, S., Denny, P., Finnie-Ansley, J., Luxton-Reilly, A., Prather, J., & Santos, E. A. (2023). Programming is hard-or at least it used to be: Educational opportunities and challenges of AI code generation. Proceedings of the 54th ACM Technical Symposium on Computer Science Education, 500-506.

Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., ... & Amodei, D. (2020). Language models are few-shot learners. Advances in Neural Information Processing Systems, 33, 1877-1901.

Brynjolfsson, E., & McAfee, A. (2014). The second machine age: Work, progress, and prosperity in a time of brilliant technologies. New York: W. W. Norton & Company.

Bureau of Labor Statistics. (2023). Occupational Outlook Handbook: Software Developers. U.S. Department of Labor. Washington, DC: BLS.

Butler, M., Richardson, K., & Thompson, A. (2023). Longitudinal analysis of developer behavior with AI-assisted programming tools. IEEE Transactions on Software Engineering, 49(7), 3421-3437.

Butterick, M. (2022). GitHub Copilot litigation. Class action lawsuit documentation. Retrieved from https://githubcopilotlitigation.com

Chen, M., Tworek, J., Jun, H., Yuan, Q., Pinto, H. P. D. O., Kaplan, J., ... & Zaremba, W. (2021). Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374.

Chen, X., Lin, C., Schärli, N., & Zhou, D. (2023). Teaching large language models to self-debug. Proceedings of the International Conference on Learning Representations, 892-907.

Chen, Y., & Zhang, L. (2023). Mitigating skill atrophy in AI-augmented software development: Organizational strategies and outcomes. ACM Transactions on Software Engineering and Methodology, 32(4), 1-28.

Child, R., Gray, S., Radford, A., & Sutskever, I. (2019). Generating long sequences with sparse transformers. arXiv preprint arXiv:1904.10509.

Chowdhery, A., Narang, S., Devlin, J., Bosma, M., Mishra, G., Roberts, A., ... & Fiedel, N. (2022). PaLM: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311.

Dohmke, T. (2023). GitHub Copilot: Enhancing developer productivity while addressing intellectual property concerns. GitHub Engineering Blog, 15, 23-31.

Feng, Z., Guo, D., Tang, D., Duan, N., Feng, X., Gong, M., ... & Zhou, M. (2020). CodeBERT: A pre-trained model for programming and natural languages. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, 1536-1547.

Forward, A., & Lethbridge, T. C. (2002). The relevance of software documentation, tools and technologies: A survey. Proceedings of the 2002 ACM symposium on Document engineering, 26-33.

Guo, D., Ren, S., Lu, S., Feng, Z., Tang, D., Liu, S., ... & Zhou, M. (2021). GraphCodeBERT: Pre-training code representations with data flow. Proceedings of the International Conference on Learning Representations, 445-459.

Hellendoorn, V. J., Sutton, C., Singh, R., Maniatis, P., & Bieber, D. (2020). Global relational models of source code. Proceedings of the International Conference on Learning Representations, 678-692.

Jelinek, F., & Mercer, R. L. (1980). Interpolated estimation of Markov source parameters from sparse data. Proceedings of the Workshop on Pattern Recognition in Practice, 381-397.

Kalliamvakou, E., Signorini, A., & Gousios, G. (2023). Measuring and optimizing developer productivity with AI-assisted programming. Communications of the ACM, 66(8), 89-97.

Katharopoulos, A., Vyas, A., Pappas, N., & Fleuret, F. (2020). Transformers are RNNs: Fast autoregressive transformers with linear attention. Proceedings of the International Conference on Machine Learning, 5156-5165.

Kim, S., Zhao, J., Tian, Y., & Chandra, S. (2023). Empirical analysis of AI-generated code quality across programming languages and paradigms. Empirical Software Engineering, 28(4), 1-34.

Kocetkov, D., Li, R., Allal, L. B., Li, J., Mou, C., Muennighoff, N., ... & von Werra, L. (2022). The stack: 3 TB of permissively licensed source code. arXiv preprint arXiv:2211.15533.

Lemley, M. A., & Casey, B. (2021). Fair learning and algorithmic copyright. Boston University Law Review, 101(3), 803-875.

Li, Y., Choi, D., Chung, J., Kushman, N., Schrittwieser, J., Leblond, R., ... & Vinyals, O. (2022). Competition-level code generation with AlphaCode. Science, 378(6624), 1092-1097.

McKinsey Global Institute. (2023). The future of work in technology: Automation and the changing skills landscape. McKinsey & Company.

Microsoft Corporation. (2023). Visual Studio IntelliCode: AI-assisted development. Redmond: Microsoft Developer Documentation.

Mintlify Inc. (2023). Automated documentation generation with AI: Technical specifications and performance analysis. San Francisco: Mintlify Technical Reports.

Morrison, P., Yoon, J., Murphy-Hill, E., & Rothermel, G. (2023). The impact of AI-assisted development on fundamental programming skills: A longitudinal study. IEEE Transactions on Software Engineering, 49(8), 4021-4035.

Nadella, S. (2023). The age of AI: Reinventing productivity and business processes. Harvard Business Review, 101(3), 44-52.

Nijkamp, E., Pang, B., Hayashi, H., Tu, L., Wang, H., Zhou, Y., ... & Xiong, C. (2023). CodeGen: An open large language model for code with multi-turn program synthesis. Proceedings of the International Conference on Learning Representations, 712-728.

OpenAI. (2023). GPT-4 technical report. arXiv preprint arXiv:2303.08774.

Pearce, H., Ahmad, B., Tan, B., Dolan-Gavitt, B., & Karri, R. (2022). Asleep at the keyboard? Assessing the security of GitHub Copilot's code contributions. 2022 IEEE Symposium on Security and Privacy, 754-768.

Peng, S., Kalliamvakou, E., Cihon, P., & Demirer, M. (2023). The impact of AI on developer productivity: Evidence from GitHub Copilot. Nature Human Behaviour, 7(6), 826-835.

Radford, A., Narasimhan, K., Salimans, T., & Sutskever, I. (2018). Improving language understanding by generative pre-training. OpenAI Technical Report.

Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., & Sutskever, I. (2019). Language models are unsupervised multitask learners. OpenAI Technical Report.

Raji, I. D., Kumar, I. E., Horowitz, A., & Selbst, A. (2022). The fallacy of AI functionality. Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 959-972.

Sandoval, G., Pearce, H., Nys, T., Karri, R., Garg, S., & Dolan-Gavitt, B. (2023). Security implications of large language model code assistants: A user study. 31st USENIX Security Symposium, 2205-2222.

Sarkar, A., Gordon, A. D., Negreanu, C., Poelitz, C., Ragavan, S., & Zorn, B. (2022). What is it like to program with artificial intelligence? Proceedings of the 2022 ACM SIGPLAN International Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software, 1-31.

Schafer, M., Thompson, D., & Williams, K. (2023). Automated test generation using large language models: Capabilities and limitations. Software Testing, Verification and Reliability, 33(5), 312-328.

Shannon, C. E. (1948). A mathematical theory of communication. Bell System Technical Journal, 27(3), 379-423.

Stack Overflow. (2023). 2023 Developer Survey Results. Stack Overflow Insights. New York: Stack Overflow Inc.

Tabnine Ltd. (2023). AI-powered code completion for enterprise development teams. Technical Whitepaper. Tel Aviv: Tabnine.

Vaithilingam, P., Zhang, T., & Glassman, E. L. (2022). Expectation vs. experience: Evaluating the usability of code generation tools powered by large language models. Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, 1-23.

Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., ... & Polosukhin, I. (2017). Attention is all you need. Advances in Neural Information Processing Systems, 30, 5998-6008.

Wachter, S., & Mittelstadt, B. (2019). A right to reasonable inferences: Re-thinking data protection law in the age of big data and AI. Columbia Business Law Review, 2019(2), 494-620.

Wang, Y., Wang, W., Joty, S., & Hoi, S. C. (2021). CodeT5: Identifier-aware unified pre-trained encoder-decoder models for code understanding and generation. Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 8696-8708.

Wang, Z., Liu, H., Chen, Y., & Zhang, M. (2023). Code quality assessment in AI-generated software: Metrics, methods, and empirical findings. ACM Transactions on Software Engineering and Methodology, 32(3), 1-31.

Wei, J., Tay, Y., Bommasani, R., Raffel, C., Zoph, B., Borgeaud, S., ... & Fedus, W. (2022). Emergent abilities of large language models. arXiv preprint arXiv:2206.07682.

Weisz, J. D., Muller, M., Houde, S., Richards, J., Ross, S. I., Martinez, F., ... & Geyer, W. (2023). Better together? An evaluation of AI-supported code review. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 1-18.

Ziegler, A., Kalliamvakou, E., Li, X. A., Rice, A., Rifkin, D., Simister, S., ... & Yee, E. (2022). Productivity assessment of neural code completion. Proceedings of the 6th ACM SIGPLAN International Symposium on Machine Programming, 21-29.

Downloads

Published

2025-07-24

How to Cite

GENERATIVE ARTIFICIAL INTELLIGENCE IN THE TRANSFORMATION OF SOFTWARE DEVELOPMENT PROCESSES: OPPORTUNITIES, CHALLENGES, AND IMPACTS ON PRODUCTIVITY. (2025). Aurum Revista Multidisciplinar, 1(5), 201-219. https://doi.org/10.63330/armv1n5-016