Our CTO’s Experience Using Generative AI for Coding

Evan Davis, our CTO, has leveraged the latest in AI technology to solve real-world problems. He recently discussed his experiences using generative AI for coding work with the Skim AI team, sharing his personal insights into the technology’s practical applications and potential pitfalls. With this conversation and the resulting blog post, we aim to provide an inside look at this rapidly changing environment, and specifically at why coding will never be the same in the era of easy-to-use generative AI.



The Potential and Limitations of Generative AI in Coding

Generative AI, a subset of artificial intelligence, focuses on creating new content, from artwork to code, by learning from existing examples. In Evan’s recent experience with top generative AI models like ChatGPT (GPT-3.5), GPT-4, GitHub Copilot, Amazon CodeWhisperer, Bard, and more, the technology has proven to be a valuable ally in coding tasks. That is worth stating up front, before diving into the limitations: even in its current state, generative AI is making a real impact on coding, and it will only continue to evolve and improve over time.


For those with some coding knowledge, the technology can provide a significant head start, potentially handling around 80% of a task and making the debugging process more manageable. This aligns with the view of industry experts who believe generative AI will revolutionize the way we approach coding, speeding up the process and reducing human error.


However, as with any emerging technology, generative AI is not without its limitations. In his personal experience, Evan noted that the technology occasionally generated code whose logic did not hold up, particularly with less common libraries. The AI would call phantom functions – functions that didn’t exist in the library – leading to confusion and additional time spent deciphering the outputs. This is a known challenge in the field, and researchers continue to refine AI models to address such issues.
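
To make this failure mode concrete, here is a small, hypothetical sketch (not taken from Evan’s projects) using the pandas library: the assistant proposes a method name that sounds plausible but does not exist, and the mistake only surfaces as a runtime error.

```python
import pandas as pd

df = pd.DataFrame({"user": ["ana", "ana", "ben"], "score": [10, 10, 7]})

# A plausible-sounding call an assistant might invent:
# df = df.remove_duplicates()   # AttributeError: no such method on DataFrame
# The method that actually exists in pandas:
df = df.drop_duplicates()

print(df)
```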


The Role of GitHub Copilot and Its Potential Impact

Evan also shared his experience with GitHub Copilot, an AI-powered coding assistant that suggests line-by-line code. He found it less likely to go off the rails, as it operates within the boundaries of the user’s existing code.


Developed by GitHub and OpenAI, Copilot grounds each suggestion in the context already present in the editor – the surrounding code, comments, and names the developer has written – making it less likely to produce erroneous outputs.
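
As a rough illustration (not actual Copilot output), the sketch below shows how context-grounded suggestion works: given a signature and docstring already sitting in the editor, the assistant proposes a body line by line, staying within the names the file already defines.

```python
def normalize_scores(scores: list[float]) -> list[float]:
    """Scale scores to the 0-1 range."""
    # From the signature and docstring above, an assistant would typically
    # suggest a body along these lines:
    low, high = min(scores), max(scores)
    if high == low:
        return [0.0 for _ in scores]
    return [(s - low) / (high - low) for s in scores]
```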


The upcoming release of Copilot X, which integrates GPT-4 into Visual Studio Code and adds voice control, is a major advancement in the field. Acting as a voice-activated coding assistant, it marks a significant step towards integrating AI more fully into our coding workflows, a trend that is gaining traction across the tech industry.

Generative AI: Today’s Rosetta Stone

During the team’s conversation about generative AI, CEO Greggory Elias drew an interesting analogy between AI-generated code and the Rosetta Stone, illustrating the capabilities and constraints of the technology. Just as the Rosetta Stone served as a translation mechanism for ancient languages, generative AI can be seen as a translator for coding languages.


Generative AI, in the context of coding, functions much like a translator. It works to understand the syntax, conventions, and idioms of one language and recreate them in another. It goes beyond a simple word-for-word translation, aiming to capture the intent, logic, and structure of the original code.
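
Here is a minimal sketch of that translation idea, assuming the pre-1.0 openai Python client and an OPENAI_API_KEY in the environment; the model name and prompt wording are illustrative, not the exact setup Evan used.

```python
import os
import openai  # pre-1.0 interface of the openai package (assumption)

openai.api_key = os.environ["OPENAI_API_KEY"]

PYTHON_SNIPPET = """
def factorial(n):
    return 1 if n <= 1 else n * factorial(n - 1)
"""

# Ask the model to act as a Rosetta Stone: carry the intent, logic, and
# structure of the function into another language, not just the keywords.
response = openai.ChatCompletion.create(
    model="gpt-4",  # illustrative model name
    messages=[
        {"role": "system",
         "content": "You translate code between languages, preserving intent and idioms."},
        {"role": "user",
         "content": f"Translate this Python function into idiomatic JavaScript:\n{PYTHON_SNIPPET}"},
    ],
)

print(response["choices"][0]["message"]["content"])
```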


However, the accuracy and effectiveness of this translation process depend significantly on the availability and richness of the specific coding language or library in the AI’s training data. For instance, if the AI has been extensively trained on Python and Java, but has seen very little of a less common language like Erlang, it may struggle to accurately generate or translate Erlang code.

The Quality of Training Data: A Critical Factor

Evan also emphasized the importance of the quality of the training data that feeds into AI models. The internet is awash with code of varying quality, and ensuring that an AI model is trained on high-quality code is a significant challenge. When it isn’t, the model can produce suboptimal solutions: code that functions, but is overly simplistic.
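
As a hypothetical illustration of “it works, but it is simplistic”: a model that has absorbed a lot of low-quality tutorial code might offer the quadratic check below, while the idiomatic answer is a single pass over a set.

```python
def has_duplicates_naive(items):
    # Functional, but compares every pair: O(n^2), the kind of pattern
    # that low-quality training examples tend to reinforce.
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False


def has_duplicates(items):
    # The idiomatic version: building a set makes this a single O(n) pass.
    return len(set(items)) != len(items)
```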


For example, consider a generative AI model trained to assist users in generating SQL queries. In the past, crafting the right SQL query required years of specialized knowledge; now, with the help of generative AI, hundreds of thousands of users can achieve the same result in seconds. This is a monumental stride in democratizing knowledge and capability. The principle extends beyond SQL into everyday tools such as Microsoft Excel and business intelligence platforms, where users can be assisted in creating complex spreadsheets, charts, and dashboards with ease and precision – work that would typically demand advanced knowledge and experience.
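
A minimal sketch of that idea, using Python’s built-in sqlite3 module and an invented orders table; the “generated” query simply stands in for what an assistant might return for a plain-English request.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer TEXT, amount REAL);
    INSERT INTO orders VALUES
        ('Acme', 120.0), ('Acme', 80.0), ('Globex', 150.0), ('Initech', 40.0);
""")

# Plain-English request: "Show total order value per customer, highest first."
# The kind of SQL an assistant might generate from that request:
GENERATED_SQL = """
    SELECT customer, SUM(amount) AS total_value
    FROM orders
    GROUP BY customer
    ORDER BY total_value DESC;
"""

for customer, total in conn.execute(GENERATED_SQL):
    print(customer, total)
```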


However, if a library is poorly documented or the AI’s training data did not include comprehensive examples from that library, the AI might produce less accurate or less functional code. This insight parallels broader observations made by AI researchers – an AI model’s performance is strongly contingent on the richness, diversity, and quality of the data it has been trained on.


In essence, generative AI, much like a human coder, works best with languages and libraries it is most familiar with. This underscores the importance of comprehensive, high-quality training data in AI model development, as well as the need for ongoing training and refinement as new languages emerge and existing ones evolve. In the grand scheme of things, that is what continuously improves the AI’s ability to assist users across a wide range of tasks, enhancing their efficiency and productivity.

Ushering in a New Era

Evan’s experiences with generative AI offer a fascinating glimpse into both the immense potential and the inherent challenges of this technology. As the field of AI continues to evolve, it is experiences like these that will guide its development, ensuring it becomes an even more effective tool for solving real-world problems.

Remarkably, generative AI models like ChatGPT have begun to significantly reduce the need for countless coding hours and years of specialization in particular libraries, for programmers and end users alike. This transformation represents a monumental leap forward in the accessibility and usability of programming.


A look at the findings from our article “ChatGPT and Bard Spell Danger for Coders” underlines this point. Both AI models showcased their capacity to generate code effectively, potentially eliminating countless hours of human coding effort and delivering remarkable results. As these technologies continue to improve, we can expect an even greater impact on the ease and speed of programming tasks.


The emergence and continuous development of generative AI could usher in a new era in the world of programming, characterized by accelerated innovation and inclusivity. By reducing the barriers to entry and simplifying the coding process, these advanced AI models promise a future where coding expertise is no longer a prerequisite to building digital solutions. As we navigate this exciting future, we must keep our fingers on the pulse of this rapidly evolving technology and continue to leverage its transformative potential.
