Rise of AI
In recent years, the landscape of software development and engineering has been undergoing what feels like a transformative shift, thanks to the rise of artificial intelligence (AI) tools like ChatGPT. From customer service and content creation to language translation and data analysis, AI opens up a seemingly limitless array of possibilities.
Within this post, let’s focus on the ways AI can change how developers approach problem solving, coding, debugging, and collaborating. The examples will specifically use ChatGPT-4. The competing AI tools have many similarities; the main ones compared here are:
ChatGPT 3.5 (Free)
ChatGPT 4/4o (Plus)
GitHub Copilot
Google Gemini (Formerly Bard)
Claude (by Anthropic)
We’ll do a more detailed comparison of their functional differences later in the article.
Unlocking ChatGPT’s Strengths
From a development standpoint, AI is exceptional at removing redundancies and allowing developers to focus on the big picture. Keep in mind, ChatGPT (alongside many of its AI counterparts) is a conversation-style interface. It is fully capable of, and in fact encourages, building off of initial prompts. In short: the more context you build, the closer ChatGPT gets to a final product that fits your exact scenario.
Take for example the starting prompt:
“Can you make me a script that will iterate through a CSV format file. Grab the 5th column of each line which represents a dollar amount.”
Here’s how ChatGPT responded:
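The exact response varies from run to run, but it generally looks something like this minimal sketch (the file name, dollar-sign stripping, and summing behavior are illustrative assumptions on my part, not ChatGPT’s verbatim output):

```python
import csv

def sum_fifth_column(path):
    """Sum the dollar amounts found in the 5th column of a CSV file."""
    total = 0.0
    with open(path, newline="") as f:
        for row in csv.reader(f):
            if len(row) >= 5:
                # 5th column is index 4; strip any "$" before converting
                amount = row[4].replace("$", "").strip()
                total += float(amount)
    return total

if __name__ == "__main__":
    print(f"Total: ${sum_fifth_column('data.csv'):.2f}")
```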
It provides a clear solution with comments included in a Python script and instructions on how to use it! But let’s say that’s not quite what you wanted, and instead you need the script in Bash due to environment limitations. Let’s ask ChatGPT: “Can you rewrite this logic in bash instead?”
In a matter of seconds, it pivots to your follow-up prompt and generates a new response.
Understanding Context Example
One of the attributes of AI that does not get as much attention is its handling of context. AI oftentimes (deservedly so) gets credit for its ability to create from scratch. However, AI also remembers your conversations. It will not only generate a solution for you, but it will do it in a way that fits into the narrative of the rest of your conversation.
Imagine you are developing code for cloud infrastructure that involves different policies and resources, such that the components can correctly and securely talk to each other. By utilizing a conversation format in ChatGPT, the context recognition is smart enough to understand, “your serverless function needs to pull files from your blob storage.” ChatGPT can build the logic to pull the file in your serverless function, but it can also build out the infrastructure code, because it understands that “we will need to update any security groups and policies to allow the two resources to communicate in your private network.”
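To make that concrete, here’s a minimal sketch of the kind of serverless handler that first ask might produce, assuming AWS Lambda and S3 play the roles of the serverless function and blob storage (the bucket and key names are placeholders, not values from the conversation above):

```python
import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    """Minimal AWS Lambda handler that pulls a file from an S3 bucket."""
    # Bucket and key are illustrative defaults, overridable via the event.
    bucket = event.get("bucket", "my-data-bucket")
    key = event.get("key", "incoming/report.csv")

    obj = s3.get_object(Bucket=bucket, Key=key)
    body = obj["Body"].read().decode("utf-8")

    # ...process the file contents here...
    return {"statusCode": 200, "bytes_read": len(body)}
```

The follow-up about security groups and policies would land on the infrastructure-as-code side, which is where the next example goes.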
One of the best ways to utilize AI in a workflow is as a replacement for Googling for answers and sifting through results that are merely similar to your scenario: instead, get ChatGPT to generate a response tailored to you. This dramatically improves your starting point and saves a ton of time piecing together Stack Overflow posts.
Example use case: Infrastructure as Code (Terraform)
Let’s dive in and see how the AI’s generative answers can vary!
Let’s do a quick comparison of some of the AI tools to see how they respond to the prompt:
“In AWS + Terraform, can you create me the infrastructure to stand up an EC2 instance that can host a webapp written in NodeJS?”
ChatGPT 3.5
ChatGPT 4.0
Google Gemini/Bard
Claude AI
As far as general notes go, all of the generative AI tools produced code that was more or less in the same ballpark. They all came with their own “flavors” of instructions on how to use the code, but the underlying code was very similar.
For the specifics:
ChatGPT 3.5: Presented the code via step-by-step instructions (Create the VPC, now create the subnets, etc.). Easy to follow. Comparatively slower to generate the text than its counterparts.
ChatGPT 4.0: Presented the instructions in a recipe format first, then the code in one block. It followed with an explanation on why it generated the code that it did. Also comparatively slower to generate the text than its counterparts.
Google Gemini/Bard: The text appeared basically instantly (faded in) and much like ChatGPT 4.0, it gave a block of code and then a small explanation following.
Claude AI: Same comments as Google Gemini/Bard, but includes syntax highlighting (only one that does for Terraform!).
Example use case: System Design (AWS)
Let’s make things a bit more interesting. Creating a webapp on an EC2 is pretty straightforward, so it makes sense that the solutions are pretty close to one another. What about a more open-ended question about system design?
Let’s consider the prompt:
“Given an AWS Cloud Provider, I want to create a design that ingests CSV records from an external data source. It will then take the data and process each comma-delimited value into a database table. I want the database table to have at least 2 availability zones for durability. For performance, I want recently ingested data to have quicker lookup times, but older data will be queried much less frequently (I still want to store it in a database though). What AWS resources and connections should I use to build this design?”
ChatGPT 3.5
ChatGPT 4.0
Google Gemini/Bard
Claude AI
Observations
The outcome of this prompt shows much more variance than the first use case and illustrates why AI isn’t going to take over the job market in the near future (hopefully). The solutions provided by all of the different AIs will technically work. However, each one presents a slightly different approach or technology that could change the efficacy of the solution quite a bit.
Neither ChatGPT solution suggested AWS Glue, a tool known for its Extract-Transform-Load (ETL) capabilities, even though it could be a perfectly viable approach to the problem.
Claude AI suggested Amazon ElastiCache; while it is extremely performant for frequently recurring queries, it could be quite expensive depending on the volume of data that needs to be stored.
Google Gemini/Bard recommended DynamoDB as the primary database, and while it checks a lot of the boxes for durability, availability, and query speed, we might be able to improve performance further with a relational database rather than a NoSQL solution.
Situations with no “one-size-fits-all” solution make AI respond with more variance, and these are exactly the situations where the insight of experienced architects is most beneficial. Even though the solutions do technically work, some approaches here would work “better” than others depending on what matters most: cost, performance, how you intend to interact with the data, and so on.
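To make the hot/cold idea concrete, here is a minimal sketch of the lookup pattern all of the suggestions were effectively circling: check a fast store for recently ingested records first, and fall back to cheaper archival storage for older ones. The DynamoDB table, S3 bucket, and key layout below are illustrative placeholders, not any one tool’s answer:

```python
import json
import boto3

# Hot tier: DynamoDB table holding recently ingested records.
# Cold tier: S3 bucket holding older records as JSON objects.
dynamodb = boto3.resource("dynamodb")
hot_table = dynamodb.Table("recent_records")
s3 = boto3.client("s3")
COLD_BUCKET = "archived-records"

def lookup(record_id):
    # Try the fast store first...
    resp = hot_table.get_item(Key={"record_id": record_id})
    if "Item" in resp:
        return resp["Item"]
    # ...then fall back to the archival store.
    try:
        obj = s3.get_object(Bucket=COLD_BUCKET, Key=f"records/{record_id}.json")
        return json.loads(obj["Body"].read())
    except s3.exceptions.NoSuchKey:
        return None
```

Whether the hot tier should be DynamoDB, a relational database, or ElastiCache in front of either is exactly the kind of trade-off the tools disagreed on.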
Tips to Consider
ChatGPT remembers
Context was touted earlier in this post as an underrated feature, but it is also important to be aware of your current context state. If you find yourself talking to ChatGPT and you feel like it’s just not really “getting you,” you may want to open a new conversation to give it a fresh start.
Because ChatGPT generates its responses based on residual context, earlier parts of the conversation can shape an answer in ways that are no longer relevant to (and can even negatively impact) what you’re asking. For better or for worse, ChatGPT will respond differently depending on the state of the conversation, even if you repeat a prompt word for word. Sometimes starting a new conversation is the easiest way to get a blank slate when you find yourself heading down a route that no longer feels applicable.
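The web interface hides the mechanics, but the underlying chat API makes it easy to see why this happens: every reply is generated from the full message history that gets sent along, so stale turns keep influencing new answers. A minimal sketch using the OpenAI Python SDK (the model name and prompts here are placeholders):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The entire message history is sent with every request, so old,
# no-longer-relevant turns still influence the next answer.
history = [
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Write a Bash script that parses a CSV file."},
    # ...many more turns about Bash and CSVs...
    {"role": "user", "content": "Now help me center a div with flexbox."},
]

response = client.chat.completions.create(model="gpt-4o", messages=history)
print(response.choices[0].message.content)

# "Opening a new conversation" is effectively starting over with a
# fresh, minimal history like this:
fresh_history = [
    {"role": "user", "content": "Help me center a div with flexbox."},
]
```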
Trust, but Verify
As great as AI tools are, it’s essential not to run blindly with whatever the response delivers. AI is not perfect, nor is it advertised to be. In fact, the AI companies encourage you to check their work before you use their information.
This is very similar to the current state of Full Self-Driving technology in vehicles. The technology can likely do 80%+ of what it’s being asked to do. However, you shouldn’t take your hands off the wheel and metaphorically fall asleep behind it (which, in the case of driving, is also illegal).
AI being AI, it will also sometimes do things that simply do not make sense. As the “driver” using the code, it is our job to recognize these things when they happen and make the correction. As a visual example, see the image below:
Although ChatGPT’s image generation did a great job creating a “personified farm tractor,” it also added an extra pair of eyes in the headlights. It’s easy to spot strange behavior like this in an image, but it does happen in more complex code snippets too (e.g. using outdated libraries, using an inefficient data structure, etc.). So, it’s our job to keep an eye out to keep ChatGPT in check.
Giving a good starting point
If you give ChatGPT the prompt of:
“Write me code to make an array of 100 randomly generated IDs. As you’re creating the IDs, insert them into the array, but first make sure that there are no duplicates in the array.”
ChatGPT will do exactly that, and little more. If it writes the code in Python, it may or may not use a set data structure (an abstract data type that stores unique values without any particular order), and it may use a generic random number generator to produce the IDs instead of a library better suited to the job.
By providing more details in the starting prompt, e.g. “Write me code in Python to make an array of random UUIDs using the UUID library. As you’re creating the IDs, use Sets to check for duplicates,” ChatGPT will better understand the approach you’d like to take as opposed to just prioritizing finding a solution that “works.”
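For reference, the refined prompt should produce code roughly along these lines (a minimal sketch, not ChatGPT’s verbatim output):

```python
import uuid

# Generate 100 unique IDs, using a set to guard against duplicates.
# (uuid4 collisions are astronomically unlikely, but the prompt asked
# for an explicit check while building the array.)
seen = set()
ids = []
while len(ids) < 100:
    new_id = str(uuid.uuid4())
    if new_id not in seen:
        seen.add(new_id)
        ids.append(new_id)

print(f"Generated {len(ids)} unique IDs")
```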
Providing perspective
From a strictly development standpoint, perspective may not be the most applicable concept because “working code” is usually “working code.” However, in your prompts, you can instruct ChatGPT to consider your prompt from a specific point of view.
For example, “explain to me the importance of how to incorporate accessibility options into a webpage from the point of view of a Product Owner” would yield a slightly different response than if you were to ask from the “point of view of a Software Engineer.”
And these are a few of my favorite things
Debugging: I won’t go into the exact count, but there’s a nonzero number of instances where a minuscule stray character or special character broke my code in a way that was hard to identify. A quick copy/paste of the problematic file or function, along with a short description of the behavior, gets you a sanity scan to make sure you didn’t miss anything obvious.
Documentation: probably considered a trigger word for a lot of developers. I wouldn’t necessarily copy/paste the documentation ChatGPT produces verbatim, but it does give you a useful starting point to add context to.
Testing: sorry, probably another trigger word. Testing can be tedious, but necessary. Much like documentation, you can describe what kind of test you are looking for (e.g. “make a test in Jest for me for this function <copy paste the function code>”) and it will take out a lot of the pain points of adding tests to your application (see the sketch at the end of this list).
Simplify: I am guilty sometimes of seeing a large block of text and going “well, I don’t really want to read that.” Or sometimes reading said large block of text and not absorbing anything. ChatGPT can receive a large block of text (or, on ChatGPT-4, you can even screenshot it and let its OCR read it for you if you’re feeling especially lazy) and break the information down into more digestible chunks. Some examples: summarizing articles, verbose emails, etc.
Spaghetti Method: There’s probably a more scientific name for this concept, but when I say “Spaghetti Method,” I refer to a situation in which you have a lot of ideas, but you don’t really know how best to put them together into a coherent system (or you simply don’t feel like making the effort to). ChatGPT does a surprisingly good job at “untangling” your web of thoughts and putting together a response that fits.
For example, consider the prompt, “I’m looking for a resource that utilizes a Cloud infrastructure. It should also be serverless. Scales easily. I want to be able to deploy apps that can easily communicate with other resources in the Cloud infrastructure. What are some tools that allow me to do this?”
An English teacher would probably lose their hair over the structure of those sentences, but ChatGPT can take those thoughts and give you a slew of options that might fit what you’re looking for. For those wondering, it recommended: AWS Lambda, Google Cloud Functions, Azure Functions, IBM Cloud Functions, Amazon API Gateway, Google Cloud Endpoints, and Azure Event Grid as potential solutions to that prompt.
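As for the testing item above, here is roughly the shape of what a generated test can look like, shown with pytest rather than Jest, and with an illustrative function and test names of my own choosing:

```python
import pytest

# A small function we might ask ChatGPT to write tests for.
def parse_dollar_amount(raw):
    """Convert a string like "$1,234.56" into a float."""
    return float(raw.replace("$", "").replace(",", "").strip())

# The kind of tests ChatGPT typically generates from a prompt like
# "make a test for this function <pasted code>".
def test_parses_plain_number():
    assert parse_dollar_amount("42") == 42.0

def test_strips_symbols_and_commas():
    assert parse_dollar_amount("$1,234.56") == pytest.approx(1234.56)

def test_raises_on_non_numeric_input():
    with pytest.raises(ValueError):
        parse_dollar_amount("not a number")
```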
Uh oh, is my job on the line?
So, what’s the catch? Are all software developers at risk of losing their jobs to AI? The short version of my answer is: no (not yet, at least). I find that these AI tools work best when you already know a fair amount (i.e., have experience) about the final product you want to build.
Take, for instance, a scenario where somebody comes to you and asks you to build a car. You can add whatever components you want to make the car as effective as possible. As the car’s creator, it is now your job to decide what it actually means to be “effective.” How will the car be used? Are there certain parts that are better for the job than others? How reliable does the car need to be? All relevant questions for the job.
ChatGPT will likely be able to make such a car to “acceptable” specifications. However, neither ChatGPT nor AI in general is quite at a place where it has a one-up on someone with years of experience in the specific field. For instance, a veteran mechanic has know-how that goes beyond just putting components together to build a car. In the realm of software engineering, this may take the form of:
Using certain databases in a specific environment because of compliance, performance, or compatibility with certain proprietary software/data
Opting for different cloud infrastructure/design because you have to adhere to customer constraints
Using React over Angular because you have the domain knowledge that it’s just better (joking, joking)
Conclusion
The integration of ChatGPT into software development represents a significant leap forward in the field of artificial intelligence. Tools like ChatGPT are not only enhancing productivity and efficiency, but also reshaping the way developers approach problem-solving and innovation.
ChatGPT effectively replaces the time typically spent searching Google for insight or troubleshooting issues and instead provides a more thorough and informative solution. This represents a significant improvement in efficiency: ChatGPT saves a substantial amount of time that would otherwise be used to piece together different Google results into a solution tailored to the user’s prompt.
Consider a scenario involving the development of a user interface for a frontend project, where the flexbox is not behaving as expected. Typically, one might use Google to search for the correct way to create a flexbox in CSS to resolve the issues in the code. However, with AI, the process becomes more streamlined. The user can simply copy and paste the problematic code and explain their issue, such as “I am trying to get my flexbox to align these items centrally to the parent container and it’s not working with this code.” ChatGPT will analyze the code and provide potential solutions or identify the actual problem and explain why it is not functioning as expected. This method proves to be a powerful time-saver, especially over the course of consistent integration into a developer’s workflow.
The current state of AI is one of rapid growth and transformation, with ChatGPT serving as a prime example of how natural language processing and machine learning are becoming indispensable in the tech industry.
Although AI is not expected to replace the jobs of software engineers in the near future, the benefits of integrating AI into existing workflows are substantial. The advantages of working alongside AI to accelerate redundant and tedious tasks, as well as to address more complex design questions, will likely improve as AI technology matures and absorbs more data. The advancements between ChatGPT-3.5 and ChatGPT-4 alone demonstrate the rapid evolution of this technology.
The software landscape is continuously shifting, and it is anticipated that software developers will adapt their workflows to become more effective in response to these changes.
About the Author
El Park is a "seasoned" software engineer with over 12 years of experience in the tech field, spanning both commercial and government sectors. His professional journey has led him to explore a diverse range of areas, from IT and full-stack development to DevOps and cloud consulting.
When he's not immersed in coding or prioritizing family time, he's looking into the latest tech trends or finding new outdoor adventures.