Using ChatGPT day-to-day will become a reality for developers, particularly with the advent of new tools like GitHub Copilot X (a GPT-4 powered GitHub Copilot). Eventually, these tools will be incorporated directly into a developer's IDE, and it will feel unnatural not to have access to them.
We will need to adapt our interview process accordingly. Asking candidates to limit ChatGPT usage in the interview process will be akin to asking developers to code in a Google Doc - it just feels wrong.
However, some companies are attempting to proctor and limit ChatGPT usage rather than embracing it. Platforms such as CodeSignal and HackerRank focus on restricting ChatGPT access and continue to provide candidates with LeetCode-style questions. The issue with this approach is that these questions will become even more obsolete given how easy they are to solve with ChatGPT. Moreover, rigorous proctoring measures increase candidates' anxiety. This stress can adversely affect their performance, resulting in false negatives and ultimately undermining the purpose of these tests.
Instead of taking this approach, I want to discuss how you can craft a take-home assessment that embraces the use of ChatGPT.
Assess new skills
Traditional assessments often focus on evaluating a candidate's ability to produce correct solutions, which can be easily solved using tools like ChatGPT. Instead of solely relying on this approach, it is important to assess a broader skillset that embraces ChatGPT usage.
As generative AI tools become widely adopted, the future of coding will change. Developers are likely to concentrate less on writing code and more on critically analyzing the output of tools like ChatGPT to ensure the generated code is well-written and addresses the specific problem at hand.
Given this shift, code review will become a more important skill, as developers will increasingly read and vet ChatGPT-written code. So, consider asking candidates to review code instead of just asking them to write it.
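As an illustration, here is the kind of short review exercise you might hand a candidate (a hypothetical snippet, not from any real assessment): ask them to critique the first function and propose a fix.

```python
# Hypothetical review exercise: candidates are asked to identify the flaw
# in this function and suggest an improvement.

def add_item(item, items=[]):  # flaw: the mutable default list is shared
    items.append(item)         # across calls, so items accumulate
    return items

# A revised version a strong candidate might propose:
def add_item_fixed(item, items=None):
    if items is None:
        items = []  # fresh list on each call
    items.append(item)
    return items
```

An exercise like this rewards candidates who can spot subtle defects, a skill that matters whether the original author was a colleague or ChatGPT.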
Test your questions with ChatGPT
As the tool becomes more popular, it’s important that we test our interviewing questions with ChatGPT. If you are looking for one “correct” answer from a candidate in your interview process, and ChatGPT can easily provide that answer, your interview process is broken.
ChatGPT has numerous limitations, and it is important to design interview questions that take these constraints into account. Just as we developed questions that weren't easily searchable on Google when it first emerged, we must employ the same rationale with ChatGPT. Our questions must not be effortlessly solvable using ChatGPT.
Limitations of ChatGPT
As you test your question with ChatGPT, it is helpful to know where the tool is lacking, so you can create a question that is not easily solvable by ChatGPT. Here are a few limitations:
- It doesn’t always produce working code. For some questions where we have provided it with more context (like a starting codebase), it consistently produces buggy code. The more context you incorporate and the more unique the problem, the more likely this is to happen.
- The code it writes can be poorly written. In one example, we gave candidates a question that already contained poorly written code and asked them to add an additional feature. If the candidate just copies the question into ChatGPT, the solution will continue to use the poor coding conventions of the original starting codebase.
- ChatGPT's knowledge cutoff is September 2021, which is outdated in the rapidly evolving software engineering field. Consider asking questions requiring more current knowledge. This could mean providing a starting codebase with newer libraries; because ChatGPT doesn't know them, it will tend to produce buggy solutions.
- Lastly, consider asking a more ambiguous question with multiple correct answers. In this case, candidates will need to consider tradeoffs even if they use a tool like ChatGPT to help them.
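To make the last point concrete, consider a hypothetical prompt like "build a rate limiter" (an illustrative example, not one of our actual questions). It has several correct designs - fixed window, sliding window, token bucket - so even a candidate using ChatGPT must weigh tradeoffs and defend a choice. One possible sketch:

```python
import time

# One of several valid designs: a fixed-window counter.
# Tradeoff: simple and memory-cheap, but allows bursts at window
# boundaries; a sliding window or token bucket is smoother but costlier.
class FixedWindowLimiter:
    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.window_start = time.monotonic()
        self.count = 0

    def allow(self):
        now = time.monotonic()
        if now - self.window_start >= self.window:
            # New window: reset the counter
            self.window_start = now
            self.count = 0
        if self.count < self.limit:
            self.count += 1
            return True
        return False
```

What you evaluate here is not the code itself but whether the candidate can articulate why they picked this design over the alternatives.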
So there you have it! If you want some help with creating a take-home assessment, don’t hesitate to set up a call with our team here.