I build software. I also use AI to help me build software. So I have opinions about this, and they are more mixed than the marketing materials would suggest.
The short version: AI coding tools are useful. They are not magic. They are really good at some things and surprisingly bad at others, and the gap between the two is wider than most people realize.
The big names are GitHub Copilot, Cursor, and Claude Code (by Anthropic). There are others — Codeium, Tabnine, Amazon CodeWhisperer — but those three get most of the attention.
Copilot, which runs on OpenAI's models, launched in technical preview in 2021 and became generally available in 2022. It works inside your code editor and suggests code as you type, like autocomplete on steroids. GitHub says it had over 1.8 million paying subscribers as of early 2025.
Cursor is a code editor built from the ground up around AI. It can see your entire codebase and make changes across multiple files at once. It has a strong following among developers who want deeper integration than what Copilot offers.
Claude Code runs in the terminal and can read, write, and test code across a whole project. It is more like having an assistant that works alongside you than an autocomplete tool.
Boilerplate. The repetitive stuff that every project needs but nobody enjoys writing. Config files, test scaffolding, data model definitions, API endpoint wiring. AI is genuinely good at this, and it saves real time.
I used to spend 20 minutes setting up a new API endpoint with all the routing, validation, and error handling. Now I describe what I want and get a working first draft in about a minute. I still read every line before committing it, but the starting point is solid.
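To give a sense of what that first draft looks like, here is a minimal, framework-free sketch of the validation and error handling such a prompt might produce. The route, field names, and rules are hypothetical, not from any real project:

```python
import json

def create_user(raw_body: str) -> tuple[int, dict]:
    """Handle a hypothetical POST /users: parse, validate, respond (status, payload)."""
    try:
        body = json.loads(raw_body)
    except json.JSONDecodeError:
        return 400, {"error": "body must be valid JSON"}

    name = body.get("name")
    email = body.get("email")
    if not isinstance(name, str) or not name.strip():
        return 400, {"error": "'name' is required"}
    if not isinstance(email, str) or "@" not in email:
        return 400, {"error": "'email' must be a valid address"}

    # persistence would go here in a real endpoint
    return 201, {"name": name.strip(), "email": email}
```

Nothing here is hard, which is exactly the point: it is tedious, predictable code, and that is the sweet spot for AI generation. You still read it before committing.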
Documentation is another one. Asking AI to write docstrings and comments for existing code works surprisingly well, mostly because the AI can just read what the code does and describe it.
Architecture. If you ask an AI tool to design the structure of a complex application, you will get something that looks reasonable and falls apart under real-world conditions. It does not understand your team, your deployment constraints, your scale requirements, or the weird legacy system you need to integrate with.
Subtle bugs are another weak spot. AI-generated code often compiles and passes basic tests, but it can harbor logic errors that only surface in edge cases. I have caught bugs where the AI wrote a sorting function that worked for every test case except when two items had the same value. That kind of thing.
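As a hypothetical reconstruction of that bug class (not the actual code I caught): sorting pairs by routing them through a dict keyed on the sort value looks clever and works on unique values, but ties collide in the dict and items silently vanish.

```python
def buggy_sort(items):
    # ties collide here: two pairs with the same value share one dict key
    by_value = {value: name for name, value in items}
    return [(name, value) for value, name in sorted(by_value.items())]

def correct_sort(items):
    # sorted() is stable, so tied items also keep their original order
    return sorted(items, key=lambda pair: pair[1])

data = [("a", 2), ("b", 1), ("c", 2)]
# buggy_sort silently drops ("a", 2); every unique-valued test case passes
```

A test suite without duplicate values never catches this, which is why it survives review when the reviewer is skimming.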
The 2024 Uplevel study looked at engineering teams with and without AI coding assistants and found no statistically significant difference in pull request throughput. The teams using AI were not measurably more productive by that metric. That does not mean the tools are useless, but it does suggest the productivity gains are more nuanced than "everything is twice as fast now."
This is the part that worries me. A Stanford study found that developers using AI assistants were more likely to write code with security vulnerabilities, largely because they trusted the output and reviewed it less carefully.
AI tools are trained on public code repositories, which include plenty of insecure code. They will happily generate SQL queries vulnerable to injection, output HTML that enables cross-site scripting, or use deprecated encryption functions — all while looking perfectly professional and correct.
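Here is a sketch of the SQL injection pattern in question, using Python's built-in sqlite3 so it runs standalone. The table and query are illustrative, but the two functions look equally "professional," which is the problem:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.executemany("INSERT INTO users VALUES (?)", [("alice",), ("bob",)])

def find_user_unsafe(name):
    # Reads fine, but interpolating user input makes injection trivial
    return conn.execute(f"SELECT name FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name):
    # Parameterized query: the driver handles escaping
    return conn.execute("SELECT name FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
# find_user_unsafe(payload) returns every row; find_user_safe(payload) returns none
```

The unsafe version turns the query into `WHERE name = '' OR '1'='1'`, which matches everything. Both versions pass a test that searches for "alice".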
If you are running a website or app, the code behind it matters for your users' safety. A vulnerability in how your site handles URLs could let an attacker redirect users to phishing pages. That is the kind of thing our link checker helps catch on the user side, but it should not be there in the first place.
The Stack Overflow 2024 Developer Survey found that 76% of developers use or plan to use AI coding tools, but only 43% said they trust the accuracy of the output. That gap between adoption and trust tells you something.
You do not need to be a developer to care about this. AI-written code is in apps you use every day. When a company ships an update faster because AI helped write it, the question is whether anyone reviewed that code carefully enough.
I use AI coding tools every day and I am glad they exist. They handle the parts of programming I find tedious, and they occasionally suggest approaches I had not considered. But I treat every suggestion like code written by a confident junior developer — it might be right, it might look right but be wrong, and I need to check.
The people who get the most value from these tools are the ones who already know how to code well enough to catch mistakes. If you do not know what good code looks like, you cannot tell when the AI gives you bad code. That is the uncomfortable truth nobody in marketing wants to say out loud.
Whether code is written by humans or AI, vulnerabilities end up affecting real people. Use our free tools to check for scams and stay safe.