Neural networks could help computers code themselves: Do we still need human coders?
When I was in college, we had to write out code by hand on computer science exams.
On paper. With pens.
If you learned to code anytime in the past ten years, you probably think that sounds barbaric, inefficient, and just plain stupid. And you’d be right. But there is also a serious point here: that the technologies we use for computer programming are constantly evolving, and are doing so pretty fast. I count myself lucky that my exams didn’t involve punched cards.
The next big revolution in coding practice might be closer than we think, and it involves helping computers to code themselves. Some researchers believe that, by using natural language processing and neural networks, we will be able to remove humans entirely from the coding process within a few years.
If you work as a coder, you’ll be glad to hear that they are wrong. We are going to need human coders for a long while yet. In this article, I’ll explain why.
Neural networks and coding
First, let’s look at this new generation of coding tools, and see what they can do. The idea of using neural networks, machine learning, and AI tools in programming has been around for decades, but it’s only now that the first usable, practical tools are emerging. These tools can be broken down into three types.
The first is tools that aim to automatically identify bugs. This has been one of the most successful applications of neural networks to programming and has certainly been extremely useful for some coders. Swiss-based company DeepCode has been leading the way in this type of tool, but even their offering has serious limitations, which I will come to shortly.
Secondly, there is a range of tools that aim to produce basic code by themselves, or which can autocomplete code for programmers. These tools are now being released across many popular development platforms. Facebook has made a system called Aroma that autocompletes small programs, and DeepMind has developed a neural network that can come up with more efficient versions of simple algorithms than those devised by humans.
Then there is the most exciting application of neural networks to programming: the research being done by a team from Intel, MIT, and the Georgia Institute of Technology. These researchers have developed a system called Machine Inferred Code Similarity, or MISIM, which they claim is able to extract the “meaning” of a piece of code in the same way that NLP systems can read a paragraph of human-generated text.
This MISIM system promises to be a revolutionary tool if its full potential can be realized. Because it is language-independent, the system could read code as it is being written and automatically write modules to achieve common tasks. Much of the code used to automate cloud backups, for instance, is the same across many programs, and boilerplate for compliance processes is a major time-sink for many coders; both are obvious candidates for this kind of automation.
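The details of MISIM’s architecture haven’t been fully published, but the core idea of comparing what two snippets do rather than how they are written can be illustrated with a deliberately crude sketch in Python. Here I fingerprint each snippet by its syntax-tree structure and compare the fingerprints; real systems learn far richer semantic representations, so treat this as a toy, not as MISIM itself:

```python
import ast
from collections import Counter
from math import sqrt

def structure_vector(source: str) -> Counter:
    """Fingerprint a snippet by counting its AST node types (names ignored)."""
    tree = ast.parse(source)
    return Counter(type(node).__name__ for node in ast.walk(tree))

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Standard cosine similarity between two sparse count vectors."""
    dot = sum(a[key] * b[key] for key in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Two loops that "mean" the same thing despite different names and phrasing.
snippet_a = "total = 0\nfor x in items:\n    total += x"
snippet_b = "acc = 0\nfor value in data:\n    acc = acc + value"

print(cosine_similarity(structure_vector(snippet_a), structure_vector(snippet_b)))
# Prints a value close to 1.0, flagging the snippets as structurally similar.
```

Despite the different variable names and phrasing, the two loops produce near-identical fingerprints; systems like MISIM push this idea much further, scoring similarity even across languages.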
Systems like MISIM promise to make the process of writing code far more efficient than it is currently, but they still have significant limitations. Let’s look at some.
The limitations
Coding tools based on neural networks are unlikely to replace human coders anytime soon. To see why, let’s look at the limitations inherent in the three main ways that these tools are being used.
First, ML and AI programs that are designed to catch bugs in human-created code are extremely useful, but only up to a point. At the moment—and as you will be painfully aware if you’ve used one of these programs—they tend to produce an enormous number of false positives: features that the AI thinks might be bugs, but are not. The fact that these tools err on the side of caution is, of course, great from an infosec perspective, but it is also an indication of their limited ability to understand the complexities of contemporary programming.
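To make the false-positive problem concrete, here is a contrived Python sketch of the kind of pattern that trips these checkers up. The scenario and function names are invented for illustration; the point is that the “bug” a tool would flag is a deliberate design decision:

```python
def upload(payload):
    """Stand-in for a real network call; assume it can fail at any time."""
    raise ConnectionError("network unreachable")

def send_telemetry(payload):
    # Many automated bug finders flag the broad `except` below as a probable bug.
    # Here it is deliberate: telemetry must never be able to crash the application.
    try:
        upload(payload)
    except Exception:
        pass  # intentionally swallowed; a classic false positive for checkers

send_telemetry({"event": "startup"})  # completes quietly, exactly as designed
```

Distinguishing a sloppy swallowed exception from a deliberate one requires understanding the programmer’s intent, which is exactly what current tools lack.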
Second, tools like Aroma and OpenAI’s GPT-3 language model can churn out simple pieces of code, even from natural language descriptions, but only under the direction of humans. They perform extremely well when given a limited, controlled problem to solve, but are (so far) incapable of looking at a design brief and working out the best approach to take.
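For a sense of what “a limited, controlled problem” means in practice, consider a prompt like “write a function that returns the n-th Fibonacci number.” The snippet below is the sort of output these systems reliably produce; it is my own illustrative example, not an actual transcript from Aroma or GPT-3:

```python
# Prompt: "Write a function that returns the n-th Fibonacci number."
# (Illustrative output written by hand, not a real model transcript.)

def fibonacci(n: int) -> int:
    """Return the n-th Fibonacci number, where fibonacci(0) == 0."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fibonacci(10))  # 55
```

What these tools cannot yet do is step back from the prompt and ask whether a Fibonacci function is what the wider design actually needs.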
The third type of tool I’ve mentioned above—MISIM and its related systems—is undoubtedly the most innovative use of neural networks in coding, and holds the most promise to make a real difference to the way we work. However, it should be noted that this system is still in the early stages of development, and is a long way from even a public beta. I’ll withhold judgment on its limitations, therefore, until I get my hands on a version.
Finally, though, it’s also worth pointing out that there is a more fundamental limitation implicit within all of these tools: creativity.
In other words, while these tools are great at completing code given a prompt, they are not going to win any awards for either code architecture or visual design. Even the best web design software has tried and failed to implement AI-driven aesthetic tools, and there is a good reason for that: humans know what looks good to other humans.
Using this aesthetic, creative capacity has been a major focus of coding paradigms over the past few decades. It is one of the reasons, for instance, why many of the best front end development frameworks around today are so visually-oriented. Humans are great at spotting patterns in seemingly unrelated data, and AIs are great at performing repetitive, time-consuming tasks.
Collaboration and creativity
This inability to create new solutions is why, ultimately, neural networks are not going to replace humans. Instead, we need to identify which tasks are best done by AIs, and which are best done by humans, then build a collaborative approach to coding that draws on the strengths of both.
There are a couple of clear ways forward when it comes to doing this. One is to use AI coding tools to train human developers in a much more flexible, efficient, and targeted way than is currently possible in our education system. Automated recommendation systems, for instance, could be used to teach programming security to beginners by providing detailed guidance on securing real-life systems as they are being coded.
Secondly, AIs are showing great promise when it comes to tracking the activities of human coders and making their work more efficient. A good example is the automated invoicing systems that many companies now use, in which an ML system tracks the activities of human employees. Indeed, providing each human coder with an AI assistant that learns how they work, and then makes recommendations based on their previous solutions, would be of great benefit to the majority of developers.
Third, systems like MISIM, even if they are never able to fully automate the writing of code, may have a slightly unexpected benefit: they can be used to rewrite legacy systems. Because tools like MISIM are language-independent, they could potentially teach themselves to understand ancient (and now pretty obscure) languages like COBOL and then rewrite those programs in a modern, maintainable language like Python. Who still uses COBOL, you ask? Well, the US Government, for one.
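To picture what such a translation might produce, here is a hand-written Python rendering of a classic COBOL batch pattern: reading fixed-width records and accumulating a total. The record layout and field names are invented for illustration, and this is my sketch of the target output, not anything MISIM has actually generated:

```python
# A COBOL picture clause defines fixed-width fields; the layout below stands in
# for one (both the layout and the sample data are invented for this example).
RECORD_LAYOUT = {"account": (0, 10), "amount_cents": (10, 19)}

def parse_record(line: str) -> dict:
    """Slice a fixed-width record the way a COBOL program would."""
    return {
        "account": line[slice(*RECORD_LAYOUT["account"])].strip(),
        "amount_cents": int(line[slice(*RECORD_LAYOUT["amount_cents"])]),
    }

def total_amount(lines: list) -> int:
    """The whole batch job: read every record and accumulate the total."""
    return sum(parse_record(line)["amount_cents"] for line in lines)

batch = ["ACCT000001000000150", "ACCT000002000002500"]
print(total_amount(batch))  # 2650
```

The hard part is not emitting Python like this, but proving it behaves identically to the forty-year-old original, which is exactly where a similarity system like MISIM could earn its keep.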
None of these approaches seeks to replace human coders with machine analogs. In fact, they are all informed by a different paradigm: that when it comes to coding, humans and machines can work together as colleagues rather than as rivals.
The bottom line
That might sound like a vision of utopia, but take a longer view and you’ll see that it is eminently feasible. In many ways, the advent of AI and ML tools in coding mirrors the earlier development of graphical coding tools, and even of programming languages themselves. Neither your front-end development tools nor your Python scripts interact with your hardware on a fundamental level: everything, lest we forget, needs to be translated into binary machine code.
Seeing coding as a process of “translation” might be out of fashion now, but it was an approach that certainly informed my own training. Two decades ago, we were explicitly taught the way in which our code would be converted into assembly language; nowadays, this approach would seem to be a total waste of time.
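That translation layer is still there, and you can still peek at it. In Python, for instance, the standard-library dis module shows the bytecode your source is compiled to before the interpreter executes it:

```python
import dis

def add(x, y):
    return x + y

# Print the bytecode the interpreter actually runs for `add`. The exact
# opcodes vary between Python versions, but you will see instructions for
# loading the arguments, a binary-add operation, and a return.
dis.dis(add)
```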
Ultimately, this is what the development of AI coding tools seeks to extend. The end goal here is that systems like MISIM will be able to take a description of a computer program—or even a description of a problem to be solved—and produce a program on their own. But it’s important to remember that the description will still be given by a human.
Tags: ai, artificial intelligence, coders, neural networks
44 Comments
Great article, and I agree that, while unable to completely replace humans, it will be a great aid. I’m particularly interested in seeing how the debugging side of things goes, and whether there is any chance of getting it involved in automated test generation and test code coverage. On a side note, being aware of the actual translation to machine code is definitely not a waste of time, and is sometimes essential when the project starts to err on the lower side of things (IoT and embedded programming in general, for example). And, for what it’s worth, I graduated three years ago after six years of university, and my first exams were still on paper, luckily 🙂
As someone working in this space, I wanted to add a couple of additional points. If you are a software developer or looking to go into software, don’t worry… this stuff is still mostly hype and will probably never do what people are claiming it can do. Your career is not in jeopardy; in fact, it is getting better every day because of AI. Let me explain.
Unless there are major advances in AI that are probably at least 50 years out from today, we will not achieve a real AI system. We don’t even know how to. The assumption is that the machine learning we are doing today will lead to that. I have my doubts. But certainly, new innovations will open new frontiers we can’t even imagine today, so I am willing to leave that option open.
The first thing to keep in mind is that what we are calling AI today is not AI. I call it AAI (artificial artificial-intelligence). The AIs we have today are a VERY small subset of AI and really amount to advanced statistical analysis based on correlations in data. They call it AI because they think this will lead to real AI systems. But the key point here is in the training and learning processes. Currently, to teach an AI system anything, you must spend about 90% of your time working on data and training. This is called machine learning, and the result is an algorithm developed by the AI, which we then use. To my mind, there is nothing artificial or intelligent here. We are simply using statistics to automate the development of algorithms.
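To make that last point concrete, here is a toy sketch (the data and the method are invented for illustration): a few labeled measurements go in, some arithmetic on them comes out, and the “learned” artifact is nothing more than an ordinary threshold rule.

```python
# Toy "training set": (measurement, class label). Invented for illustration.
samples = [(1.2, 0), (2.1, 0), (2.9, 0), (6.8, 1), (7.5, 1), (8.1, 1)]

# "Training" is plain statistics: put the threshold midway between class means.
mean_0 = sum(x for x, y in samples if y == 0) / sum(1 for _, y in samples if y == 0)
mean_1 = sum(x for x, y in samples if y == 1) / sum(1 for _, y in samples if y == 1)
threshold = (mean_0 + mean_1) / 2

# The "learned algorithm" is just this rule, developed by the statistics above.
def classify(x):
    return 1 if x >= threshold else 0

print(round(threshold, 2), classify(3.0), classify(7.0))  # 4.77 0 1
```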
The advances that really come into play for us today are in the development of better chips to do AI training. We are hitting the limits of Moore’s Law, and we deal with that by moving from software to hardware. To do this, we take software and build a hardware version that can run 3000% faster. In the future, computers are going to contain these hardware-based chips; your phone already does. Google “FPGAs and AI” and you will find a lot of material from AWS, Intel, Azure, and others.
I personally think that using AI to generate algorithms, then putting those algorithms on FPGAs, has a great deal of potential to change how we use computers. If we continue developing FPGA technologies, in a few more decades it will be pretty amazing. I think we will start seeing an AI generate an algorithm in seconds (we already are), dump it to an FPGA to convert it from software to hardware, and execute it at speeds we can only imagine today. But still, it is just advanced statistics. Will that lead to a real AI? I doubt it. But if it does, this is the start.
Well, universities still have students write code with pen and paper in exams, so I think programmers will be around for a lot longer!
I hope we get code-writing AIs though, writing code is perhaps the least interesting and most tedious part of software development.
I agree with much of what you say here. I started in college using punch cards (let’s just say, a little while ago) and progressive iterations of technology have only served to move the human input higher up the development food chain. That is to say, supported by compilers, then GUI tools, then open source code libraries and much more. Humans tend to build tools to help themselves, not tools that will go off and grow their own objectives a la Skynet. I can see smart algorithms being part of this but probs not until a long time after I retire (worse luck, it sounds great). The future is coming to find us…
But in this time frame, humans will have more pressing problems to solve, like a sea-level rise of at least 2.5 m by 2100. So let’s hope the algorithms can be usefully harnessed.
There are already systems trying to translate code into understandable English, mainly for web programming. I don’t think AI coders are coming anytime soon, but people are developing lazier and lazier ways to code. That has its advantages and, of course, its disadvantages when we start to talk about code optimization. So I don’t expect AI coders anytime soon.
We still have to write our undergrad Computer Science exams by hand btw.
TL;DR:
“The end goal here is that systems like MISIM will be able to take a description of a computer program—or even a description of a problem to be solved—and produce a program on their own. But it’s important to remember that the description will still be given by a human. ”
So basically, the highest of high-level programming languages…? Why read something _like_ English (Python) when you can read English?? So basically a plain-English description/specification is translated into the code that people have become too lazy to write, which is translated into assembly, which is translated into machine code?
…Fret not, boys & girls. This is abstracting the “coding” away from “coding”. But there will probably still be some kind of syntax to be wary of (if not for the sake of it working, then for the sake of optimization), so it’s technically still “coding”.
If you are a programmer, you are already doing this. You are writing an English-like description/specification. We already have rapid development tools that implement design patterns and allow much of the tedious work to be automated. None of that requires ML or AI. As for replacing software engineers with an AI, we just don’t have any idea how to do it. Automating programming tasks with AI, sure, that is coming and is already here. But it’s not going to replace anyone.
I work in this space every day and I can tell you, there’s no AI solution that can actually write code. There are a few interesting research projects that can write a few lines of code, and IBM supposedly can translate some simple COBOL over to Java using AI, although a cross-language compiler would work much better. But there’s nothing on the level of replacing programmers. There’s nothing like what you read hyped up in various articles, the ones claiming AI has crossed some line and can do this or that better than a human. That sort of thing is mostly bogus.
Here’s a great example. Google claims that Google Translate is better than human translation. Really? So let’s test it. Write something in English, convert it to another language, then back to English, and note what changed. Here’s an example: “He came over to the nurse. He looked very nice.” Translate that to Greek and back to English and you get “He came to the nurse. It looked very nice.” What looked nice? The guy, the nurse, what? Why did it do this? Because it doesn’t understand the text, nor does it understand either of the languages. It just ran through some statistics to translate it. It was not able to connect the gender of the subject in the first sentence to the “it” in the second sentence. Think that is better than a human? Not a chance.
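If you want to run the same experiment yourself, the harness is trivial. Here is a sketch in which translate() is a hypothetical placeholder for whatever machine-translation API you have access to, not a real library call:

```python
def translate(text, source_lang, target_lang):
    """Hypothetical placeholder: wire this up to any MT service you can call."""
    raise NotImplementedError("plug in a real machine-translation API here")

def round_trip(text, via):
    """English -> `via` -> English; differences expose what the model lost."""
    there = translate(text, "en", via)
    return translate(there, via, "en")

# Per the example above, expect something like:
#   round_trip("He came over to the nurse. He looked very nice.", "el")
#   -> "He came to the nurse. It looked very nice."
```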
Hmm interesting article. I kind of agree with the message but not really with your arguments.
e.g. “Humans are great at spotting patterns in seemingly unrelated data, and AIs are great at performing repetitive, time-consuming tasks.”
Yes, humans are great at spotting patterns, but so is AI. There are tasks today in which AI surpasses human skill, and it often does this by analyzing patterns in statistical data.
also: the argument that AI isn’t creative
There are AI algorithms which can generate “paintings” from a huge library of pictures, and those paintings actually look quite creative. Even we humans don’t know what creativity really means and where our creativity comes from. So how can we say that AI can’t be creative? Maybe human creativity also stems from an endless collection of examples, pictures, and memories in our heads, mixed together at random and pushed through some filter to produce good output.
Finally, I think it’s also kind of a fallacy to compare humans and AI, because I think over time humans will always adapt and avoid competing with technology (e.g. AI) to instead make good use of it. E.g. nobody is competing with a calculator anymore, everybody’s making good use of it. Because instead of wasting our time competing with a tool (like AI) we should invest our brain power to do things neither we nor AI would be able to do individually.
In any case I do agree with you that AI won’t replace coders any time soon, because “these researchers” are far too optimistic regarding future research progress in my opinion.
I’ve heard this kind of talk for at least 30 years now, and we have not come one single step closer, as far as I can see.
We also had to write code during the exam on a piece of paper with a pen and we all thought that was hugely annoying, since we all used laptops and thin clients during classes.
And PS: not even three decades ago (time flies, darn…), I was taught how code is translated into low-level language. Yes, we were shown the underlying principles, we did write in assembly for a few sessions, and I can imagine this is still being done.
C is still around; C++ is; Java is. Object-oriented languages still do basically the same thing.
Yes, we may have added another abstraction layer here or there, but that is nothing that comes even close to AI-controlled programming. It is just that: a human-programmed abstraction layer here or there, to simplify certain tasks.
The same tasks, that we had 30 years ago…
Compiler writers have done a phenomenal job of generating efficient code and are probably at the pinnacle of what can be done for the average coder. Could AI systems turn an average coder into a more efficient one? Probably, but wouldn’t that take the fun, or the money, out of coding? There is a creative aspect to writing code: organizing modules, defining data structures, meeting performance requirements. Currently, AI has to be taught these somehow, and how tedious would that teaching become? It seems much easier to create libraries. I would rather have AI suggest a library call, and that is already done by search criteria: google it.
AI cannot come up with a new way to do something, because the new way has to be thought of by whoever teaches the AI system; I guess it depends on the granularity of the system. How many ways are there to write mov ax, [0x0100]? I am being a little smart-ass here. Humans don’t have to be taught every single thing; humans have IQ. A computer has zero IQ. The computer can choose the best answer, but that too is taught to the AI.
I think AI could obviously program for specialized systems; COBOL could write reports pretty much on its own, and I don’t see much difference.
I admit I haven’t researched AI much, but I do know how typical computers work, and I am basing my comments on that knowledge. Forgive me if I am being really AI-naïve.
I think that a hybrid approach using ML and human software designers may be available in the reasonably near future. I agree that machines are short on creativity but I do think that they may be capable of handling a lot of the low-level coding tasks if we improve our design tools and make them capable of communicating well with computers. BTW I actually had to keypunch my own programs in the old days and I just missed out on the joys of communicating via toggle switches or teletypes.
In order for machines to completely take over, customers will need to specify precisely what they need in their requirements.
Trust me, we’ll be safe. 😉
I’ve done a lot of business process automation in the last 20 years, and few customers even understand what their process is, let alone how to describe all the requirements precisely. They know they enter this information and push that button, but they can’t tell you why or what happens after that. The people who built the software didn’t document it and left the company years ago. The original button pushers trained successive button pushers, who trained their replacements, in an endless cycle with information being lost each time, like a game of telephone. I can see where MISIM could help greatly in this area.
Regarding new systems, I don’t see it. Few customers understand what they want with enough detail to describe all the requirements precisely. They leave those details to their underlings who often inadvertently tell untruths and sometimes outright lie. Until AI learns how to read between the lines and detect BS, it’ll be useless for all but the simplest new process development.
This is why it will never work
https://youtu.be/BKorP55Aqvg
Never seen that comedy sketch, was bloody funny, thanks for posting.
Oh, and on the topic: 4GLs have been talked about since I was at college back in 1991, and 29 years on they are still talked about in one guise or another. How about we stop trying to get rid of humans, incorporate each other’s skills into our society, and live a happy, balanced life with each other while we are on this rock for our allotted time.
Instead of trying to replace programmers who dedicated quite a few years of their lives to becoming proficient, how about using AI for a serious problem like tackling cybercrime? If most of the effort toward AI development and implementation were concentrated on solving cybercrime, the online world would become a much better place. I understand that AI can be used in more areas than just cybersecurity, but I like to ask myself on a regular basis, “Am I focusing my efforts on the most important things?” So perhaps, instead of trying to eliminate programmers’ livelihoods, we could channel that energy into the question: “How can AI help solve the greatest security issues humans face?”
I suspect the author and I are in a similar age range. However…
For nearly as long as I can remember, people have been saying that computers are going to start programming themselves and asking if programmers will be needed in the future.
This has been the claim for *decades*. Each time, it’s surely true this time! This time, the tech is real! This time it’s true!
So far, it has always been hogwash. Yes, it has become easier. No, computers don’t generally program themselves. I’ll believe it when I see it.
I think machines will never replace humans, because machines don’t have sense or creativity, just some AI algorithms.
Regards,
Tokeqq
I am still highly skeptical about AI performance. What I have seen so far is still very close to “interpolation” techniques that mimic the samples shown to them and are largely unable to generalize from them. Spectacular results are obtained by means of huge databases, in applications where approximations are well accepted (e.g. automatic translation).
Programming is a mathematical discipline that requires a true understanding of the code, which is a formal thing and cannot be generated by randomly pasting available chunks. Current AI systems are completely incapable of any understanding.
Will we get rid of programmers in 5 years? Most certainly not. In 10 years? Probably not. 15 years? I wouldn’t be confident answering that question. All the progress we have made in computer science is exponential, building on more and more powerful tools. Some of the things those algorithms do right now would have been deemed impossible not so long ago, and now they have a foot in the door.
Combine that with a growing number of low-code and no-code platforms and writing code will – not for all but for a number of use cases – become less and less something that humans do.
But while this is going to speed up the development process, it doesn’t mean that we don’t need software developers anymore. After all, writing code is the least important step of the development process: finding out what the requirements are, identifying possible unwanted side-effects and ramifications are where the real work is and this isn’t going to be automated any time soon.
This reminds me of the early 1990s when I was a Network Administrator. Plug-and-Play for hardware had just been announced. All of my colleagues (software engineers) told me that I would soon be out of a job. Fast forward to 2020 and there are still plenty of Network Administrators out there.
In 1978 I was told not to get a job in computing; the need for computer programmers would die in 2 or 3 years. Pundits are crap at time frames, and at reality in general. Go watch Forbidden Planet again, then tell me that AI will match human intent.
Neural networks could help people diagnose themselves. Will we still need doctors?
“it’s important to remember that the description will still be given by a human”
And those humans will have to learn how to word the description so as to obtain a good result. However it is done, there will still be some form of a ‘write, run, evaluate’ loop. New tools, same mindset.
> Humans are great at spotting patterns in seemingly unrelated data, and AIs are great at performing repetitive, time-consuming tasks.
Actually, computers in general are great at performing repetitive, time-consuming tasks, and AI in particular is beyond great at spotting patterns in seemingly unrelated data. Neural networks are all about finding patterns.
Humans are good at human interaction and at creating innovative stuff.
Why would we want AIs to code? Can’t they just do the task without coding it? Either way, it would still be the end of coding.
That’s really great.
I love this article.
You have done a great job, Nahla Davies
From MIT in 1971, here is a 4 line story showing how hard AI really is:
1. John goes into a restaurant
2. He orders lobster
3. He pays his bill and leaves
Now, what did John eat?
No AI is remotely close yet to answering this, but it is easily answered by a four-year-old child.
😂😂 lol…I think we could simply classify that.
Restaurant is a place where people go to get food
Lobster is a category under food
John is the name of a person
Eat has to do with food.
What’s the only food in the sentence that John can eat?
The AI has two possible responses: “John never ate anything” or “John ate a lobster.” I think it would say John ate a lobster with 0.80 confidence, and nothing with 0.65 confidence. If you asked what John bought, then it could clearly answer that at 0.95 confidence.
Nope. NLP systems are terrible at working above the level of the sentence. You can test this with Google Translate.
“John goes to a restaurant and orders lobster. He looked nice.” Translate that to Greek and back to English and you will get “John goes to a restaurant and orders lobster. It looked nice.” Why? Because it cannot connect the “he” in the second sentence to “John” in the first. This is because of how NLP works and the fact that it doesn’t actually understand the text. It is just using statistics based on patterns it has been trained on. Nothing more. But a two-year-old child actually understands the language, connects “John” to “he”, and would know “it” is wrong. AI cannot do that.
The whole point of every kind of automation is to replace a common and repetitive task with machinery that can do the same thing without human fatigue, or faster/lower cost etc. The dirty secret often swept under the rug is that the machine didn’t: a. realize that something it was doing was inefficient, then b. design a way to mechanize that, then c. implement, test and certify and document how that works. Even worse, long after the automation is put in place, other criteria change and now the automation doesn’t do what is needed, and therefore must be redesigned.
These seemingly simple things are staggeringly difficult design problems in and of themselves, and yet this is all part of human design.
The very act of designing anything involves complex aesthetic judgements, tradeoffs, elegance etc.
While all of this is interesting, it’s a waste of time because in the end even if you could achieve this sort of self-programming what you would end up with would likely be unsupportable and entropic in the sense that it would become ever more complex and tangled in its own inefficiency. No way out of that rabbit hole.
We should be spending more time thinking and talking about what makes a great program great, and what barriers are there that hamper writability, readability, maintenance, and extensibility in the future. Let alone little details like efficiency, use of limited resources (no we don’t have infinite RAM and CPU cycles) etc. Instead of trying (at all) to replace humans, we should be focused on how to make humans more powerful and expressive.
Why aren’t we talking about the incredibly stupid ways that people are wasting resources with bloated code, inefficiency, different OS platforms, undocumented mechanisms? The list goes on and on. We are awash in exabytes of data. Even with quantum computers we won’t be able to do any kind of brute-force searching in that space. Why aren’t we talking about how to organize and index all of that? That, my friends, is what 99% of future computing will be about.
A real artificial intelligence would ask “how can I make myself stronger and better?”
Maybe we should start by asking ourselves that same question.
This is already being done and is one of the key uses of AI in software. There are examples of applications going to production at large Fortune 100 companies without a single line being tested by a human. Of course the code was tested, but the AI did the testing.
A very interesting read. I’m a huge Terminator fan and I can’t help but imagine things when this topic pops up.
Creativity is not far from being synthesized. Essentially, creativity is the ability to fabricate data & information and see if that info fits the gaps. How elegant our solution is depends on how invested we are in the subject, how learned we are. That is how we refine, parameterize and direct our creative processes.
For AI to completely outdo humans at coding, it must first match the human experience, human upbringing, and nurture; it too must possess the pool of wisdom and knowledge that allows us to refine our creative processes so efficiently. To create beautiful music you need much more than just musical instruments and reiteration all day. My opinion, therefore, is that AI will eventually take over, but only when all AI systems become integrated into something similar to the human mind, not just the brain, which is akin to plain hardware.
In the meanwhile, could machines possibly do a better job at coding if they chose a purely mathematical approach?
“Ultimately, this is what the development of AI coding tools seeks to extend. The end goal here is that systems like MISIM will be able to take a description of a computer program—or even a description of a problem to be solved—and produce a program on their own. But it’s important to remember that the description will still be given by a human. ”
What is the description of a computer program if not computer code? It may be very high-level computer code, but it is computer code nonetheless. (This is basically paraphrasing a common response to this kind of question.)
So, that human will probably be a “programmer” for anything that isn’t trivial or common, though what is considered trivial/common will continue to expand in scope, as a programmer is still needed to tell the AI exactly what is needed in such a way that the AI can build it.
At the highest level of programming (pseudocode), a programmer is trying to describe the problem to be solved as part of the logic that solves it. The better the match between the solving logic and the description in code, the happier the user/customer is. How it is solved is an implementation detail, and often an important one, as going about it the wrong way can yield problematic results. Tools such as AI will only enable much harder problems to be solved, unless of course we get strong AI rather than the current weak AI; but if we have strong AI, we have much bigger potential concerns as a species than whether programmers will be out of work.
In the end, programmers will still exist, even if it’s just to create a design requirement the computer can understand (code), regardless of how that requirement is processed.
The ultimate wet dream for every executive at every company is to replace all employees with computers, software, and automated order fulfillment machinery so that all the executives have to do is push the “Start” button, sit back, and collect their bonuses.
Shortly after that the economy collapses. This is called “Bad Luck”.
I think in the near future these autocompletion programs will become redundant, since machines will be writing *all* the code for us. Machines making machines. And humans will keep trying their best to improve autocompletion programs, even once they are perfect or near-perfect.
Alan Turing.
An algorithm is, in general, not able to validate another algorithm.
Who still uses COBOL?
Well, every single bank in the world.
We must be careful about where a given tech will end up. Most everybody got the future size of computers wrong because of technological advances. No one thought that computers would get small enough to carry, much less small enough to carry in your pocket. So the speculation about the role AI will play in future computing advances may miss the mark; we don’t know what new developments elsewhere may affect the development of AI.
I’ve always thought the same. AI will never reach a level of functionality that would let it replace humans completely, or at least that’s how I see it. And even if it reaches such a level, I wonder what kind of problems it would cause for cybersecurity? Maybe none, but I’m not sure. What do you think?
Great article, I liked it.
No.
Generation of boilerplate? Sure. But an AI is not capable of completely describing and optimising an algorithm since there is no generalised loss function for algorithms. ML isn’t even capable of optimising its own datasets yet.
Nor is it capable of cleaning its data or creating it.