Ever since GPT-3 demos stormed Twitter, more and more devs have been getting anxious about how this could affect their jobs in the future. I think the tweet that has struck developers the most is the one by Sharif Shameem.
If anyone can create a UI and generate React code just by describing it in plain English, why would any company hire a Frontend Developer anymore?
Let's start by getting a little more context about GPT-3.
What is GPT-3?
Generative Pre-trained Transformer 3 (GPT-3), in simple words, is a language model pre-trained on a massive corpus of internet text, which lets it generate responses in a contextual way. This is why it is able to start giving useful predictions even when provided with a small input. You can check out Paul's demo for more context.
How GPT-3 will help evolve your job rather than take it
"With Products like debuild.co, should we even bother learning Frontend development?"
The fear of AI taking over all our jobs has been a topic of discussion for years, and since GPT-3, developers can see it happening to them sooner rather than later. This is the same situation as when people were of the impression that Firebase would kill backend jobs, and that products like Netlify and Vercel would kill cloud engineering jobs. On the contrary, all these services have helped grow the job market: they empower more and more startups to launch with low initial investment, and these startups in turn hire more engineers when they want to scale up.
In my opinion, one should consider GPT-3 a milestone in AI that will lead us to the next evolution of the way we work.
The Concept of Priming in GPT-3 models
The biggest reason why GPT-3 is a potential game-changer in AI tech is that anyone can use it without having to assemble a dataset before the model starts predicting as expected — a few inputs will get you exceptional results. That being said, the quality of your input is definitely a factor in the results you get. This is where the concept of priming comes in.
Priming means giving the model a series of inputs and their expected outputs, from which the model can infer the context and build a guide for itself to handle future inputs.
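To make this concrete, here is a minimal sketch of what priming looks like in practice: you lay out a few input/output pairs as plain text, append the new input, and let the model complete the pattern. The example pairs, labels, and `Input:`/`Output:` format below are hypothetical — they just illustrate the idea of a primed prompt.

```python
def build_primed_prompt(examples, new_input):
    """Turn (input, output) example pairs into a primed few-shot prompt."""
    lines = []
    for text, label in examples:
        lines.append(f"Input: {text}")
        lines.append(f"Output: {label}")
    # End with the new input and an open "Output:" for the model to complete.
    lines.append(f"Input: {new_input}")
    lines.append("Output:")
    return "\n".join(lines)

# Two priming examples are often enough for the model to pick up the pattern.
examples = [
    ("I loved this movie", "positive"),
    ("The service was terrible", "negative"),
]
prompt = build_primed_prompt(examples, "What a fantastic day")
print(prompt)
```

The resulting string is what you would send to the model as your prompt; GPT-3 then continues the text after the final `Output:`, following the pattern your examples established.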
I highly recommend looking into Carlos' pieces if you want to dive deeper into how priming works, starting with the one mentioned below.
Going by the material out there as of now, GPT-3 seems to make it extremely easy to build a super tool for a niche use case, but it still has some way to go before it is mature enough to replace an entire field of work.
You can follow Swyx's repo to stay up to date with all things GPT-3.
GPT-3 is the closest we have come to having something like Jarvis from Iron Man, and that alone is exciting.
I think the best way to handle this new progress in AI is to sit tight and stay curious about how it can change the way developers build products in the future.
Disclaimer: This blog was written without any help from GPT-3 😂
Meanwhile, I am waiting for my GPT-3 beta invite like:
Do follow me on Twitter @bhimtebhaisaab to stay tuned for all the updates.