
For the last two weeks I've had the opportunity to use Claude Sonnet with its Artifacts feature. Things are moving so fast. LLMs are more sentient than most people on Earth. Sure, they can't feel love, happiness, sadness, anger, envy, or lust, but who gives a shit? They literally do like 80% of the work; the last 20% is just me tweaking a few things, reprompting, creating files, installing software, and all the other bullshit you have to do to get software running. With the trajectory we're on, with scaling laws and more money, data, and compute being poured into LLMs, I think a lot of software engineering may be automated away very soon. But LLMs are getting commoditized, right? I mean, open source is winning? Thanks to Zuck! Ehhh, I agree, but the best model has always been proprietary, and I don't think that's gonna change. Somebody's got to pay for the training and somebody's got to pay for the inference. The economic incentives are there, so in my opinion the best model will always be proprietary.
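To make that "scaling law" hand-wave a little more concrete, here's a toy sketch of a Chinchilla-style power-law loss curve. The constants are roughly the values fitted by Hoffmann et al. (2022) and the token counts are numbers I made up for illustration, so treat it as a back-of-envelope picture of why more parameters plus more data keep pushing loss down, not as anything rigorous.

```python
# Toy Chinchilla-style scaling law: L(N, D) = E + A / N^alpha + B / D^beta
# Constants are roughly the values fitted by Hoffmann et al. (2022);
# the token counts below are made-up illustrative numbers.

def loss(n_params: float, n_tokens: float) -> float:
    E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28
    return E + A / n_params**alpha + B / n_tokens**beta

# A GPT-2-sized model on a small dataset vs. a GPT-4-sized model on a huge one.
print(loss(1.5e9, 40e9))    # ~2.4
print(loss(1.7e12, 13e12))  # ~1.8 -- more params + more data = lower loss
```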

But here's the good thing. LLMs are eventually going to get so good that even the open source models will be fantastic. Think about GPT-2 in 2019 and GPT-4 in 2023: a four-year difference and a roughly 1000x increase in parameters (from 1.5 billion to 1.7 trillion). Think about the emergent capabilities of a 170-trillion-parameter model. Multimodal. Think about all the systems that model will power. We are only beginning to see some multimodality with GPT-4o: the realness of the voice, the understanding of visual space, sub-300ms latency, all in one. I mean, that Figure robotics company is already using the model to power their humanoid thingy. Imagine all the B2B SaaS you can build with these models! In five years' time GPT-4 and Claude Sonnet will seem like a joke. A kid will be able to recreate and train GPT-4 in their bedroom by renting the equivalent of $100 of GPU compute or some ASICs. With $1000 they'll be able to train from scratch a model that is significantly better than the state of the art today. Man, I love technology! A part of me is scared, though: as more and more of humanity's infrastructure and systems depend on these fundamentally stochastic models, the risk of catastrophe goes up and up, and eventually we'll reach a point where existential risk from AI systems is non-zero.
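For what it's worth, here's the napkin math behind that GPT-2 to GPT-4 parameter jump, using only the numbers from this post (1.5B in 2019, 1.7T in 2023, 170T as the hypothetical next leap); the growth rate and the extrapolation are mine and purely illustrative.

```python
import math

# Back-of-envelope arithmetic on the numbers above: GPT-2 (2019, ~1.5B params)
# to GPT-4 (2023, reportedly ~1.7T params), then extrapolating the same rate
# forward to a hypothetical 170T-parameter model. Illustrative only.

gpt2_params, gpt4_params = 1.5e9, 1.7e12
years = 2023 - 2019

growth = gpt4_params / gpt2_params          # ~1133x over four years
annual = growth ** (1 / years)              # ~5.8x per year
print(f"total growth: {growth:,.0f}x, roughly {annual:.1f}x per year")

# How long until a 170T-parameter model if that pace somehow held?
years_to_170t = math.log(170e12 / gpt4_params, annual)
print(f"~{years_to_170t:.1f} more years at the same pace")
```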