34 Comments

The biggest limitation on its usefulness for code right now is the context limit. Code is a holistic thing, at least when you're writing anything more than a trivial application, and it's not easy to context-switch either. One thing I think we may be able to do is summarize whole systems, and fit different summaries of the systems (or sub-systems) into stored contexts that can be recalled from an offline store. That could make it possible for a 'main idea' agent to keep these summaries loaded/unloaded as needed, while a 'worker' agent uses its own context to generate/edit/evaluate a particular file, or part of a file, that does fit in context. It'll be quite a while, I'm guessing, before all of this is transparent enough for non-programmers to construct full systems, but I do think the day is rapidly approaching when nobody will need to remember 'syntax' anymore to get things done.
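To make that concrete, here is a minimal sketch of the idea in Python. Everything here (the store, the character budget, the subsystem labels) is hypothetical, just to illustrate the shape:

```python
# Hypothetical sketch: a 'main idea' agent swaps subsystem summaries in and
# out of a limited context window for a 'worker' agent.

class SummaryStore:
    """Offline store mapping subsystem names to natural-language summaries."""

    def __init__(self):
        self._summaries = {}

    def save(self, subsystem, summary):
        self._summaries[subsystem] = summary

    def load(self, subsystem):
        return self._summaries.get(subsystem, "")

def build_worker_context(store, subsystems, task, budget_chars=8000):
    """Pack only the summaries relevant to this task into the worker's context."""
    parts, used = [], 0
    for name in subsystems:
        summary = store.load(name)
        if used + len(summary) > budget_chars:
            break  # window is full; the remaining summaries stay offline
        parts.append(f"## {name}\n{summary}")
        used += len(summary)
    parts.append(f"## Task\n{task}")
    return "\n\n".join(parts)
```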

author

One thing that worries me about that future, when most "programmers" won't look at code, is reliability. I mean, we trust compiled code and never read it, because we can formally prove that the compiled code is semantically equivalent to the code we wrote. But, so far at least, it is impossible to guarantee that code generated by Copilot actually does what you ask it to do. And I don't know if that's solvable, given that you'd be asking in natural language, which by definition has no formal semantics. At some point we could converge on using some subset of natural language that is formal enough, but at that point, wouldn't that just be another programming language?

Apr 21, 2023 · Liked by Alejandro Piad Morffis

Another issue that comes to mind is the efficiency of the produced code. How do you specify which data structures to use, etc.? More generally, programmers will have only vague notions of how the code is executed. We moved from assembly to C to Java; each jump costs some efficiency, but not so much that you lose the feel for what is being manipulated. Already in Python some things are unclear (operations like picking an element from a list appear atomic, etc.). With LLMs it looks like a giant jump towards pure semantic description, with no idea of how it is going to be effectively implemented.
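To illustrate that point (my example, not the commenter's): two Python expressions that read as equally atomic can differ enormously in cost, because the data structure underneath matters:

```python
import timeit

data_list = list(range(100_000))
data_set = set(data_list)

# Both lines look like a single atomic operation, but membership in a list
# is an O(n) linear scan while membership in a set is an O(1) hash lookup.
print(timeit.timeit(lambda: 99_999 in data_list, number=100))  # slow
print(timeit.timeit(lambda: 99_999 in data_set, number=100))   # fast
```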


When we give instructions to each other, 'do that thing', we mean 'given the entire state of the world, and that you are who you are and I am who I am, and we are having this conversation in a particular context for a particular reason, I want you to do that thing, which implies a lot of other unspoken stuff we have previously talked about and/or industry-standard primitives'. In other words, it is like saying 'the whole world, and do that thing'. Once we get better at referencing the rest of the world and/or standards, the implications of asking to 'do that thing' in context will depend much less upon syntax. But we will still, in theory, need to be able to look back through and see what assumptions it did make.

author

Perhaps we have to forgo the idea of formally provable correctness and treat AI-generated code the same way we treat human code: ask it for a good PR with concise explanations and have another human review it.

author

If P != NP, then validating generated code is fundamentally easier than generating it, so a human validator with dozens of AI generators might scale.
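One sketch of what that scaling pattern could look like in practice; here `generate_candidate` and `run_tests` are hypothetical stand-ins for an AI code generator and an automated test harness:

```python
# Hypothetical sketch: many AI generators, one cheap mechanical filter, and a
# single human validator who only reviews the candidates that pass the tests.

def triage(task, generate_candidate, run_tests, n_generators=24):
    """Collect candidate solutions, keeping only those the test suite accepts."""
    survivors = []
    for seed in range(n_generators):
        code = generate_candidate(task, seed)  # each generator works independently
        if run_tests(code):                    # cheap automated validation
            survivors.append(code)
    return survivors  # the scarce resource (human attention) is spent only here
```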

Apr 21, 2023 · edited Apr 21, 2023 · Liked by Alejandro Piad Morffis

Perhaps the requirement to have 'the whole world' in common between the AI and the other agent (the user) is one of the reasons why scaling up appears to work so much better. It could be just a perceived effect. De facto alignment.

Apr 21, 2023 · Liked by Alejandro Piad Morffis

Thanks, Alejandro!

I think (or hope, more precisely) that coding will be a part of the apparatus of thinking that kids get exposed to very early, and perfect later as students in high school and college.

In a way, I expect coding to be what writing is today. I teach critical thinking and I plan to have students write some basic code to exercise argument-making.

author

I believe coding should be a universal skill as well. Do you think the current move towards AI-generated code will make it more likely, or less likely, that kids will learn to code in school? I mean, learn to code in 3rd-generation languages like Python, not in super-abstract, natural-language-like new stuff.

Apr 21, 2023 · Liked by Alejandro Piad Morffis

It depends on what ‘coding’ will mean. Yeah, AI can help us write small (or medium) pieces of code, but what I hope for is that kids and students in non-CS majors will learn the basics of algorithms, because that is the part most relevant for critical thinking.

For example, I am using a simple explanation of ChatGPT to help students understand probabilities and prediction as part of a broader system of inductive reasoning.
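For instance, a classroom-sized sketch of next-word prediction (not how ChatGPT actually works, just the underlying idea of prediction-as-probability) fits in a dozen lines of Python:

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# The whole 'model' is a table counting which word follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Estimate the probability of each possible next word."""
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(predict("the"))  # {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
```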

author

So, not so much being proficient with a concrete programming language, but developing algorithmic thinking? Yeah, I agree that's even more important, even if you end up never writing any code at all.

Apr 21, 2023 · Liked by Alejandro Piad Morffis

Yeah, coding would be instrumental for that.


My day job is teaching coding to kids, and what's clear is that schools are not in a position to teach coding. They can only teach very basic concepts and have students write very simple programs, which stops well short of what's needed for real programming.

Whether the new breed of coding we may see in the future will change this is another question, and I don't have an answer for it yet. It's likely that the critical thinking and structuring of a program will still need human skills, in which case the lack of programming expertise in schools will still be an issue.

author

Interesting take. Do you think something like an "AI coding tutor" could significantly change this landscape? We cannot hope to train all elementary school teachers as programming teachers. Can we get maybe halfway there with AI?


I think "AI tutors" should have a role in teaching all subjects, not just coding. No teacher can have as broad a knowledge on any subject as an AI tutor. Imagine a student who's interested in planes and is studying about WWII in history, say, and gets engaged in the details of the planes used and how they changed as the war progressed, while his classmate is more interested in how art was affected during the war. Teachers are unlikely to have the depth of knowledge to guide and keep both engaged. But I digress, here…

author

We have a lot to solve in terms of reliability of LLMs, but yeah, I totally agree with you!


Khan Academy is trying this now. I wonder how they are handling the 'correctness' issue.

author

Maybe it doesn't matter that much for non-critical systems. But you wouldn't commit AI-generated code to land an airplane without serious review.

May 4, 2023 · Liked by Alejandro Piad Morffis

I think we'll see a new paradigm open up, one whose beginnings we're already seeing with "prompt engineering", which I see as the early days of a new assembly language for AI systems.

Since programming languages in the general sense can be seen as abstractions over hardware instructions, I suspect we'll come to see abstractions within the AI domain which translate natural language input into domain-specific prompts.

In essence, I think what we'll see is an emerging market for AI tools to comprehend your input and convert it to "model friendly prompts", rather than users trying to engineer prompts directly themselves.

The primary problem with AI will remain (for a long time, imo) comprehension: understanding what the user is actually trying to do. Tools which take natural language input, analyse the target models/systems, and generate domain-specific prompts for those models: that's an area where a lot of effort will be spent, imo. Basically a compiler for human language, where the output targets some subset of models/systems.
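A toy version of that "compiler" layer might look something like this; the template names and the `compile_prompt` function are hypothetical, just to show the shape of the idea:

```python
# Hypothetical sketch: a thin 'compiler' that turns free-form user intent
# into a structured, model-specific prompt.

TEMPLATES = {
    "sql-model": (
        "You are a SQL assistant for the schema: {schema}\n"
        "Translate this request into a single SELECT statement:\n{intent}"
    ),
    "code-model": (
        "Write a well-documented Python function that does the following:\n"
        "{intent}\nInclude type hints and a short docstring."
    ),
}

def compile_prompt(intent, target_model, **context):
    """Map raw natural-language intent onto a domain-specific prompt."""
    return TEMPLATES[target_model].format(intent=intent, **context)

print(compile_prompt("monthly revenue per region", "sql-model",
                     schema="sales(region, amount, date)"))
```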

author

I sympathize with that vision, though I suspect for a long time we will still need direct access to the generated code. Contrary to compilers, where we go from formal (high-level) semantics to equivalent formal (low-level) semantics, in natural language we have no formal semantics, so I'm not sure we can ever (or at least any time soon) trust that the generated code will do what we asked unless we review it.

May 4, 2023 · Liked by Alejandro Piad Morffis

The correctness of the generated output is one thing I'm not sure we could ever realistically formally verify, but I suspect that wouldn't necessarily stop such a toolchain from being widely adopted, if it could be demonstrated to be "good enough" in a wide enough variety of cases. The "correctness" of any AI system will always be debatable, in my opinion. I'm not sure it's reasonable to assume we will ever be able to implement an AI system that is guaranteed to correctly understand natural language: humans have a hard enough time understanding each other, so the idea that we can build a machine that always understands us is probably a pipe dream! Formal semantics of natural language is certainly not something I'd profess any expertise in, but I do believe there will be enough active interest and research that significant progress will be made in this field in the not-too-distant future, even if no universal solution is on the horizon any time soon.

author

Yeah, I totally agree with you. There's a sense in which, today, some of us already program via natural language: when we manage a large project, we basically trust that the other programmers understood us and coded what we wanted. Yes, we can look at a PR, but we don't formally verify that Bob's code is semantically equivalent to what we had in mind. We rely on imperfect unit tests and some intuition. I guess AI coders will replace some of the Bobs of the future.
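To make "imperfect unit tests" concrete (my example): a test suite can pass while the implementation is still wrong off the tested path:

```python
def median(xs):
    """Buggy on purpose: forgets to sort the input first."""
    return xs[len(xs) // 2]

# Both tests happen to use already-sorted inputs, so the suite passes,
# even though median([3, 1, 2]) returns 1 instead of the correct 2.
assert median([1, 2, 3]) == 2
assert median([1, 2, 3, 4, 5]) == 3
```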

Apr 22, 2023 · Liked by Alejandro Piad Morffis

If by coding you mean the act of writing the code, I think it will be an order of magnitude different, in that we are shifting towards writing prompts for tools like ChatGPT instead of writing code from scratch. There is no reason to think otherwise if you can get your code written faster. This will probably also be shaped by how the industry accepts it as a skill in new hirings; I see it as a skill people will display in resumes and interviews to get hired.

If by coding you mean the act of reasoning about a problem or a system in order to produce a solution to it, I think the usefulness of AI tools will be smaller, because most of that reasoning is not stored in a form that can be used to train those tools. The apparent reasoning these tools can produce today, and will probably produce in the next 10 years or so, comes from extrapolating the training data, and their capacity for generating valid ideas will be limited to a combinatorial shuffle of what they have seen in training.

A natural selection of some sort will probably occur over time in the programming field: prepared programmers will get better at using these tools and could come to dominate the industry as companies see the value of the programmer + AI combo, while unprepared programmers, with or without AI tools, won't be enough. Just make sure to know your stuff, people! 😄

In general, it will change; by how much will depend on a multitude of factors, including how people will learn. This is where teachers and educators like you come in, in thinking about and designing how people will learn in the next 5 to 10 years, so no pressure 😁.

author

Agree. I'm not sure I will only be writing natural language prompts any time soon, but surely this will become a significant part of my total coding time, in the same way as browsing through IntelliSense suggestions, debugging, or applying refactoring tools has become an important part of coding, beyond simply writing the code.

And yes, the other part of coding, that is, algorithmic thinking and other high-level considerations, from architecture to performance, will be even more important as we are freed from the low-level tasks.

Apr 21, 2023 · Liked by Alejandro Piad Morffis

More and more applications are no longer standalone. Most are, in fact, distributed systems: many programs working together to provide a service. Being able to think holistically, and about how to recover from errors, will be important. I don't know what it will look like concretely.

author

Yeah, the history of programming languages has mostly been about freeing us from the low-level thinking to be able to focus on the high-level, holistic picture.

Apr 22, 2023 · Liked by Alejandro Piad Morffis

Yes, but it was done *locally*. You abstract from the specifics of your machine/operating system/internal representations. In distributed programming, the system is many programs working together to achieve something. Can that be embedded at the language level? There have been proposals, like Erlang, but I find none very convincing yet.

author

But what is fundamentally different about distributed systems compared to other kinds of modular designs, like operating systems? I guess the question is, what has to be upgraded in our way of thinking (and teaching) about computational processes to face the qualitatively different challenges that distributed systems pose?

Apr 22, 2023 · Liked by Alejandro Piad Morffis

The problem is that you have to synchronize with other programs, but you have really no information on how they work. If there is a problem: is it because the other program doesn't work? Did it not receive the information? Was the problem in communicating with you? It raises questions about your own programming (do you have statelessness/idempotency properties?). None of those questions are real problems on a single computer.
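Idempotency is exactly the kind of property that answers those questions: if you can't know whether the other side got your message, make it safe to send twice. A minimal sketch (the payment names are hypothetical):

```python
# Minimal sketch of an idempotent handler: if a caller times out and retries,
# replaying the same request id must not charge the customer twice.

processed = {}  # request_id -> result (durable storage, in a real system)

def charge_card(amount):
    return f"charged {amount}"  # stand-in for the one real side effect

def handle_payment(request_id, amount):
    if request_id in processed:
        return processed[request_id]  # duplicate delivery: reuse cached result
    result = charge_card(amount)
    processed[request_id] = result
    return result

# A retry after a lost response is now harmless:
assert handle_payment("req-42", 100) == handle_payment("req-42", 100)
```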


Great idea (discussion). For a start, the barriers to entry are already lower. Learning the fundamentals takes you a lot further today than it did six months ago. That's the first big change, and it's already here.

author

Uhm, interesting. Why do you think lower entry barriers are a challenge?


"change" not "challenge". I think it's a positive thing. More people will be able to get into programming. We've seen this shift already in recent decades, also thanks to languages such as Python (I myself come from a scientific background, not CS), but this new tech will broaden the pool of people who get into coding further. That's good.

It also means that newcomers can do interesting and useful stuff much earlier. One of the problems with learning to code from scratch is that, for a while, there isn't much you can do until you've learned more intermediate and advanced topics. That may change now.

author

Got it, I misread that :) I agree with you, and to speak to your second point, the moment could be near when a kid with a couple of natural-language prompts could create a 3D environment with characters that talk and act. That could be super cool for getting kids into coding at an early age. I would love to see a game engine à la Scratch but using natural language, diffusion models, etc., so kids can create games super easily.
