CascadaScript and AI-Assisted Development

5 min read

Not long ago, Anthropic announced the Claude C Compiler: a C compiler developed by a team of AI agents over about two weeks, producing roughly 100,000 lines of Rust code, at a cost of around $20,000 in API usage. Granted, this would not have been possible if the LLM had not been trained on myriad similar projects, and if the agents had not had complete, human-curated C compiler tests at hand.

But it still hit a nerve in the compiler and programming language communities, where the sentiment against AI-generated code is much stronger than in many other industries - and for very good reason.

Nowadays everything runs on top of a programming language, and developers are accustomed to never having to deal with language and compiler bugs. Searching for a bug in the language is the last resort, after every other avenue has been exhausted. That is why, in projects like these, every piece of code is carefully curated, inspected, and tested. One can understand why many people in the field are openly hostile to AI-assisted development.

So what is CascadaScript, and what part does AI-assisted development have in it?

CascadaScript

CascadaScript explores mostly uncharted territory: what would ordinary imperative programming look like if safe concurrency were built into the execution model itself? Not bolted on with promises, callbacks, task graphs, or visual workflows, but made implicit in a language that is instantly familiar to JavaScript and Python developers.

You write code sequentially, as you always have. No special syntax, no orchestration. The runtime figures out what can run concurrently and runs it. Results are always identical to sequential execution - just faster.

Async concurrency is how you handle code that needs to wait on multiple things at once: a database query, an LLM call, a stateful API. Done right, it is dramatically faster. Done wrong, it is one of the most common sources of subtle bugs. Managing it either falls on the developer, manually reasoning about time, order, and dependencies, or requires a specialised language with unfamiliar syntax and a completely different way of thinking.

CascadaScript offers a third way: familiar imperative code, with concurrency transparently handled by the runtime. JavaScript and Python can one day have this - with the right constraints. CascadaScript is the proof.
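To make the contrast concrete, here is a sketch in plain JavaScript, not CascadaScript (whose syntax is not shown in this post). The `fetchUser` and `fetchOrders` names are illustrative placeholders. The first function is the sequential style a CascadaScript developer would write; the second is the manual orchestration the runtime would otherwise leave to you.

```javascript
// Sequential style: easy to read and reason about, but the two
// independent awaits run one after the other. If each call takes
// 100ms, the total is roughly 200ms. This is the shape of code a
// CascadaScript runtime would parallelize automatically.
async function sequential(fetchUser, fetchOrders) {
  const user = await fetchUser();
  const orders = await fetchOrders();
  return { user, orders };
}

// Manual concurrency in today's JavaScript: roughly 100ms total,
// but the developer must work out which calls are independent and
// keep that analysis correct as the code evolves.
async function manual(fetchUser, fetchOrders) {
  const [user, orders] = await Promise.all([fetchUser(), fetchOrders()]);
  return { user, orders };
}
```

Both functions produce identical results; only the timing differs. That equivalence is the guarantee the post describes: the runtime may reorder the waiting, never the outcome.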

Back to AI

AI, used wrongly, can quickly derail any complex project and produce AI slop. Used correctly, it enables things that would otherwise be too costly to attempt: deep refactoring, architectural restructuring, rapid exploration of different approaches, and systematic consolidation. The result can be a codebase that is more coherent, more elegant, and more maintainable than the one you started with.

This matters especially for a project like CascadaScript, which is essentially research. It means navigating unexplored territory where there are no established patterns to follow, no prior art to copy, and where the cost of a wrong architectural decision compounds quickly. CascadaScript does not use dependency graphs or dataflow execution in the usual sense. The concurrency built into it is not a magic switch or a single universal algorithm that makes everything possible. It is a web of cooperating algorithms.

That is exactly where AI assistance, used well, earns its place.

Not long ago, I arrived at a new concurrency algorithm: faster, more universal, and better in every important way, opening the path to new features that were not possible before. But it requires a fundamentally different underlying architecture, and the change ripples through the entire codebase.

There are two tempting paths.

One is to spend another year carefully rewriting everything from scratch. The other is to reach for the AI sledgehammer: keep pushing until it works, stable and clean code be damned.

Neither is right.

The real power of AI is something in between: let it carry the weight of the mechanical transformation while you stay in control of the architecture. Deep, expensive refactoring that would otherwise be too risky to attempt - done systematically, and done right.

The Workflow

Here is the workflow I use.

1. Have a Well-Defined Task

Iterate in your head, in your editor, or on a napkin until you have a solid idea of how the change works and how it should be implemented.

You can use AI, but it should not discover the architecture for you. It can challenge it, refine it, and expose gaps, but you should have a solid idea of what you are trying to build.

2. Write It Down

Do not ask the AI to write the initial plan for you. Write dry, short, to-the-point descriptions. Include the essentials. Leave no ambiguity. Skip the things that can be logically inferred.

3. Generate Q&A

Ask the AI to generate Q&A: both the questions and the answers it thinks are correct.

Most answers will be correct, or will need only small corrections. Some will be completely wrong. That is useful. Wrong answers reveal places where your plan is underspecified or where the model is likely to wander.

Iterate until you have the details and edge cases covered. Use different LLMs. Use your brain.

Here is an example of the initial architecture proposal and the notes from the Q&A: extends architecture raw notes.

4. Generate a Technical Design Document

Ask the AI to generate a technical design document from the proposal, then iterate over it. Read every detail. Ask different LLMs to review it. Evaluate every review point yourself.

The TDD is not there to make the project feel formal. It is there to make hidden assumptions visible before they become code.

5. Create Implementation Steps

Create the implementation steps: each small, but not necessarily atomic, step required to implement the task.

The steps should be concrete enough that progress can be checked, but not so small that the AI loses sight of the architectural direction.

6. Divide the Work into Phases

Divide everything into phases, each covering many implementation steps. Things should not be implemented one step at a time: asked to finish a single step, the AI will add extra code just to make things run, often without following the broader plan.

Each phase should bring the project to a working, testable state, with as little temporary scaffolding as possible.

Let the AI write the initial tests, but evaluate them yourself. Are all conditions handled? Are all edge cases covered? Are the tests proving the actual design, or only proving that the current implementation behaves like itself?
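That last question is worth a concrete illustration. The sketch below is hypothetical (the `sortByAge` utility and both tests are invented for this example): the weak test re-derives its expected value from the same logic as the implementation, so it passes no matter what the code does, while the strong one pins the designed outcome independently.

```javascript
// Hypothetical utility under test.
function sortByAge(people) {
  return [...people].sort((a, b) => a.age - b.age);
}

// Weak: the expected value is computed with the implementation's own
// comparator, so this test only proves the code behaves like itself.
// It would still pass if the design intent were silently inverted.
function weakTest() {
  const input = [{ age: 30 }, { age: 20 }];
  const result = sortByAge(input);
  const expected = [...input].sort((a, b) => a.age - b.age);
  return JSON.stringify(result) === JSON.stringify(expected);
}

// Strong: the expected outcome is stated as an independent fixture
// taken from the design ("ascending by age"), not from the code.
function strongTest() {
  const result = sortByAge([{ age: 30 }, { age: 20 }]);
  return result[0].age === 20 && result[1].age === 30;
}
```

AI-generated tests tend toward the weak shape, because mirroring the implementation is the path of least resistance. Rewriting expected values as independent fixtures is exactly the kind of evaluation that cannot be delegated.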

7. Implement a Phase

Ask the AI to implement a phase. If new problems or tasks arise, ask it to stop so you can evaluate. Unplanned obstacles happen. Plans can change. Sometimes patching forward is not the right move. Change the plan and go back if necessary.

Have the AI review its own code. Ask other LLMs too. Then have a cursory look at whether the AI followed the plan and did not do something obviously wrong.

Depending on the task, you may decide not to do a thorough review after every phase. Some phases are intermediate by nature, with many things still wrong, unnecessary, or suboptimal. That can be acceptable. It is not always beneficial to micromanage at this stage. Use your discretion.

8. Review the Whole Picture

When all phases are finished, review the whole picture thoroughly, even if you reviewed each phase along the way.

To inspect all changes across multiple commits, you will need tooling that lets you view everything changed so far as one coherent set. The excellent Git Tree Compare VS Code extension is good for this. I have also been developing a similar extension, not yet released, for exactly this purpose: gitbase.

With everything in place, this is the right time to check everything and make sure you understand every bit of code, that it is correct, and that it matches your vision.

Be Ready to Start Over

The most important thing is this: at every step, ask whether you are happy with the direction things are going. Be ready to scrap everything and start over. This does not happen very often, but it does happen.

AI assistance does not remove the need for judgment. It makes judgment more important. The code can move faster than before, which means your architectural decisions matter sooner and compound harder.

Learn About CascadaScript

If you are still curious about CascadaScript, it is nearing the end of a big transition and is ready for you to try.