Cascada Script: Parallel by Default, Sequential by Exception


Cascada Script is a scripting language with JavaScript and TypeScript integration that turns async programming upside down: everything runs in parallel by default, you never write await, and concurrency issues like race conditions are no longer a problem - by design.

The Nightmare: Why Async Programming Is So Hard

async/await is a huge improvement over callback hell. But the moment you want performance, you're back to being a concurrency expert.

The fundamental problem: you manually orchestrate what runs when. Which operations can run together? Which must wait? You figure it out yourself, every time.

Shared State

Concurrent operations read and write shared variables or objects, and the complexity explodes:

  • Manual dependency tracking. Should operation A await operation B before updating state? Which operations can safely run in parallel? You trace through every possible execution order in your head.

  • Get the order wrong: race conditions. Operation A reads a value, operation B modifies it, operation A writes based on the stale value. Your state is corrupted (a concrete sketch follows this list).

  • Over-serialize to be safe? Make everything await everything else to avoid conflicts. Now your parallel code runs sequentially anyway.

  • Changes ripple everywhere. Add one new operation that touches shared state? Review every other operation to ensure no new conflicts. Your code becomes brittle and impossible to refactor.
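
That second bullet is the classic lost-update race. Here's a minimal sketch of it in plain JavaScript - the shared total and the fetchAmount helper are made up purely for illustration:

let total = 0;
const fetchAmount = async (id) => id * 10;  // stand-in for a real async call

async function addPayment(id) {
  const current = total;                 // read shared state
  const amount = await fetchAmount(id);  // suspend; the other task runs in the meantime
  total = current + amount;              // write based on a stale read - one update is lost
}

// Both calls read total === 0 before either writes,
// so whichever finishes last silently overwrites the other's update.
await Promise.all([addPayment(1), addPayment(2)]);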

State Machines & Batch Synchronization

To avoid this complexity, most frameworks use state machines (with chokepoints between states) or batch synchronization (parallel tasks that must all complete). These are simpler to reason about, but picture it: 19 API calls finish in 50ms while one takes 3 seconds. Everything waits at the chokepoint for that single slowest operation, and you still coordinate the convergence by hand.
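
In code, batch synchronization is usually just one big Promise.all. A rough sketch (endpoints, fetchJson, and processBatch are placeholder names):

// All 20 calls fan out in parallel...
const results = await Promise.all(endpoints.map((url) => fetchJson(url)));

// ...but nothing below this line runs until the slowest call returns.
// 19 responses that arrived in 50ms sit idle waiting on the 3-second one.
processBatch(results);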

The payoff for getting this right is enormous—optimal orchestration can cut response times in half or more. But the manual complexity? That's where code breaks.

What if you could skip the manual orchestration entirely? That's exactly what Cascada Script does.

Let's dive into how it flips the script on async programming.


1. ⚡ Parallel by Default

The most fundamental shift in Cascada is that it's parallel by default. In most languages, code runs line by line, one after the other. In Cascada, independent lines of code run at the same time.

Think about fetching a user's profile, their preferences, and their analytics. These are three separate operations that don't depend on each other. In traditional JavaScript, you'd reach for Promise.all to run them concurrently. Cascada does it automatically.
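
In plain JavaScript, that coordination is your job. A rough sketch, using the same placeholder fetch functions as the Cascada example below:

// Traditional JavaScript: you spell out the concurrency yourself
const [user, preferences, analytics] = await Promise.all([
  fetchUser(123),
  getUserPreferences(123),
  getAnalytics(123),
]);

const data = { user, preferences, analytics };

The Cascada version needs none of that ceremony: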

// These three operations start at the same time, automatically!
var user = fetchUser(123)
var preferences = getUserPreferences(123)
var analytics = getAnalytics(123)

// Build your result with the @ operator
@data.user = user
@data.preferences = preferences
@data.analytics = analytics
// Result: { user: {...}, preferences: {...}, analytics: {...} }

No Promise.all, no await, no special syntax. If it can run in parallel, it will.


2. 🚦 Data-Driven Flow: Code Runs When Its Inputs Are Ready

You might be thinking, "What if one operation does depend on another?" Cascada has you covered.

The engine automatically analyzes the data dependencies in your script. An operation will only run once all the variables it needs are ready. This simple rule guarantees the correct order of execution and completely eliminates race conditions by design.

// 1. This runs first
var user = fetchUser(123)

// 2. This depends on 'user', so Cascada waits for it to resolve
var userGreeting = "Hello, " + user.name

// 3. Meanwhile, these run in parallel with everything above
var posts = fetchPosts(123)
var comments = fetchComments(123)

Here, userGreeting won't be calculated until fetchUser(123) is complete and the user variable has a value. But posts and comments start fetching immediately since they don't depend on user.
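
For comparison, hand-wiring the same mix in JavaScript means deciding, call by call, what to start, what to await, and when - a sketch with the same placeholder helpers:

// Start the independent fetches immediately
const userPromise = fetchUser(123);
const postsPromise = fetchPosts(123);
const commentsPromise = fetchComments(123);

// Only the greeting has to wait for the user
const user = await userPromise;
const userGreeting = "Hello, " + user.name;

const [posts, comments] = await Promise.all([postsPromise, commentsPromise]);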

No more subtle timing bugs that only appear in production. The engine orchestrates everything automatically.


3. ✨ Implicit Concurrency: Write Business Logic, Not Async Plumbing

This is where the magic really happens. Notice the lack of await in the examples above? In Cascada, you never have to think about whether a variable holds a value or a promise. You just use it.

// Traditional JavaScript: Promise hell
const userPromise = fetchUser(123);
// const name = userPromise.name;    // Wait, can't do this - it's a property of a Promise!
const actualUser = await userPromise;
const name = actualUser.name;        // Finally!

Cascada makes this completely invisible:

// Cascada: Just use it
var user = fetchUser(123)

// The @data command builds our final JSON output
@data.greet = "Hello, " + user.name
@data.email = user.email
@data.status = user.isActive ? "active" : "inactive"

Forget .then() and forget manually tracking promises. Cascada handles the asynchronous state invisibly under the hood. You can pass a "future value" (a promise) into a function or use it in an expression, and it just works.

This lets you focus entirely on your business logic, not the async plumbing.


4. ➡️ Implicitly Parallel, Explicitly Sequential

Of course, sometimes you absolutely need things to happen in a specific order, especially when dealing with operations that have side effects - like writing to a database or making stateful API calls.

For these cases, Cascada provides a simple escape hatch: the ! marker. Marking a call or path with ! enforces a strict sequential order on that specific path, without slowing down the rest of your script. Calls in the chain run in order even when they share no data dependency.

// The ! marker creates a sequential chain for a specific path
var account = getBankAccount()

// 1. This MUST finish first
account!.deposit(100)
// 2. This waits for deposit to complete
account.getStatus()
// 3. This waits for getStatus
account!.withdraw(50)

// Meanwhile, these run in parallel with everything above
var preferences = getUserPreferences()
preferences!.update(prefs)   // Sequential chain for preferences too
var analytics = fetchAnalytics()

The ! creates a sequential chain for just that specific path, without affecting the parallelism of everything else. It's parallel by default, sequential by exception - the opposite of traditional programming.
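
The hand-written equivalent is the familiar juggling act: each sequential chain becomes its own async function, and you still coordinate the parallelism yourself. A sketch (prefs and the helpers are assumed to exist, as above):

// Each sequential chain becomes its own async function...
const accountChain = async () => {
  const account = await getBankAccount();
  await account.deposit(100);
  await account.getStatus();
  await account.withdraw(50);
};

const preferencesChain = async () => {
  const preferences = await getUserPreferences();
  await preferences.update(prefs);
};

// ...and you run the chains in parallel by hand
await Promise.all([accountChain(), preferencesChain(), fetchAnalytics()]);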


5. 📋 Chaotic Execution, Predictable Output

While independent operations run in parallel and can finish in any order, Cascada guarantees that your final output is assembled predictably.

Data manipulation commands (like adding an item to a list) are applied in the exact order they appear in your script. So even if a for loop's iterations complete out of order, the final array will be structured correctly.

var userIds = [101, 102, 103]

// All three fetchUserDetails calls run in parallel
// Maybe user 103's data comes back first
for id in userIds
  var details = fetchUserDetails(id)
  @data.users.push(details.name)
endfor

// Even so, the final output is ALWAYS predictable and in order:
// { "users": ["Alice", "Bob", "Charlie"] }

This gives you the best of both worlds: maximum I/O performance from parallel execution, with the reliability of sequential data assembly.
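
For contrast, the obvious hand-written JavaScript loop pushes results in completion order, so the array can come back scrambled - a sketch with the same placeholder helper:

const userIds = [101, 102, 103];
const users = [];

await Promise.all(userIds.map(async (id) => {
  const details = await fetchUserDetails(id);
  users.push(details.name);  // pushed whenever each call finishes - order is unpredictable
}));

// users might be ["Charlie", "Alice", "Bob"] on one run and something else on the next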


6. ☣️ Dataflow Poisoning: Resilient Error Handling

Traditional try/catch blocks don't work well in a massively parallel system. If one of fifty concurrent API calls fails, should everything stop?
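
The usual JavaScript answer is that everything stops: with Promise.all, a single rejection throws away every other result. A sketch with the same placeholder fetch functions:

try {
  const [user, posts, comments] = await Promise.all([
    fetchUser(123),
    fetchPosts(123),
    fetchComments(123),
  ]);
} catch (err) {
  // fetchPosts failed, but the user and comments results are discarded along with it
}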

Cascada uses a more resilient model called dataflow poisoning. When an operation fails, it doesn't throw an exception; it produces an Error Value. This error then "poisons" any other variable or operation that depends on it. Crucially, unrelated operations continue running completely unaffected.

// Let's pretend fetchPosts() fails, but fetchUser() succeeds
var user = fetchUser(123)            // ✅ Succeeds
var posts = fetchPosts(123)          // ❌ Fails and becomes an Error Value
var comments = fetchComments(posts)  // ☣️ Poisoned because it uses 'posts'

// Now, let's see what happens:
@data.userName = user.name     // ✅ Works fine, uses the successful result
@data.postCount = posts.length // ❌ Becomes an error because 'posts' is poisoned

// You can check for errors and provide fallbacks
if posts is error
  posts = []  // Assign a default value
endif

@data.postCount = posts.length // ✅ Now works with our fallback

This approach isolates failures, prevents corrupted data from producing incorrect results, and makes your workflows incredibly robust.

The beauty of dataflow poisoning is that you have complete control over recovery. You can detect errors at any point in your workflow (using is error conditions), repair them with fallback values, log them for monitoring, or even retry failed operations - all while the rest of your script continues executing normally.


7. 💡 Clean, Expressive Syntax

Cascada offers a clean, expressive syntax that will feel instantly familiar if you know JavaScript. You can use variables, loops, conditionals, and build reusable macros.

// Variables, conditionals, loops
var discount = 0

if userType == "premium"
  discount = 0.10
endif

// Build reusable macros
macro formatPrice(amount, discount=0)
  var final = amount * (1 - discount)
  @text << "$" + final
endmacro

formatPrice(100, discount)  // Outputs: $90

You get variables (var), conditionals (if/else), loops (for/while), macros for reusability (macro), and modular code organization with import and extends.


8. ⚙️ Under the Hood: Chaos Managed Gracefully

Underneath all this simplicity is a powerful engine that:

  • Tracks data dependencies automatically so operations run when their inputs are ready
  • Handles concurrent execution safely with maximum I/O throughput
  • Guarantees consistent, deterministic outputs even with chaotic parallel execution
  • Propagates errors without crashing so failures are isolated and manageable

You focus on what you want to happen. The engine handles how to make it happen safely and efficiently.


🎯 Why This Matters

Cascada isn't trying to replace JavaScript - it's designed to be the backbone of your data layer.

Use it to compose complex workflows that wire together LLMs, APIs, databases, and external services. By inverting the traditional programming model - parallel by default, sequential by exception - it lets you build high-performance data pipelines that are surprisingly simple and intuitive, all with maximum I/O throughput and minimum mental overhead.

The result? Code that reads like synchronous logic but executes with the performance of carefully orchestrated async operations. You get to focus on what you're building instead of how to manage promises, race conditions, and execution order.

And the best part? When you look at your Cascada script six months later, you'll actually understand what it does.


🚀 Ready to Simplify Your Async Code?

Cascada is a work in progress under active development and evolving quickly. Install it with:

npm install cascada-engine

And start writing async code that makes sense:

import { AsyncEnvironment } from 'cascada-engine';

const env = new AsyncEnvironment();

// The context provides the async functions the script calls.
// These are simple mocks, just to keep the example self-contained.
const context = {
  fetchUser: async (id) => ({ id, name: 'Alice' }),
  fetchPosts: async (userId) => new Array(42).fill({ userId }),
};

const script = `
  var user = fetchUser(123)
  var posts = fetchPosts(user.id)

  @data.welcome = "Hello, " + user.name
  @data.postCount = posts.length
`;

const result = await env.renderScriptString(script, context);
// { welcome: 'Hello, Alice', postCount: 42 }

⚠️ Heads up! Cascada is a new project. You might run into bugs, and the documentation is catching up with the code. Your feedback and contributions are welcome as we build the future of asynchronous programming.


Learn More

👨‍🍳 The Kitchen Chef's Guide to Concurrent Programming with Cascada - Understand how Cascada works through a restaurant analogy - no technical jargon, just cooks, ingredients, and a brilliant manager who makes parallel execution feel as natural as following a recipe

📖 Cascada Script Documentation

🧩 Cascada GitHub Repository

🤖 Casai - an AI orchestration library built on Cascada for effortless, parallel agent and workflow design.


Final Thoughts

If you've ever fought through promise hell, callback pyramids, or race conditions, Cascada might just feel like magic.

It turns async programming from a juggling act into a walk in the park. You write code like a human - simple, sequential, intuitive - and let the runtime handle the parallelism and safety behind the scenes.

The future of async programming isn't about getting better at promises. It's about not having to think about them at all.