Joonas Ruotsalainen

I'm a full-stack developer and DevOps engineer with over 15 years of experience specializing in building scalable cloud applications and distributed systems.

Programming languages: JavaScript (Node.js), TypeScript, Golang, C++, Python

Technologies: Kubernetes, Terraform, Ansible, React, MongoDB, PostgreSQL, LLM, DevOps, Linux, blockchain

Cloud: GCP, AWS

Education: M.Eng in Modern Software and Computing Solutions

🤖 AI tools I recommend

I believe AI is the future of software development. These are the tools I've found most effective in my own workflow for boosting productivity and code quality.

  • Opencode – My favourite coding CLI tool at the moment.
  • Z.ai Code Subscription – Currently the best-value LLM coding subscription. Use GLM 4.6 for coding. It's not the smartest model, but with good prompting and planning it's sufficient for most tasks. Use my referral link for a discount: https://z.ai/subscribe?ic=E2ARZ5VRHU
  • Codex CLI – One of the smartest coding tools out there, though usage is quite limited on OpenAI's subscription. I recommend it for complex tasks.
  • Gemini CLI – Good for planning, UI-related tasks, and writing.
  • Context7 MCP – Connect this MCP to a coding CLI like Opencode to get up-to-date documentation for frameworks and libraries.
  • Playwright MCP – Control your browser from your AI CLI tool and automate UI testing (a sketch of the kind of test an agent can drive follows this list).
  • n8n – Perfect for connecting AI tools and automating many kinds of tasks
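
As an example of what the Playwright MCP enables, here is roughly the kind of UI test an agent can generate and then run. This is a minimal sketch, not a real project: the URL, labels, and expected messages are placeholders.

```ts
import { test, expect } from '@playwright/test';

// Hypothetical smoke test an agent might write while driving the browser.
// The URL and selectors are placeholders.
test('login form shows an error on bad credentials', async ({ page }) => {
  await page.goto('https://example.com/login');
  await page.getByLabel('Email').fill('user@example.com');
  await page.getByLabel('Password').fill('wrong-password');
  await page.getByRole('button', { name: 'Sign in' }).click();
  await expect(page.getByRole('alert')).toContainText('Invalid credentials');
});
```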

Context Management in AI Development

The most important aspect of AI-assisted coding is context management. Without a structured approach, LLMs drift off task or overwrite working logic. These are the strategies I have found most effective:

  • Keep tasks small and scoped to a single goal.
  • Maintain a clear to-do list for the agent, rather than using single, long prompts.
  • Split large files and components to give each a focused purpose.
  • Use tools that support separated contexts for different tasks.
  • Guide the AI by pointing it to relevant files and folders.
  • Use Model Context Protocols (MCPs) like Context7 to automatically fetch up-to-date documentation.
  • Summarize and compact the current context frequently to keep the AI focused.
  • Pull in only what is needed. Do not load the whole repo or its documentation.
  • Know the context window size of the model. GLM 4.6, for example, has a 200k-token context window (see the budgeting sketch after this list).
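
To make the last two points concrete, here is a minimal TypeScript sketch of what this kind of budgeting can look like. The 4-characters-per-token estimate is only a rough heuristic, and the file paths, window size, and 50% budget are illustrative assumptions, not a real setup.

```ts
import { readFileSync } from 'node:fs';

// Rough heuristic: ~4 characters per token. Good enough for budgeting,
// not for exact accounting.
const estimateTokens = (text: string): number => Math.ceil(text.length / 4);

// Illustrative numbers: stay well under a 200k-token window so there is
// headroom left for instructions, reasoning, and the model's response.
const CONTEXT_WINDOW = 200_000;
const BUDGET = CONTEXT_WINDOW * 0.5;

// Only the files the current task actually needs (placeholder paths).
const relevantFiles = ['src/auth/login.ts', 'src/auth/session.ts'];

let used = 0;
const context: string[] = [];

for (const path of relevantFiles) {
  const content = readFileSync(path, 'utf8');
  const cost = estimateTokens(content);
  if (used + cost > BUDGET) {
    console.warn(`Skipping ${path}: would exceed the context budget`);
    continue;
  }
  context.push(`// ${path}\n${content}`);
  used += cost;
}

console.log(`Packed ${context.length} files, ~${used} of ~${BUDGET} budgeted tokens`);
```

A check like this forces you to pick only the files the task needs instead of dumping the whole repository into the prompt.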

Keeping AI-Generated Code Maintainable

While LLMs make writing code easier, I have found that keeping it maintainable requires a disciplined approach. Here are some of the practices I follow to keep my AI-generated codebases clean and easy to manage:

  • Versioned prompts: Save useful feature prompts in files and keep them under version control.
  • Automatic formatting: Use automated formatting tools such as Prettier, ESLint, or clang-format so the model focuses on logic, not layout.
  • Linting and type checks: Run static analysis after each AI change. Catch unused variables, missing types, and unsafe imports early.
  • Constant code review: Treat every AI change like a pull request from a junior developer. Read diffs and ask the model to explain decisions before merging.
  • Git discipline: Commit and branch often. Small, isolated commits make it easy to revert or compare outputs.
  • Unit tests: Generate them after the code works and has been verified. Use tests to lock in correct behavior and prevent regressions (see the sketch after this list).
  • Context hygiene: Do not spend the model's context on linting or formatting. Compact the context often and keep it focused on reasoning and refactoring.
  • Do not be afraid to throw away AI code: If it feels wrong, start over. Frequent commits make it safe to revert. Regenerating code costs seconds. Maintaining it costs hours.
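
As a sketch of the unit-test point, here is what locking in verified behavior can look like with Vitest. The formatPrice module is a hypothetical example, not real code from my projects.

```ts
import { describe, expect, it } from 'vitest';
import { formatPrice } from './formatPrice'; // hypothetical module under test

// Regression tests written after the AI-generated function was verified by hand.
// They lock in the behavior so later AI edits cannot silently change it.
describe('formatPrice', () => {
  it('formats whole euros without decimals', () => {
    expect(formatPrice(1200)).toBe('12 €');
  });

  it('keeps cents when present', () => {
    expect(formatPrice(1299)).toBe('12,99 €');
  });
});
```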