A prompt I saved while spinning up a side project called Plague Doctore — a constructive AI-agents app. Saving it here as a record of the shape of the idea on this date, not as a polished spec.
/init is analyzing your codebase… your task is to create a project, Plague Doctore - Constructive AI Agents. Start with the latest Bun, which is already installed, to initialize a Next.js app (use the canary version of Next.js). The goals of this project are described below, and you must disperse this knowledge into the seed of this codebase so it reaches from every root to every branch. All project dependencies are to be installed at their latest versions, apart from the react, react-dom, and next libraries. Use the AI SDK to efficiently execute the specific purposes of this project. Create an outline of the project; we're going to go fast because we know exactly the next steps to take, and we ensure strictness and rigid security, stability, and scalability through procedures of AI-assisted linting using the Claude SDK (https://docs.anthropic.com/en/docs/claude-code/sdk), which shall also be installed and depended on to perform our duties to our members. Focus on the current task of planting and documenting exactly my instructions as I have outlined them, including the documentation links, with context each time on how the abstraction at each provided URL is intended to be implemented. Create a plan that lets us execute sequentially, ensuring each step of the outline is vetted to make sense using resources, and VERY IMPORTANTLY, SAYING I DO NOT KNOW RATHER THAN TAKING hallucinated, non-factual material AND USING A HALF-BAKED GUESS. Use 100% tough and rad crazy all-out TypeScript, strict to the MAXIMUM, baby. Create all types in a types file, or, if shared, in a shared global types folder for clarity. ALWAYS USE LIBRARY TYPES - check whether the library exports a type rather than writing a simple interface or type looser than the actual exported one, especially, but not limited, to enums and database model types; prefer the TypeScript utility types Omit and Pick rather than a plain ole jane.
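The Omit/Pick preference above can be sketched like this. `WorkspaceRow` is a hypothetical stand-in for a library-exported database model type, not a real export from any library:

```typescript
// Hypothetical library-exported model type, standing in for e.g. an ORM row type.
type WorkspaceRow = {
  id: string;
  name: string;
  createdAt: Date;
  secretToken: string;
};

// Derive narrower types from the source of truth instead of re-declaring fields.
type WorkspaceSummary = Pick<WorkspaceRow, "id" | "name">;
type PublicWorkspace = Omit<WorkspaceRow, "secretToken">;

export function toSummary(row: WorkspaceRow): WorkspaceSummary {
  return { id: row.id, name: row.name };
}
```

If the library later adds or renames a field, the derived types pick it up automatically, which is the point of preferring `Omit`/`Pick` over a hand-rolled interface.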
Never disobey the orders I have prompted you with initially here, within the space of this repository to be planted. Here are some key features to grind on (sequential, with dependencies on previous tasks outlined clearly, allowing some later tasks to be parallelized in a full review). Once you are done initializing our workspace to grind, review everything to make sure it is accurate, abundantly framed, and absolutely against amok.
- Run an advanced and powerful application that leverages the totality of advanced artificial intelligence to execute work on a codebase. In the app, the user experience flows as follows: the user enters the app and sees a pleasant loading experience as the client connects to the server, downloading the user's current workspaces and relevant documents. The URL shall always reflect the user's current view, whether it be the workspaces page, the works page, the now page, or a subpage using a dynamic id (make sure to ALWAYS AWAIT NEXT.JS PARAMS https://context7.com/vercel/next.js/llms.txt and follow the latest practices, building this app with PPR). The home page contains a brief but navigable overview with a few quick actions from each of the main server-calculated model references. The workspaces list contains each folder the user has opened. There are also global settings for the user to view on a user page in a tab (tabs and modals must likewise be referenced via a hash in the browser URL for specifically nice navigation UX), containing global settings like prompts, MCP servers, and self-documentation to be ingested as knowledge. Each workspace has specific MCP servers, prompts, Plague Doctore's specialized context management capability (cook), and user-specific settings. Make the global and workspace utilities extensible and modularly composable. You'll need to think through and reason realistically about the following to determine practical applications for advanced assistance.
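The "ALWAYS AWAIT NEXT.JS PARAMS" rule above refers to Next.js 15 delivering dynamic route params as a `Promise`. A minimal sketch of the awaiting pattern (the function name and prop shape are assumptions; a real page component would return JSX):

```typescript
// In Next.js 15, a dynamic route's `params` prop is a Promise and must be awaited.
type WorkspacePageProps = {
  params: Promise<{ workspaceId: string }>;
};

// Stand-in for a server page body; returns a string instead of JSX so the
// awaiting pattern can be shown on its own.
export async function workspacePageTitle({ params }: WorkspacePageProps): Promise<string> {
  const { workspaceId } = await params; // never read params synchronously
  return `Workspace ${workspaceId}`;
}
```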
- The user must have installed postgres and redis.
- Use the bun redis library.
- Use trpc for end-to-end typesafe APIs https://context7.com/trpc/trpc/llms.txt
- We will recreate https://github.com/langchain-ai/langconnect in TypeScript to run in our backend and perform analysis of the codebase, lessons, etc. This is the reason for interfacing with the LangChain library from TypeScript (https://langchain-ai.github.io/langgraphjs/llms.txt): the LangGraph precalculates important context information, magically for the user, from an understanding of the prompt, the current workspace git status, lessons learned over time, open files, and more, using a modular composition pattern so we stay extensible enough to step into flows of the user experience.
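The "modular composition pattern" for context precalculation could look like the sketch below. All names are assumptions for illustration; this is not the LangGraph API, just the composition shape the bullet describes:

```typescript
// Each provider contributes one slice of precalculated context, or null if
// it has nothing relevant for this prompt.
export type ContextSlice = { source: string; summary: string };
export type ContextProvider = (input: { prompt: string }) => ContextSlice | null;

// Composing providers in a list means new context sources (git status,
// lessons, open files, ...) can be added without touching the core.
export function composeContext(providers: ContextProvider[], prompt: string): ContextSlice[] {
  return providers
    .map((provide) => provide({ prompt }))
    .filter((slice): slice is ContextSlice => slice !== null);
}

// Stubbed example providers.
export const gitStatusProvider: ContextProvider = () => ({
  source: "git-status",
  summary: "2 files modified",
});
export const lessonsProvider: ContextProvider = ({ prompt }) =>
  prompt.includes("refactor") ? { source: "lessons", summary: "prefer small diffs" } : null;
```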
- Look at the way chat was implemented in this GitHub repository and model our UI components on it: https://github.com/FranciscoMoretti/sparka
- Create a TODO.md that contains each task outlined and stages it for the MVP, then looks forward to versions toward 1.0.0 and beyond
- Leverage the principles of context here https://github.com/davidkimai/Context-Engineering such that the user experience follows the pattern to maximize user efficiency in their codebase
- Provide a simple file browser with a diff tool in the workspace UI, as well as a list of checkpoints that allow the user to go back to that point in time. Use a separate git worktree to make very specific file-level edits: by knowing ONLY the files that were modified and APPLYING the changes at that point in time on JUST THOSE FILES, give the user the ability to transcend the space-time continuum and go back AND FORWARD
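The checkpoint idea above can be sketched as a pure planner: given a checkpoint commit and only the files it touched, emit the git commands that restore just those files inside an isolated worktree. The individual git subcommands are standard; the surrounding flow and function shape are assumptions:

```typescript
// Plan the git commands for a file-level checkpoint restore, without executing
// anything. Returning argv arrays keeps the planner pure and testable.
export function planCheckpointRestore(
  commit: string,
  modifiedFiles: string[],
  worktreeDir: string,
): string[][] {
  return [
    // A detached worktree at HEAD isolates the restore from the user's tree.
    ["git", "worktree", "add", "--detach", worktreeDir],
    // Pull ONLY the checkpoint's modified files from that commit, nothing else.
    ...modifiedFiles.map((file) => ["git", "-C", worktreeDir, "checkout", commit, "--", file]),
  ];
}
```

A caller would hand these argv arrays to a process spawner; keeping planning and execution separate also makes the "go FORWARD" direction a matter of planning against a later commit.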
- Use state machines on multiple layers, in synchronicity, to perform user duties: create tasks and return results back into the user experience. Use the latest and greatest: https://context7.com/statelyai/xstate/llms.txt
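A minimal hand-rolled sketch of one such machine; a real implementation would use XState as linked above, and the states and events here are assumptions:

```typescript
// Tiny explicit-transition-table state machine for a task lifecycle.
export type TaskState = "idle" | "planning" | "executing" | "done";
export type TaskEvent = "START" | "PLAN_READY" | "FINISH";

const transitions: Record<TaskState, Partial<Record<TaskEvent, TaskState>>> = {
  idle: { START: "planning" },
  planning: { PLAN_READY: "executing" },
  executing: { FINISH: "done" },
  done: {},
};

// Unknown events leave the state unchanged, mirroring statechart semantics.
export function transition(state: TaskState, event: TaskEvent): TaskState {
  return transitions[state][event] ?? state;
}
```

Layering would mean running one machine per concern (task, chat session, worker) and forwarding events between them.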
- The AI SDK will be used as well, in its v5 version (https://context7.com/context7/v5_ai-sdk_dev/llms.txt), to leverage the latest and greatest benefits
- gemini sdk (https://context7.com/googleapis/js-genai/llms.txt) and claude sdk (https://context7.com/anthropics/anthropic-sdk-typescript/llms.txt) shall be used to perform various coding tasks.
- Rely on agent mode and allow the user to kill or stop the process with Ctrl-C, plus a simple UI experience for viewing in-progress procedures. Give the user the option to keep going without review, but make it an enum option such that one of the modes automatically determines whether it needs to use the function call "loop human into the madness"
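The review-mode enum might be shaped like this; the mode names and the `stepIsRisky` signal are assumptions, with only "loop human into the madness" taken from the text:

```typescript
// Review modes for agent runs; one mode decides per step whether to escalate.
export enum ReviewMode {
  AlwaysReview = "always-review",
  AutoContinue = "auto-continue",
  AutoWithEscalation = "auto-with-escalation",
}

// Decide whether to invoke the "loop human into the madness" function call.
export function shouldLoopHumanIn(mode: ReviewMode, stepIsRisky: boolean): boolean {
  switch (mode) {
    case ReviewMode.AlwaysReview:
      return true;
    case ReviewMode.AutoContinue:
      return false;
    case ReviewMode.AutoWithEscalation:
      return stepIsRisky;
  }
}
```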
- all functions should be modular and reusable anywhere plausible
- The user will work in their current directory chosen specific to the workspace, and workspaces can exist for more than one url, so we must distinguish this clearly and ensure we use GETTERS and SETTERS everywhere to ensure the objects obtained f
- KEEP NEXT.JS PAGES SERVER-RENDERED ALWAYS, COMPOSING THE UI in a way that wraps the data fetching to leverage PPR and shows loading states in component layers specific to each data fetch's response value and meaning to the component; make everything very simple and REDUCE COMPLEXITY to keep concerns separate.
- Implement a domain-expert collection that is modular to the codebase and provides context to the AI behind the scenes, while staying transparent to the user through an output markdown file of all input/output tokens with a TOC organized by chat, step in sequence, logs, etc. A domain expert is basically an id that can be created OR updated, recomposed from the following: "workspace-name:rule-overarching-pattern". The workspace-name is self-describing. The rule-overarching-pattern, however, involves deep integration: understanding the type of domain we are dealing with and providing an interface and an aggregate list of all rules to interact with, find, update, and utilize within a task of a work.
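The "workspace-name:rule-overarching-pattern" id from above, made concrete. The compose/parse helpers are assumptions about how such an id would be handled:

```typescript
// Structured form of the domain-expert id described in the text.
export type DomainExpertId = {
  workspaceName: string;
  ruleOverarchingPattern: string;
};

export function composeExpertId(id: DomainExpertId): string {
  return `${id.workspaceName}:${id.ruleOverarchingPattern}`;
}

export function parseExpertId(raw: string): DomainExpertId {
  const sep = raw.indexOf(":");
  if (sep === -1) throw new Error(`Invalid domain expert id: ${raw}`);
  return {
    workspaceName: raw.slice(0, sep),
    // Split on the FIRST ":" only, so the pattern half may itself contain ":".
    ruleOverarchingPattern: raw.slice(sep + 1),
  };
}
```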
- Tasks shall be determined at initialization from the plan options the user has checked. Plan options are presented sequentially and are designable by the user within the UI, but the core starting ones are PLAN, EXECUTE, and FOLLOWTHROUGH. Each step can use a different user-selected SDK from the ones enabled (the server should also check whether the user has each one available using a ping endpoint the user can refresh, e.g. check whether Claude is installed).
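A sketch of per-step SDK selection against availability pings. The step names and SDK names come from the text; the data shapes are assumptions:

```typescript
// The core plan steps and the per-step SDK choice described above.
export type PlanStep = "PLAN" | "EXECUTE" | "FOLLOWTHROUGH";
export type SdkOption = "claude" | "gemini";
export type StepConfig = { step: PlanStep; sdk: SdkOption };

// Keep only steps whose chosen SDK passed its availability ping; the caller
// would surface the dropped steps so the user can pick another SDK.
export function runnableSteps(
  configs: StepConfig[],
  available: Record<SdkOption, boolean>,
): StepConfig[] {
  return configs.filter((c) => available[c.sdk]);
}
```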
- The application will allow the user to leverage multiple user global and workspace specific agents and choose which ones perform what part of the tasks.
- AI chat writes to a markdown file as you chat with it to perform tasks on your codebase. Maximum output must be capped, and each chat session stored in its own dated folder named with the session id and (~) date. When you edit the file during a session, the conversation history in the messages sent to the AI is edited too, so that previous context is correctly referenced, with a clear correction note created by a one-time process that queues a job after the document is saved.
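The session-folder convention might be built like this; the exact layout, the `chats/` directory, and the `chat.md` filename are assumptions, with the "session id and (~) date" pairing taken from the text:

```typescript
// Folder name combining the session id and the (~) date, as the text describes.
export function sessionFolderName(sessionId: string, date: Date): string {
  const day = date.toISOString().slice(0, 10); // YYYY-MM-DD
  return `${sessionId}-~${day}`;
}

// Full path to the session's markdown transcript under a workspace root.
export function sessionMarkdownPath(root: string, sessionId: string, date: Date): string {
  return `${root}/chats/${sessionFolderName(sessionId, date)}/chat.md`;
}
```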
- The application remembers the MCP server configuration and, as part of the PLANNING step, uses the LangChain collection of current MCP servers to pick which ones to enable (the collection must always be up to date when running, since it is a background embedding process using database models for everything, including status). This must be a method call so the other steps can always re-evaluate the need for MCP at each step in the sequence.
- Use a separate bun app that will be a worker in a workers folder. The workers should be modular enough to be extensible and allow engineering expansion of specific worker processes. Since this is a next.js app with bun, make sure to run with "--conditions=react-server" like "bun --conditions=react-server workers/start-chat-worker.ts".
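One way to keep the workers modular, as the bullet asks, is a small registry so new worker processes can be added without changing the runner. The shapes and names are assumptions:

```typescript
// A worker is a named, independently runnable unit of background work.
export type Worker = { name: string; run: () => Promise<void> };

const registry = new Map<string, Worker>();

// Each workers/*.ts entry point registers itself here.
export function registerWorker(worker: Worker): void {
  registry.set(worker.name, worker);
}

export function getWorker(name: string): Worker | undefined {
  return registry.get(name);
}
```

A start script like `bun --conditions=react-server workers/start-chat-worker.ts` would then just look up and run one registered worker.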
- When a user prompts the agent, we shall follow steps the user can choose from, defined globally and per workspace, that are stored and loaded on entry into a workspace.
- Set up Tailwind v4 and shadcn/ui for components, remembering to use a design system.

Most of these prompts get thrown away. This one is shaped like a manifesto — half spec, half pep talk — and that shape is worth remembering even if Plague Doctore never ships. The interesting artifact is the prompt as a genre, not the project it describes.