Developer path

Load the public Java R&D bundle, connect the K² MCP server to your coding agent, and verify that the agent receives scoped, cited evidence from docs, source, tests, and guide corpora. Commands are intentionally explicit so a developer can reproduce the public bundle path.

Clone and inspect the public bundle

The bundle contains sanitized JSONL corpora for docs, code, and guides, plus MCP, agent, feed, and pipeline examples.

git clone https://github.com/knowledge2-ai/k2-coding-context-demo.git
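After cloning, it is worth a quick look at the layout the rest of this guide relies on; src, tests, scripts, and docs are the directories referenced by the commands and paths below.

cd k2-coding-context-demo
ls src tests scripts docs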
Verify the harness, sample scorer, and configuration without contacting K² or any external model.

PYTHONPATH=src python -m unittest discover -s tests
PYTHONPATH=src python -m k2_java_rd_demo.cli show-config
PYTHONPATH=src python -m k2_java_rd_demo.cli score-sample
Use your own K² project and API key. Keep credentials in environment variables or a local, git-ignored .env file. If you do not have a project yet, create one in the K² Console.
export K2_API_KEY="<your-k2-api-key>"
export K2_API_HOST="https://api.knowledge2.ai"
PYTHONPATH=src python -m k2_java_rd_demo.cli bootstrap-customer-demo --execute
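If you prefer the .env route, the same values can live in a git-ignored file. The variable names below are the ones used elsewhere in this guide; K2_PROJECT_ID appears again in the MCP configs further down.

# .env (git-ignored; never commit this file)
K2_API_KEY=<your-k2-api-key>
K2_API_HOST=https://api.knowledge2.ai
K2_PROJECT_ID=<your-k2-project-id>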
Use the same stdio server entry for Codex, Claude Code, Cursor, or any MCP-capable agent. Inline examples are below; the full config is in the asset bundle.
docs/customer-demos/demo-java-customer/k2-assets/examples/mcp-config.example.json
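For orientation, here is a minimal sketch of what a stdio entry point like scripts/k2_java_rd_mcp_server.py could look like, built on the official mcp Python SDK. The search_corpora tool name and its body are placeholders, not the bundle's actual implementation.

import os

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("k2-java-rd")

@mcp.tool()
def search_corpora(query: str) -> str:
    """Placeholder tool: a real server would call the K² API and return cited evidence."""
    # Credentials come from the environment, matching the configs below.
    if "K2_API_KEY" not in os.environ:
        return "error: export K2_API_KEY before starting the agent"
    project = os.environ.get("K2_PROJECT_ID", "<unset>")
    return f"[stub] project={project}, query={query!r}"

if __name__ == "__main__":
    mcp.run()  # FastMCP uses the stdio transport by default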
Ask for a Flink REST-handler implementation plan and verify the answer cites guide, docs, source, and test evidence.
Which guide rules, source files, and tests should I use before adding includeMissing to JobVertexWatermarksHandler?
These examples all point at the same K² stdio MCP server. Use the format your coding agent expects and keep keys in environment variables.
{
"mcpServers": {
"k2-java-rd": {
"command": "python",
"args": ["scripts/k2_java_rd_mcp_server.py"],
"env": {
"K2_API_KEY": "${K2_API_KEY}",
"K2_API_HOST": "https://api.knowledge2.ai",
"K2_PROJECT_ID": "${K2_PROJECT_ID}"
}
}
}
}

[mcp_servers.k2-java-rd]
command = "python"
args = ["scripts/k2_java_rd_mcp_server.py"]
env_vars = [
  "K2_API_KEY",
  "K2_API_HOST",
  "K2_PROJECT_ID",
  "K2_FLINK_DOCS_CORPUS_ID",
  "K2_FLINK_CODE_CORPUS_ID",
  "K2_GUIDES_CORPUS_ID"
]
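Before pointing an agent at either config, you can launch the server by hand from the repo root to confirm the entry point starts; it should sit waiting on stdio (Ctrl-C to exit). Whether the script needs PYTHONPATH=src like the CLI commands above is an assumption; drop it if the script is self-contained.

PYTHONPATH=src python scripts/k2_java_rd_mcp_server.py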
The first cited query should return a concise plan with role-separated evidence, not a generic Java web answer.
Once the demo query works, the useful question is how the same pattern maps to your own repo.
Most quickstart failures are configuration or indexing state, not model behavior.
If the server reports authentication or permission errors, confirm the key belongs to the selected K² project and is exported in the shell that starts the MCP server.
If the agent does not list the K² tools, restart the coding agent after editing the MCP config and check that the stdio command runs from the repo root.
If queries come back with no evidence, wait for ingestion and indexing to finish, then verify the corpus IDs in the environment.
If you hit rate or tier limits, reduce the demo query loop or move the project to the tier used for the pilot.
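A quick, K²-agnostic way to check the environment-related items above is to print the exported K2_ variables from the shell that launches the agent and compare them with the env_vars list in the TOML example.

printenv | grep '^K2_'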
Do not commit API keys, project IDs from private tenants, or raw evaluator dumps. Use environment variables locally, rotate keys after demos, and keep live scorecards outside public bundles unless they are scrubbed.