Prompts are Engineering Artifacts
As AI-assisted coding becomes more widely adopted, we software engineers write less code and more prompts. Some prompts are persisted in the codebase or development environment, like CLAUDE.md or the SKILL.md files in agent skills, but most are what we type into the agent coding tool as we work through different tasks.
In software engineering, code is the medium through which we instruct computers to compute what we want. Beyond code, other artifacts like product requirement documents, mocks, design docs, and issue trackers also hold important knowledge about the software. In the age of agentic coding, prompts should be treated as an important artifact too.
Firstly, prompting is a new way of instructing computers to compute what we want. Although it adds a layer of indirection, where code -> software becomes prompt -> code -> software, it captures a lot of information about how an engineer approaches a problem: brainstorming and exploration, design, design review, task breakdown, prioritization, testing, bug fixing, and so on.
Secondly, together with a summary of the response to each prompt, they constitute important knowledge about the software. They can be noisy, but they still add value when we want to revisit why certain code was written a certain way, or how certain decisions were made. In that sense they are similar to ticket comments: when we pick up a feature or debug an issue, we keep updating the ticket with progress, thoughts, and partial investigations. Those are noisy too, but very helpful for tracking the work.
Some Experiments
Today most, if not all, AI coding tools save session history. However, these histories are mainly used to resume sessions; they are not easy to consume as knowledge documents. I ran some experiments to see how I could better leverage them.
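For example, Claude Code keeps each session as a JSONL file of raw events; a quick peek (the paths below are what recent versions use, and may change) shows why the transcripts are awkward to read as documents:

```bash
# Claude Code stores one JSONL transcript per session under ~/.claude/projects/
# (layout as of recent versions; may differ on your setup).
ls ~/.claude/projects/
# Each line is a raw JSON event (user message, tool call, tool result, ...),
# great for resuming a session but noisy as a knowledge document.
head -n 2 ~/.claude/projects/*/*.jsonl
```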
Experiment 1: Prompts and Response Summary Lives in the Codebase
The first experiment was to treat the history as part of the codebase. I started by exporting sessions as markdown files and checking them in.
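A minimal sketch of that flow, with a hypothetical `export-session` helper standing in for whatever export path your tool provides:

```bash
# "export-session" is a made-up placeholder for your tool's export command.
mkdir -p docs/sessions
export-session --format markdown > "docs/sessions/$(date +%Y-%m-%d)-session.md"
git add docs/sessions/
git commit -m "docs: record agent session for $(date +%Y-%m-%d)"
```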
While it gave a clearer sense of how I interacted with different agents day to day and what tasks I worked on, it polluted the git history. Many of my prompts are exploratory in nature, and they carry too much noise to live in the codebase.
Experiment 2: Prompts and Response Summary Lives in Commit Message
I then tried asking agents to include the prompts and response summaries in the commit message. However, some of my explorations don't result in any code changes at all, so I still had to dump those to files.
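The mechanics are simple, since `git commit` accepts multiple `-m` flags, each becoming its own paragraph of the message; the content below is illustrative:

```bash
# The prompt history rides along as extra -m paragraphs in the commit body.
git commit \
  -m "fix: retry uploads on transient 5xx errors" \
  -m "prompt: make the uploader retry on transient server errors" \
  -m "response summary: added exponential backoff (3 attempts) in uploader.py"
```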
Experiment 3: Prompts and Response Summary Lives in Issue Tracker
I then switched to recording the prompts and response summaries in the issue tracker. I use GitHub to track my tasks. Prompts, response summaries, and code are all attached to the GitHub issue, and it's fairly easy to see my thinking process and how AI helped me navigate the different phases of completing a task.
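The manual version is just a `gh issue comment` call with a hand-pasted body (the issue number and repo below are placeholders):

```bash
# Placeholders: issue 42 in you/your-repo.
gh issue comment 42 --repo you/your-repo --body "**prompt:** fix the flaky sampler test
**response summary:** pinned the random seed in test_sampler.py and removed the sleep"
```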
At first I did this manually, copying and pasting things back and forth. After doing it for a while, I decided it's a valuable habit worth keeping and set out to automate it.
Automation
GitHub's CLI supports `gh issue comment` for adding a comment to an issue, so we can wrap it in a Claude Code agent skill and ask the agent to post a new comment whenever we want.
The agent's SKILL.md file looks like this:
---
name: add-prompt-history-to-github
description: Adds a summary of the current Claude Code session to a GitHub issue. Use when the user wants to document session history, save prompts to an issue, or track conversation summaries in GitHub.
allowed-tools: Bash(gh issue comment:*)
---
When this skill is invoked with an issue number:
1. **Review the conversation history** from this session and extract:
- Each user prompt/input, including any commands they entered
- A brief summary of Claude's response for each prompt
2. **Format the history** as follows:
```
## Session Summary
**prompt:** [user's first prompt]
**response summary:** [brief summary of what was done/answered]
---
**prompt:** [user's second prompt]
**response summary:** [brief summary of what was done/answered]
---
[continue for all prompts in the session]
```
3. **Update the GitHub issue** using the `gh` CLI:
```bash
gh issue comment <issue_number> --repo [replace with your repo] --body "<formatted_history>"
```
## Example
```
## Session Summary
**prompt:** Fix the login button not responding on the home screen
**response summary:** Identified missing onPressed handler in login_button.dart and added the navigation logic to AuthScreen
---
**prompt:** Add unit tests for the login functionality
**response summary:** Created login_button_test.dart with 3 test cases covering tap behavior, loading state, and error handling
```
## Requirements
- Requires `gh` CLI to be authenticated with appropriate permissions
- Summaries should be concise but capture the key actions/outcomes

Agent skills are widely supported by other coding agents, like Codex Agent Skills and GitHub Copilot Agent Skills.
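To install the skill for Claude Code, the file goes under the project's `.claude/skills/` directory; the invocation phrasing below is just an example of a request that should match the skill's description:

```bash
# Project-level Claude Code skills live at .claude/skills/<skill-name>/SKILL.md.
mkdir -p .claude/skills/add-prompt-history-to-github
$EDITOR .claude/skills/add-prompt-history-to-github/SKILL.md
# Then, in a session, a request like
#   "save this session's prompt history to issue 42"
# should trigger the skill.
```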
Learnings from the Prompt History
One benefit of keeping track of the prompt history is that it lets you reflect on how you approached problems when interacting with the coding agent, and keep refining your agentic coding workflow by automating the recurring patterns.
One thing I learned by looking at the prompt history for non-trivial tasks is how to better ask the agent to write design docs. I often write a design doc first, then clear the context and ask the agent to review it. It often finds good things to improve. Since I kept repeating this pattern, I made it part of a review skill.
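As a rough sketch, the two-pass flow with the `claude` CLI's one-shot `-p` mode looks like this (the doc path and prompt wording are made up):

```bash
# Pass 1: draft the design doc in one invocation.
claude -p "Draft a design doc for the rate limiter and write it to docs/rate-limiter.md"
# Pass 2: a fresh invocation starts with a clean context,
# so the review isn't anchored to the drafting session.
claude -p "Review docs/rate-limiter.md as a skeptical reviewer; list concrete gaps"
```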
Summary
As agentic coding gets better, we're moving from Code-Centric Development to Intent-Centric Development. While code remains the central software engineering artifact, prompts and agent responses are important trails of software development. It's worth thinking about how to keep them as part of the knowledge base for our software.
Update on Jan 8th, 2026:
claude-code-transcripts is a good tool for extracting detailed transcripts from Claude Code and viewing them in a nice format.

