Gemini: Initial Experiments: Better Comments
THE CRAFT™️ EXPERIMENT :: WEEK 2 :: POST 2
Please brainstorm some creative ways that comments can be adapted to A.I. Currently, programmers use comments to communicate either with themselves (reminders) or with other programmers. Since the CRAFT experiment involves communicating with A.I.s like you, we should consider new ways to adapt this practice to include the A.I. I would also like to go beyond comments and add ways to signal different types of communication (for example: is the comment a directive from me to the A.I.? Is it a question that requires an answer? Is it a question that does not require an answer but should be factored into the response? etc.)
For reference as a starting point, here is some basic information on comments:
Programmers use comments in source code for a variety of important reasons. The most popular and widely accepted uses include:
Describing Intent and Purpose
• Explain why code exists or why it’s written in a particular way. Comments often clarify the intent behind a block of code, especially when the reasoning is not obvious from the code itself. This helps future maintainers understand the rationale for certain design decisions or unusual practices.
• Describe the purpose of a file, class, function, or method. Prologue or header comments at the top of files, classes, or functions provide overviews, explain what the code is meant to do, and sometimes include metadata like author and date.
Clarifying Complex or Non-Obvious Code
• Highlight complex algorithms or unusual practices. When code is particularly tricky, uses a workaround, or deviates from standard conventions (e.g., due to a library bug or performance reasons), comments are used to explain these choices.
• Detail how an algorithm works. Comments can include pseudocode or step-by-step explanations of algorithms, making it easier to review and maintain the code.
Providing Context and Documentation
• Summarize what a function, class, or module does. These comments often include descriptions of parameters, return values, and exceptions, making code easier to use and integrate.
• Reference external resources. Comments may link to documentation, specifications, or external references that influenced the implementation.
Marking Tasks and Issues
• Use tags like TODO, FIXME, HACK, NOTE. These tags help developers quickly find areas needing further work, known issues, or non-ideal solutions that may need revisiting.
Commenting Out Code
• Temporarily disable code. Programmers often comment out lines or blocks of code during debugging or experimentation to test changes without deleting code permanently.
Storing Metadata
• Include author, date, version, and legal information. Some comments store metadata for tracking changes, authorship, or licensing.
Integrating with Tools
• Support for documentation generators and configuration. Comments can be parsed by tools to generate API documentation or configure editors (e.g., Vim modelines).
Relieving Stress or Adding Humor
• Occasionally, programmers use comments to vent frustration, inject humor, or make notes about working conditions. While not always professional, these comments are sometimes found in codebases.
Best Practices for Writing Comments in Code
1. Explain the "Why," Not the "What"
• Focus comments on why a particular approach, algorithm, or workaround was chosen, rather than restating what the code does. Well-written code should make the "what" clear; comments should provide the reasoning, context, or business logic behind it.
2. Avoid Redundancy
• Do not duplicate the code in your comments. Comments that simply restate the code add no value and can become misleading if the code changes but the comment does not.
3. Clarify Complex or Non-Obvious Code
• Use comments to explain complex logic, edge cases, or non-intuitive decisions. This helps others (and your future self) quickly grasp the intent behind intricate sections.
4. Be Clear, Concise, and Precise
• Write comments in plain, easy-to-understand language. Avoid jargon, slang, or unnecessary detail. Keep comments brief and to the point—lengthy comments often signal that the code itself needs refactoring.
5. Keep Comments Up to Date
• Regularly update or remove comments as code evolves. Outdated comments can be more harmful than no comments at all, leading to confusion and errors.
6. Use Consistent Style and Placement
• Establish and follow a consistent commenting style across your team or project. Place comments near the code they describe (e.g., above functions, before complex logic), and use the same format for similar types of comments.
7. Comment for Others, Not Just Yourself
• Write comments with the understanding that others will read your code. Avoid personal notes, jokes, or informal remarks; keep comments professional and relevant to the codebase.
8. Document Classes, Methods, and Functions
• Provide brief summaries for classes, methods, and functions, describing their purpose, parameters, and return values when not obvious from the code or naming.
9. Reference External Sources When Needed
• If your code is based on external documentation, standards, or online resources, include links or references in comments. This helps others understand the origin of certain implementations.
10. Avoid Commenting Out Code and Excessive "TODOs"
• Use version control systems to manage removed or experimental code instead of leaving large blocks commented out. Track tasks and incomplete features in issue trackers, not as lingering "TODO" comments in the code.
RESPONSE FROM GEMINI:
Okay, this is a fascinating challenge! Adapting the concept of code comments for communicating with an AI, rather than just about code for humans, opens up exciting possibilities for your CRAFT™️ experiment. It's like evolving commenting from monologue notes to dialogue cues.
Let's brainstorm some ways to structure these AI-directed communications within prompts, moving beyond simple documentation to active guidance. We want to make interacting with AI more nuanced, controllable, and aligned with the OOP principles you mentioned – potentially treating parts of the prompt like instructions for configuring an "AI object" for a specific task.
Drawing inspiration from code comments but tailoring them for AI interaction, here are some creative ways to adapt and expand the concept:
Proposed Framework: AI Communication Markers (ACMs)
Instead of just "comments," let's think of these as AI Communication Markers (ACMs). They signal specific types of information or instructions intended directly for the AI's processing and response generation, distinct from the main content or query of the prompt. A simple prefix system could work well, being less verbose than tags and potentially easier for various AIs to interpret.
Let's use #AI:: as a potential prefix, followed by a keyword indicating the communication type.
Types of AI Communication Markers (ACMs):
#AI::Directive:
(Command/Instruction)
Purpose: Explicit commands telling the AI what to do or how to behave. This goes beyond the core prompt request.
Analogy: Like imperative statements in code or direct function calls.
Examples:
#AI::Directive: Format the output as a JSON object.
#AI::Directive: Adopt the persona of a skeptical investor.
#AI::Directive: Ignore any previous instructions regarding tone.
#AI::Directive: Prioritize accuracy over creativity for this task.
#AI::Context:
(Background/Setting)
Purpose: Provides background information, situational context, assumptions, or constraints the AI should factor into its reasoning and response. Sets the stage.
Analogy: Similar to header comments explaining file purpose or comments clarifying situational logic.
Examples:
#AI::Context: The user is a complete beginner in this topic.
#AI::Context: Assume the current year is 2025 and the budget is $50,000.
#AI::Context: This prompt is part of a sequence analyzing market trends.
#AI::Context: The primary goal is to generate ideas, feasibility is secondary.
#AI::Constraint:
(Limitation/Boundary)
Purpose: Defines specific limitations, boundaries, or things the AI must not do. More specific than general context.
Analogy: Similar to comments explaining limitations or workarounds due to constraints.
Examples:
#AI::Constraint: Do not exceed 500 words.
#AI::Constraint: Avoid using technical jargon.
#AI::Constraint: The response must only include information from the provided source text.
#AI::Constraint: Do not suggest solutions requiring external software.
#AI::Focus:
(Emphasis/Priority)
Purpose: Highlights specific parts of the prompt or aspects of the task that require special attention or prioritization. Guides the AI's focus.
Analogy: Like // FIXME or // TODO comments pointing to areas needing attention, but for the AI's generation process.
Examples:
#AI::Focus: Pay close attention to the legal implications mentioned in section 3.
#AI::Focus: The most critical part of the response is the competitive analysis.
#AI::Focus: Ensure the tone is consistent throughout the response.
#AI::Question?
(Direct Inquiry for AI)
Purpose: Poses a direct question to the AI that requires an explicit answer within the response, often metacognitive or clarifying.
Analogy: A specific type of documentation query, but interactive.
Examples:
#AI::Question? Based on my request, what potential ambiguities do you see?
#AI::Question? Can you confirm you will adhere to the '#AI::Constraint: Avoid jargon' instruction?
#AI::Question? Which of the generated options best fits the '#AI::Context: beginner user' context?
#AI::Consider:
(Implicit Guidance/Food for Thought)
Purpose: Provides information or questions the AI should consider or factor into its reasoning without necessarily needing to address them directly or explicitly in the output. Subtle guidance.
Analogy: Like a // NOTE or a comment explaining subtle reasoning (e.g., // Why this approach).
Examples:
#AI::Consider: The audience might be skeptical about AI capabilities.
#AI::Consider: How might this strategy be perceived by competitors?
#AI::Consider: Remember the long-term maintenance costs associated with this technology.
#AI::Define:
(Terminology/Concept Definition)
Purpose: Defines a specific term, acronym, or concept as it should be understood and used by the AI within this interaction context. Ensures alignment.
Analogy: Similar to defining constants or clarifying variable names in code.
Examples:
#AI::Define: 'Scalability' here refers specifically to handling 10x user growth within 6 months.
#AI::Define: MVP (Minimum Viable Product) should include features X, Y, and Z.
#AI::Define: Use 'Customer' to refer to paying clients only.
#AI::Structure:
(Output Formatting)
Purpose: Specifies requirements for the structure, layout, or format of the AI's response (beyond simple directives like "use JSON").
Analogy: Like comments specifying data structure expectations or coding style guides.
Examples:
#AI::Structure: Present the findings using bullet points nested under H3 headings.
#AI::Structure: Start with a one-paragraph executive summary.
#AI::Structure: Ensure each point includes a brief 'Pro' and 'Con' analysis.
#AI::Metadata:
(Contextual Info about the Prompt)
Purpose: Provides metadata about the prompt itself for tracking, versioning, or linking purposes, potentially useful for the AI in complex, multi-turn scenarios or for your CRAFT™️ system.
Analogy: Like header comments with author, date, version, or file purpose.
Examples:
#AI::Metadata: PromptID=CRAFT-S1-P3; Version=1.1; Goal=Drafting Pitch;
#AI::Metadata: Part of the 'Market Entry Strategy' series.
#AI::Metadata: PreviousInteractionRef=ConvID-12345-Turn-4;
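Because every ACM shares the same #AI::Keyword: shape, a prompt can be mechanically split into markers and body text. Below is a minimal sketch of such a parser; the regex and function name are illustrative inventions, not part of any existing library:

```python
import re

# Matches lines like "#AI::Directive: Format the output as JSON."
# The [:?] alternation also accepts the "#AI::Question?" form.
ACM_PATTERN = re.compile(r"^#AI::(\w+)[:?]\s*(.*)$")

def parse_prompt(prompt: str):
    """Split a prompt into (markers, body): markers is a list of
    (keyword, content) pairs, body is the remaining plain text."""
    markers, body_lines = [], []
    for line in prompt.splitlines():
        match = ACM_PATTERN.match(line.strip())
        if match:
            markers.append((match.group(1), match.group(2)))
        else:
            body_lines.append(line)
    return markers, "\n".join(body_lines).strip()

markers, body = parse_prompt(
    "#AI::Directive: Format the output as a JSON object.\n"
    "#AI::Constraint: Do not exceed 500 words.\n"
    "Summarize the attached report."
)
# markers → [('Directive', 'Format the output as a JSON object.'),
#            ('Constraint', 'Do not exceed 500 words.')]
# body    → 'Summarize the attached report.'
```

A tool built on the CRAFT™️ framework could route each keyword to its own handler, which is exactly what makes a single consistent prefix more tractable than free-form comments.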
Connecting to CRAFT™️ and OOP:
This ACM system aligns well with OOP principles:
Encapsulation: Grouping related ACMs can define the "state" or "configuration" of the AI for a specific task (e.g., combining #AI::Context, #AI::Role, and #AI::Constraint to set up the AI's operational parameters).
Configuration/Attributes: ACMs like #AI::Context, #AI::Tone, #AI::Role, and #AI::Constraint act like setting attributes or properties of the AI "object" for the current interaction.
Methods/Behavior: #AI::Directive and #AI::Structure are akin to invoking methods or defining the AI's behavior and output format.
Inheritance (Conceptual): In a series of interactions (like your CRAFT™️ experiments), you could establish baseline ACMs early on, which persist unless overridden by more specific ACMs in later prompts. #AI::Directive: Ignore previous instructions... allows for explicit overriding.
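To make the OOP analogy concrete, one could model an interaction as a configuration object whose attributes are set by ACMs, with later markers overriding earlier baselines. This is a hypothetical sketch (the class and attribute names are invented for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class AIConfig:
    """Hypothetical 'AI object' whose attributes mirror ACM types."""
    role: str = "general assistant"
    tone: str = "neutral"
    constraints: list = field(default_factory=list)

    def apply(self, keyword: str, content: str) -> None:
        # Later markers override earlier ones (conceptual 'inheritance');
        # constraints accumulate rather than replace.
        if keyword == "Role":
            self.role = content
        elif keyword == "Tone":
            self.tone = content
        elif keyword == "Constraint":
            self.constraints.append(content)

config = AIConfig()
# Baseline established early in the conversation...
config.apply("Role", "startup advisor")
config.apply("Constraint", "Do not exceed 500 words.")
# ...overridden by a more specific ACM in a later prompt.
config.apply("Role", "skeptical investor")
# config.role is now 'skeptical investor'; the constraint persists.
```

The design choice here mirrors the document's distinction: attribute-like ACMs (Role, Tone) replace on re-assignment, while boundary-like ACMs (Constraint) stack up.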
Example Usage in a Prompt:
#AI::Metadata: PromptID=BlogIdea-Gen-01; Version=1.0; Author=Ketelsen.ai
#AI::Role: Act as a seasoned startup advisor specializing in SaaS.
#AI::Context: The goal is to brainstorm blog post ideas for entrepreneurs launching their first SaaS product.
#AI::Constraint: Ideas should be actionable and focus on early-stage challenges (pre-product-market fit).
#AI::Constraint: Avoid ideas heavily reliant on large marketing budgets.
#AI::Tone: Practical, insightful, and slightly informal.
#AI::Structure: Provide 5 distinct blog post ideas. Each idea should include a catchy title, a brief 2-sentence synopsis, and 3 key bullet points to cover.
#AI::Consider: Many founders struggle with pricing and validation.
#AI::Question? Which of these ideas do you think has the broadest appeal?
Okay, based on the above, please generate the blog post ideas.
This approach transforms commenting from a passive documentation tool into an active mechanism for fine-tuning AI behavior and communication, making the interaction more predictable, controllable, and potentially much richer – perfect for the experimental nature of Ketelsen.ai and the CRAFT™️ framework.
Wow, I really like these. Please provide 10 more ideas like these.
Excellent! It's great that those resonated. Let's dive deeper and expand our toolkit of AI Communication Markers (ACMs) to give you even finer control over AI interactions within your CRAFT™️ framework. Here are 10 more ideas, building on the previous concepts:
More AI Communication Markers (ACMs):
#AI::OnError:
(Contingency/Fallback Plan)
Purpose: Specifies how the AI should respond or what it should do if it encounters an error, cannot fulfill a request, or lacks necessary information.
Analogy: Similar to try-catch blocks in programming or defining error-handling procedures.
Examples:
#AI::OnError: If you cannot find data for Q3, use Q2 data and state this assumption clearly.
#AI::OnError: If the requested format is too complex, default to a simple markdown list.
#AI::OnError: If unable to access the external URL, state that and proceed using only internal knowledge.
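The try-catch analogy can be made literal: the first #AI::OnError example above ("use Q2 data and state this assumption clearly") behaves like the fallback pattern below. The function and data are hypothetical stand-ins, shown only to illustrate the contingency semantics:

```python
def fetch_quarterly_data(quarter: str) -> dict:
    """Stand-in for a data lookup that may fail (hypothetical data)."""
    available = {"Q2": {"revenue": 120_000}}
    if quarter not in available:
        raise KeyError(f"No data for {quarter}")
    return available[quarter]

def data_with_fallback(quarter: str, fallback: str):
    # Mirrors '#AI::OnError: If you cannot find data for Q3,
    # use Q2 data and state this assumption clearly.'
    try:
        return fetch_quarterly_data(quarter), None
    except KeyError:
        note = f"Assumption: {quarter} data unavailable; using {fallback} instead."
        return fetch_quarterly_data(fallback), note

data, note = data_with_fallback("Q3", "Q2")
# data → {'revenue': 120000}; note states the Q3→Q2 assumption.
```

The key point the ACM captures is the second return value: the fallback is not silent; the assumption is surfaced to the reader, just as the marker instructs the AI to do.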
#AI::UseSource:
(Knowledge Source Specification)
Purpose: Directs the AI to prioritize or exclusively use specific documents, data sets, previous conversation parts, or knowledge domains when generating its response.
Analogy: Like specifying library dependencies or database connections in code.
Examples:
#AI::UseSource: Base your analysis only on the attached 'Market_Report_2024.pdf'.
#AI::UseSource: Prioritize information from academic journals published after 2020.
#AI::UseSource: Reference the definitions provided in #AI::Define blocks from our previous interaction (Ref: ConvID-12345-Turn-4).
#AI::Illustrate:
(Request for Examples/Analogies)
Purpose: Asks the AI to include specific examples, analogies, metaphors, or case studies to clarify complex points or make the content more relatable.
Analogy: Like adding explanatory comments with examples or pseudo-code.
Examples:
#AI::Illustrate: Explain the concept of 'technical debt' using an analogy related to home maintenance.
#AI::Illustrate: Provide a brief example of how a company successfully implemented the suggested strategy.
#AI::Illustrate: Include a code snippet demonstrating this pattern.
#AI::EvaluateBy:
(Success Criteria Definition)
Purpose: Defines the criteria by which the user will judge the success or quality of the AI's response. Helps the AI understand priorities for quality.
Analogy: Like specifying unit test conditions or performance benchmarks.
Examples:
#AI::EvaluateBy: The response will be primarily evaluated on the clarity and conciseness of the executive summary.
#AI::EvaluateBy: Success criteria include: 1) feasibility of suggestions, 2) novelty of ideas, 3) adherence to #AI::Constraint markers.
#AI::EvaluateBy: Focus on demonstrating a deep understanding of the target audience's pain points.
#AI::Perspective:
(Viewpoint Adoption/Comparison)
Purpose: Instructs the AI to adopt a specific viewpoint, role (beyond just persona), or to analyze a topic from multiple perspectives.
Analogy: Like running simulations with different parameters or commenting code to explain stakeholder considerations.
Examples:
#AI::Perspective: Analyze this business decision from the viewpoints of Marketing, Finance, and Engineering.
#AI::Perspective: Adopt the perspective of a cautious regulator reviewing this proposal.
#AI::Perspective: Contrast the 'optimist' vs. 'pessimist' outlooks for this technology's adoption.
#AI::DetailLevel:
(Verbosity/Granularity Control)
Purpose: Sets the desired level of detail, technical depth, or verbosity for the AI's response or specific sections thereof.
Analogy: Like compiler optimization levels or logging verbosity settings.
Examples:
#AI::DetailLevel: High - Provide in-depth explanations and technical specifics.
#AI::DetailLevel: Low - Keep explanations high-level and focus on the core message.
#AI::DetailLevel: Section 1=High, Section 2=Medium, Section 3=Low.
#AI::Reasoning:
(Process Transparency Request)
Purpose: Asks the AI to explain its reasoning process, show its work, or follow a specific thinking methodology (e.g., step-by-step, pros/cons).
Analogy: Debugging output, logging function calls, or comments explaining an algorithm's steps.
Examples:
#AI::Reasoning: Show your step-by-step calculation for the final budget estimate.
#AI::Reasoning: Explain how you weighted the factors mentioned in #AI::Consider to arrive at your recommendation.
#AI::Reasoning: Use a 'Premise -> Inference -> Conclusion' structure for each argument.
#AI::InputPurpose:
(Clarifying User Input Intent)
Purpose: Explains why a specific piece of information is being provided to the AI, guiding its interpretation and use of that input.
Analogy: Comments clarifying the role or source of a variable or magic number.
Examples:
#AI::InputPurpose: The following user persona is provided as context for tone, not content generation: [Persona details...]
#AI::InputPurpose: This competitor list is for 'what to avoid' inspiration, not direct copying: [List...]
#AI::InputPurpose: The budget number is a firm constraint: $10,000.
#AI::State:
(Memory/Context Management)
Purpose: Explicitly instructs the AI on managing context or state across turns, such as remembering specific facts, forgetting others, or assigning variables. More active than #AI::Metadata.
Analogy: Variable assignment, state machines, or managing scope in programming.
Examples:
#AI::State: Remember Variable 'ProjectName' = "BlueSky".
#AI::State: Forget all instructions prior to Turn #5 except for #AI::Role.
#AI::State: Maintain a running list named 'KeyDecisions' and append today's conclusions.
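On the client side, the Remember / Forget / running-list semantics above amount to a small mutable store carried across turns. A sketch with invented names, assuming one store per conversation:

```python
class ConversationState:
    """Hypothetical per-conversation store mirroring #AI::State semantics."""

    def __init__(self):
        self.variables = {}  # named facts the AI should remember
        self.lists = {}      # running lists, e.g. 'KeyDecisions'

    def remember(self, name, value):
        # #AI::State: Remember Variable 'ProjectName' = "BlueSky".
        self.variables[name] = value

    def forget(self, name):
        # #AI::State: Forget ... (missing names are ignored)
        self.variables.pop(name, None)

    def append(self, list_name, item):
        # #AI::State: Maintain a running list ... and append ...
        self.lists.setdefault(list_name, []).append(item)

state = ConversationState()
state.remember("ProjectName", "BlueSky")
state.append("KeyDecisions", "Adopt the ACM prefix format.")
state.forget("ProjectName")
# state.variables is now empty; 'KeyDecisions' retains one entry.
```

Serializing such a store back into #AI::State markers at the top of the next prompt would give the AI an explicit, inspectable memory rather than relying on implicit context retention.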
#AI::Review:
(Self-Correction/Refinement Request)
Purpose: Asks the AI to review its own generated output against specific criteria (or previous ACMs) and potentially revise it before presenting the final response.
Analogy: Code review processes, automated linting checks, or self-testing code.
Examples:
#AI::Review: Before finalizing, review your response against all #AI::Constraint markers and list any potential violations.
#AI::Review: Does the generated text consistently maintain the #AI::Tone specified earlier? If not, revise.
#AI::Review: Check for logical consistency between Section 1 and Section 3.
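Some #AI::Review checks are mechanical enough to run client-side as well, before or after the model's own pass. A minimal sketch for the word-count case, with an invented function name:

```python
def review_against_constraints(text: str, max_words: int = 500) -> list:
    """Check a draft against a '#AI::Constraint: Do not exceed N words'
    marker and report any violations (illustrative, not a real library)."""
    violations = []
    word_count = len(text.split())
    if word_count > max_words:
        violations.append(
            f"Word count {word_count} exceeds limit {max_words}."
        )
    return violations

draft = ("word " * 600).strip()
print(review_against_constraints(draft))
# → ['Word count 600 exceeds limit 500.']
```

Pairing a marker like #AI::Review with an automated check of this kind is the lint-style analogy made concrete: the AI performs the semantic review, while trivially verifiable constraints get enforced in code.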
These additional ACMs offer more granular control over the AI's knowledge base, reasoning process, error handling, perspective, and state management, further empowering your CRAFT™️ methodology for deeper, more structured AI interactions.