[Feat]: Proper Feedback Mechanism #274

Open
@DevangML

Description

Describe the problem and solution that you'd like.

I just tried out Command Dash and found one flaw: we have the ability to chat with it, but it does not rethink its approach the way, say, GPT-4 does. The first product is quite good thanks to fine-tuning, but the feedback mechanism is weak, which makes it difficult for us to teach the AI how to adapt to our codebases and write exactly the way we write.

Could you please add a dedicated button that attaches a prompt-engineered instruction to our prompt, so the AI improves its response and regenerates the entire test case with those changes applied, instead of returning the same test code again (the current behaviour)?

Alternatively, the AI could be framed as a wizard embarking on a journey to iteratively generate the test case for the problem through design and deliberation.
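The feedback-button idea above could be sketched roughly as follows. This is a minimal illustration, not Command Dash's actual API: the names `buildFeedbackPrompt` and `REGENERATE_INSTRUCTION` are hypothetical, and the instruction text is just one possible engineered prompt.

```typescript
// Hypothetical sketch: an engineered instruction attached to the user's
// feedback so the model must rethink instead of echoing its earlier output.
const REGENERATE_INSTRUCTION = [
  "Re-examine your previous answer against the feedback below.",
  "Do NOT repeat the same test code; change the approach wherever the feedback applies.",
  "Return the complete regenerated test case, not a diff.",
].join("\n");

function buildFeedbackPrompt(previousResponse: string, userFeedback: string): string {
  // Combine the instruction, the previous test case, and the user's feedback
  // into one prompt that the "feedback" button would send.
  return (
    `${REGENERATE_INSTRUCTION}\n\n` +
    `Previous test case:\n${previousResponse}\n\n` +
    `Feedback:\n${userFeedback}`
  );
}
```

The key design point is that the button never re-sends the original prompt alone; it always bundles the prior output with the critique, which is what forces a genuinely regenerated answer.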

Describe alternatives that you have considered.

No response

Additional Information

No response

Acceptance Criteria

  1. A feedback button for the generated test or refactoring that successfully pushes the AI to rethink its approach.
  2. Long-term memory that remembers details about the specific test-case setup and environments we give it, exportable as a file like a memory chip. (Using it, we should be able to bring the AI we trained back on any machine.)
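The "memory chip" in criterion 2 could be as simple as a serialized file holding what the assistant has learned about a project. A minimal sketch, assuming a JSON file format; the `MemoryChip` shape and its fields are illustrative, not an existing Command Dash structure:

```typescript
import * as fs from "fs";

// Hypothetical portable memory format: everything the assistant was taught
// about this project's test setup, as a plain JSON file.
interface MemoryChip {
  version: number;
  testFramework: string;      // e.g. "jest" -- illustrative field
  environmentNotes: string[]; // facts the user taught the assistant
  styleExamples: string[];    // snippets showing how "we write"
}

// Write the learned memory to disk so it can travel to another machine.
function exportMemory(chip: MemoryChip, path: string): void {
  fs.writeFileSync(path, JSON.stringify(chip, null, 2), "utf8");
}

// Load a previously exported memory file on any machine.
function importMemory(path: string): MemoryChip {
  return JSON.parse(fs.readFileSync(path, "utf8")) as MemoryChip;
}
```

A plain-file round trip like this is what makes the memory portable: export on one machine, commit or copy the file, and import it anywhere the extension runs.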

Metadata

Assignees

No one assigned

    Labels

    enhancement (New feature or request)

    Type

    No type

    Projects

    No projects

    Milestone

    No milestone

    Relationships

    None yet

    Development

    No branches or pull requests