Shell-AI (`shai`) is a CLI utility that brings the power of natural language understanding to your command line. Simply describe what you want to do in natural language, and `shai` will suggest single-line commands that achieve your intent. Under the hood, Shell-AI leverages LangChain for LLM access and builds on the excellent InquirerPy for the interactive CLI.
You can install Shell-AI directly from PyPI using pip:

```shell
pip install shell-ai
```

Note that on Linux, Python 3.10 or later is required.
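Since Linux requires Python 3.10 or later, a quick check of the interpreter version before installing can save a failed install. A minimal sketch (not part of Shell-AI itself):

```shell
# Check that python3 meets Shell-AI's minimum version before installing.
if python3 -c 'import sys; sys.exit(0 if sys.version_info >= (3, 10) else 1)'; then
    echo "Python version OK"
else
    echo "Python 3.10+ required for shell-ai on Linux"
fi
```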
After installation, you can invoke the utility using the `shai` command.
To use Shell-AI, open your terminal and type:

```shell
shai run terraform dry run thingy
```

Shell-AI will then suggest 3 commands to fulfill your request:

```shell
terraform plan
terraform plan -input=false
terraform plan
```
- Natural Language Input: Describe what you want to do in plain English (or other supported languages).
- Command Suggestions: Get single-line command suggestions that accomplish what you asked for.
- Cross-Platform: Works on Linux, macOS, and Windows.
- Azure Compatibility: Shell-AI now supports Azure OpenAI deployments.
- `OPENAI_API_KEY`: Required. Set this environment variable to your OpenAI API key. You can find it on your OpenAI Dashboard.
- `OPENAI_MODEL`: Defaults to `gpt-3.5-turbo`. You can set it to another OpenAI model if desired.
- `OPENAI_MAX_TOKENS`: Defaults to `None`. You can set the maximum number of tokens that can be generated in the chat completion.
- `SHAI_SUGGESTION_COUNT`: Defaults to 3. You can set it to specify the number of suggestions to generate.
- `OPENAI_API_BASE`: Defaults to `https://api.openai.com/v1`. You can set it to specify a proxy or service emulator.
- `OPENAI_ORGANIZATION`: OpenAI organization ID.
- `OPENAI_PROXY`: OpenAI proxy.
- `OPENAI_API_TYPE`: Set to "azure" if you are using Azure deployments.
- `AZURE_DEPLOYMENT_NAME`: Your Azure deployment name (required if using Azure).
- `AZURE_API_BASE`: Your Azure API base (required if using Azure).
- `CTX`: Allows the assistant to keep console outputs as context, letting the LLM produce more precise suggestions. IMPORTANT: the outputs will be sent to OpenAI through their API, so be careful if they contain sensitive data. Defaults to false.
You can also enable context mode on the command line with the `--ctx` flag:

```shell
shai --ctx [request]
```
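For a quick start, the environment variables above can be exported in your shell session or profile. A minimal sketch (the key value is a placeholder):

```shell
# Placeholder values; replace the key with your real OpenAI API key.
export OPENAI_API_KEY="your_openai_api_key_here"
export OPENAI_MODEL="gpt-3.5-turbo"
export SHAI_SUGGESTION_COUNT=3
```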
Alternatively, you can store these variables in a JSON configuration file:

- For Linux/macOS: Create a file called `config.json` under `~/.config/shell-ai/` and secure it with `chmod 600 ~/.config/shell-ai/config.json`.
- For Windows: Create a file called `config.json` under `%APPDATA%\shell-ai\`.
Example `config.json`:

```json
{
  "OPENAI_API_KEY": "your_openai_api_key_here",
  "OPENAI_MODEL": "gpt-3.5-turbo",
  "SHAI_SUGGESTION_COUNT": "3",
  "CTX": true
}
```
The application will read from this file if it exists, overriding any existing environment variables.
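On Linux/macOS, creating the config directory, writing the config file, and restricting its permissions can be done in one step. A sketch using placeholder values:

```shell
# Create the config directory, write a minimal config, and lock it down.
mkdir -p ~/.config/shell-ai
cat > ~/.config/shell-ai/config.json <<'EOF'
{
  "OPENAI_API_KEY": "your_openai_api_key_here",
  "OPENAI_MODEL": "gpt-3.5-turbo"
}
EOF
chmod 600 ~/.config/shell-ai/config.json
```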
Run the application after setting these configurations.
This implementation can be made much smarter! Contribute your ideas as Pull Requests and make Shell-AI better for everyone.
Contributions are welcome! Please read the CONTRIBUTING.md for guidelines.
Shell-AI is licensed under the MIT License. See LICENSE for details.