Ask HN: Any tool for managing large and variable command lines?

49 points by bjackman 11 days ago

I often find myself building up big complex command lines for testing my code, which live in my shell history.

Sometimes it's possible to just wrap these up in scripts, but often my invocations of such a script get so variable that I need many flags etc., and then the command line gets really long again and I'm back at square one.

A little hard to describe in the abstract, so here's a current example:

- I wanna compile a kernel and boot it up in QEMU, then SSH into it.

- But sometimes, I wanna skip the build (it takes a few seconds to detect that the tree is unchanged)

- Sometimes, I wanna boot with a different special disk file.

- Sometimes, I wanna change the networking setup. Sometimes this requires me to change the part of the commandline that SSHes into the VM, to match the changes I made to the QEMU commandline.

- etc etc

So I just have a huge command line in my shell history and I bring it up and manually edit it each time.

But I think it's possible to make this process more convenient. An inspiration for this is Magit, an Emacs plugin that allows (among many other things) building up and stringing together lots of complex git commandlines with varying flags, via a few keypresses in a discoverable text UI.

I wonder if something like this already exists? If not I think it could be a fun design challenge. By looking at the shell history and using heuristics about common CLI design patterns perhaps you could automatically create keyboard shortcuts.

philomath_mn 10 days ago

I started using just [0] on my projects and have been very happy so far. It is very similar to make but focused on commands rather than build outputs.

Define your recipes and then you can compose them as needed.
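
For the kernel/QEMU example in the post, a justfile might look roughly like this (recipe names and commands are made up for illustration):

  # justfile (sketch)
  build:
      make -j"$(nproc)"

  boot disk="test.qcow2":
      qemu-system-x86_64 -enable-kvm -drive file={{disk}},cache=writeback

  # `just run` builds then boots; `just boot` alone skips the build
  run: build boot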

[0] https://github.com/casey/just

  • jarpineh 9 days ago

    I second this. Though I might just (heh) be too biased, having just today added some common Just recipes as global shell aliases [1].

    I also use Just for build and deploy commands. Another thing I was thinking of was setting environment variables. Complex pipelines can be split into several dependent recipes. Or, using shell aliases, you can string them together yourself with regular pipes in a very compact form. Just recipes defer to the shell or a programming language of your choice. Just makes some shell stuff easier without replacing or hiding it.

    [1] https://just.systems/man/en/chapter_68.html
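
    For reference, the alias from that chapter is roughly this (I may be misremembering the exact flags):

      alias .j='just --justfile ~/.user.justfile --working-directory .'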

    Edit: I also added Atuin https://atuin.sh/ recently to better deal with shell command history. Though that one I’m still learning.

izoow 9 days ago

I rely on an unlimited bash history and fzf to fuzzy search through it. Commands that get too long and complicated might get thrown into a simple bash script, often quite literally "thrown" as a single line without any cleanup or extra logic.
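
The whole setup is only a few lines of .bashrc, roughly:

  # effectively unlimited history (bash >= 4.3 treats -1 as unlimited)
  HISTSIZE=-1
  HISTFILESIZE=-1
  shopt -s histappend

  # fzf's Ctrl-R fuzzy history search; the path depends on how fzf was installed
  [ -f ~/.fzf.bash ] && source ~/.fzf.bash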

I've tried making pretty scripts with many options, but as you said, you just end up with another tool with many complicated options, not to mention the time it takes. I've noticed that the quick-and-dirty scripts that I just copy, paste and tweak when I want different functionality get by far the most use and survive the longest.

I've also been meaning to have a look at invoke[1] which seemed interesting, but I haven't gotten to it yet.

[1] https://www.pyinvoke.org/

nequo 11 days ago

If you're using bash, you can edit long commands by hitting C-x C-e. This will open an editor with the command in the buffer. When you save and exit, bash will execute the edited command.

This is not in the style of `transient` in Emacs but it could work well, especially with your wrapper script that takes command line flags.

  • jmholla 9 days ago

    In addition, I think bash's `operate-and-get-next` can be very helpful. When you go back through your shell history, you can hit Ctrl+o instead of Enter; it will execute the command, put the next one from your history on the command line, and keep track of where you are in your history. This way, you can rerun a bunch of commands by going to the first one and hitting Ctrl+o until you are done. You can also edit those commands, hit Ctrl+o, and still go to the next previously run command.

    Note: fzf's history search feature breaks this. https://github.com/junegunn/fzf/issues/2399

    Edit: And up and down will still keep the relative position you are at in your history in case you want to skip a command.

  • jonS90 9 days ago

    I use this in combination with FZF's ctrl-r command history which allows me to fuzzy search command history.

lambdaba 9 days ago

I use vi-mode, and when I need to make more elaborate edits, I have this binding in zsh to edit the current line in $EDITOR:

  autoload -Uz edit-command-line
  zle -N edit-command-line
  bindkey -M vicmd v edit-command-line
For "normal" Emacs keybinds this is the standard:

  bindkey '^x^e' edit-command-line
As others have mentioned this goes well with fzf and its shell history plugin.

But, in truth, the shell interface could use a major overhaul to make it more like a notebook, there are early attempts at this (e.g. Warp), but it's still quite timid IMO.

I'll also say this is one of the perks of being a vi user, it's the CLI usage where it really shines, I can make surgical edits really quickly, thanks to all the jump and change keybinds that are so central to vi usage.

martinbaun 10 days ago

I have a `run` bash file where I just put `exit 0` on the third line. Then I can copy/paste anything I need from below the `exit 0` up above it.

Silly, yes, but it's super easy and extremely fast.
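
For anyone curious, the file looks something like this (the stashed commands are just examples):

  #!/bin/bash
  # paste whatever you want to run here, above the exit
  exit 0

  # everything below is a scratchpad of past commands
  make -j8 && ./run-tests.sh
  qemu-system-x86_64 -enable-kvm -kernel bzImage -nographic
  ssh -p 2222 root@localhost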

  • quesera 9 days ago

    > exit 0

    If you use `exit $?` instead, you will preserve the exit status from the previous command.

  • secwang 9 days ago

    It's great.

    • martinbaun 8 days ago

      Thanks. Simple and dirty, but works every time.

      Even more, I put a vim shortcut so I can run it without leaving the terminal.

invalidator 9 days ago

I create wrapper scripts. I try to keep them simple... they're convenience scripts, not a replacement for a build system.

Here's one for a QEMU dev env where I add and remove bits of command line in a freeform way. I just break out the command line into components, and uncomment the ones I want for a particular use:

  #!/bin/bash
  
  set -ex
  
  # Drive created with:
  # /usr/bin/qemu-img create -f qcow2  ngx.qcow2 40G
  
  #vga='-vga cirrus'
  vga='-vga std'
  #vga='-vga qxl'
  
  drive='-drive file=ngx.qcow2,cache=writeback'
  
  tablet='-usbdevice tablet'
  
  #vnc='-vnc :3'
  
  #cdrom="-cdrom base_install.iso"
  #cdrom='-cdrom documentation.iso'
  #cdrom="-cdrom /dev/sr0"
  cdrom=''
  
  #net='-net user -net nic'
  net='-net user,restrict=off,hostfwd=tcp::6666-:6666 -net nic,model=rtl8139,macaddr=00:11:82:C3:54:55'
  #net='-net user,restrict=on,hostfwd=tcp::6666-:6666,hostfwd=tcp::3389-:3389,hostfwd=udp::3389-:3389 -net nic,model=rtl8139,macaddr=00:11:82:C3:54:55'
  
  monitor='-monitor stdio'
  
  qemu-system-x86_64 -enable-kvm -nodefaults -m 768 $vga $drive $tablet $vnc $cdrom $net $monitor

If you have multiple stages to your build/run process, you can chain together a few scripts. build-and-run.sh builds it, and then calls run.sh which runs it. If you want to skip the build step, you use run.sh.
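
For instance, build-and-run.sh can be as dumb as this sketch (the make invocation stands in for whatever your build step is):

  #!/bin/bash
  set -ex
  make -j"$(nproc)"   # placeholder build step
  exec ./run.sh "$@"  # hand off to the run-only script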

jedberg 9 days ago

    history | sed 's/^ *[0-9]* *//' > commands.sh

    vi commands.sh

    (edit the file to only include your big nasty command lines, then comment them all out)

    chmod +x commands.sh

Now every time you want to run a command, you just uncomment that line. If it needs changes, you can copy the line, change it, and now you have another command. You can even add comments to help you remember what each command is for.

  • bomewish 7 days ago

    Wow this is kinda elegant. An improvement I can think of is putting it on your PATH so you can just invoke “commands” directly each time?

jauntywundrkind 9 days ago

I'm going to recreate something I did a while back, which was to create a vim command/script which would: copy the current selection or line to a specified term window, and hit enter. And then version two, do that, but then copy the contents of the line/selection into a new line.

Basically using vim as an input buffer. With the option to copy the current line: when working on systems there was often an incremental improvement that I was trying to make, so running and then creating a new line was valuable. I probably should just learn how to insert a line offset into the current line though.

What's still missing in both my vim-driven world and in shells is any ability to make references & changes. Ideally I'd like to be able to have lines that interpolate vim registers & more complex expressions. Ideally I want a shell that lets me see my previous raw, uninterpolated expressions, one that helps me distil out pieces for reuse. The way that history just shows the interpolated version destroys the ability to see what & where I'm borrowing bits of the past that I could be using as hints to re-compose new futures.

kazinator 10 days ago

For that specific example, I might think about a script which asks interactive questions.

  Do you want to do the kernel build? [Y/n] _

  Which diskfile to use: /path/to/diskfile_    # pre-stuffed editable input

Or else well-documented command line options, nicely explained with --help.
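
In bash, that pre-stuffed editable input is just `read -e -i`; a rough sketch of the interactive version (paths and commands are made up):

  #!/bin/bash
  read -r -p "Do you want to do the kernel build? [Y/n] " do_build
  [[ ${do_build,,} == n* ]] || make -j"$(nproc)"

  # -e enables readline editing, -i pre-stuffs an editable default
  read -r -e -i "/path/to/diskfile.qcow2" -p "Which diskfile to use: " diskfile

  qemu-system-x86_64 -enable-kvm -m 1024 -drive file="$diskfile",cache=writeback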

  • gverrilla 9 days ago

    I'm an apprentice programmer. I also like interactive programs. I use the library questionary[0] in Python, and ChatGPT is great at providing the necessary code in no time.

    0: https://pypi.org/project/questionary/

dusted 10 days ago

I have removed the limit on bash history lines and file size, and am using https://github.com/junegunn/fzf for reverse search. And of course I use ctrl+x ctrl+e to edit long command lines in vim (I have `export EDITOR=vim` in my bashrc).

x11antiek 10 days ago

I am a simpleton; my CLI usage is basic compared to what you've described. But I have a tool called Warp on my Mac which helps me save terminal commands and make notes on what they do and why. It's free and it's cool imo.

nepthar 10 days ago

I made something called shell workspaces[0] for this.

I like to tinker with bash and this has helped me keep all of the commands relevant to a particular project discoverable, accessible, and documented.

It uses a function "," for command execution. So, for instance, if the workspace file defines a function "build", when you're in the workspace, the command `$ , build` will run it.
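
A stripped-down sketch of the general pattern (just to illustrate the idea, not ws.sh's actual implementation):

  # workspace file, sourced into the shell (illustration only)
  build() {
      make -j"$(nproc)"
  }

  run() {
      qemu-system-x86_64 -enable-kvm -kernel bzImage -nographic
  }

  # ',' simply dispatches to a function defined above
  ,() {
      "$@"
  }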

It's not the most comprehensive solution to this, but it's probably the bit of shell programming that I use almost every day and has saved me tons of time.

[0] https://github.com/nepthar/ws.sh

  • Jeff_Brown 10 days ago

    I'm not sure I understand. Is the idea to step through a script, maybe executing the default value of each line, but offering the opportunity to edit that line before executing it?

kjkjadksj 9 days ago

I would just have a script with either all the variables at the top for you to set, or have it look for them in a configuration file you might change a little every run. That's just my preference for this sort of thing, though. If I can avoid using external stuff or spending time learning new syntax, and keep it as simple and stupid as possible, I try to just do that. Sounds like your solution is simple enough, with the shell history file keeping a record for you. Maybe playing with aliases or writing functions in your bashrc could save some typing for you.

OJFord 10 days ago

Sounds like you want `just` (pretty much a riff on make, but more designed for this use case). I don't find much use for it personally, but I think it fits what you're looking for.

moritonal 9 days ago

A collection of Jupyter notebooks with "!" calls to binaries. The Python requirement is inconvenient, but the consistency, output tracking and variables make it worthwhile.

binarymax 9 days ago

Which shell? If you’re using bash, you’re missing out - so I’d first upgrade to zsh or fish, which have better options.

The other thing I’d do is get really really good at writing shell scripts. Also nowadays it’s super easy to take an existing complex one liner and ask GPT to make it a robust script with args (but check it of course!)

I also like some of the other suggestions if you’re predisposed to them already, like Makefiles, which are simple, flexible, and powerful.

akincisor 10 days ago

I've recently used a ruby script with a bunch of methods that I can configure with a few yaml files to generate a shell script which I can then run.

Each run generates a timestamped folder with not only the bash script, but also a copy of the configs used to generate it, and all the data that the commands in the bash script need (json files). I find this style of generator a common pattern for commonly used but frequently tweaked scripts.

nailer 10 days ago

After about 30 years I'm really good at shell. But I don't think it's the best for managing complex workflows.

You could use a local port of something like github actions, but personally I'd pick a language (Python, TypeScript, Ruby) and make a function that takes an input kernel and optional disk and networking config, builds if the tree has changed, and returns the ssh command to the created VM.

BerislavLopac 9 days ago

If you're comfortable with Python, Invoke [0] can be a great help. You can include your bash code directly inside it, or collect your scripts in a directory and run them from Invoke tasks. Plus, you can easily use all the power of Python on top of your scripts.

[0] https://www.pyinvoke.org/

wooptoo 9 days ago

I've heard good things about atuin

https://github.com/atuinsh/atuin

  • codetrotter 9 days ago

    I use it, it’s pretty good.

    For some reason it seems to try to connect to some atuin server even though I am using default config. That worries me slightly sometimes.

    I also had one case where I was using ssh from my home to a machine at one of my parents' place, and there was an ongoing packet loss problem, so every time I pressed Ctrl-R to run some previous command I had to wait actual ages for the terminal to redraw, because it opens atuin, shows as many lines of history as it can, and also updates as you type to search. Of course that is an extreme situation, but it was wild nonetheless to see how much worse the command line experience became in those conditions because of that.

    I still use it though, and will continue to as well.

tonymet 9 days ago

1. .sh scripts

2. makefiles

3. If you're using bash, try the edit-and-execute-command command. By default, this is assigned to Ctrl-x Ctrl-e (type ctrl-x, then ctrl-e).

teyc 9 days ago

I’ve used this https://github.com/secretGeek/ok-bash

It is a menu that works off a file that contains a collection of single line commands.

I like it because it is a local command that is only relevant to the project I’m working on, and sometimes after months of absence it is easy to pick up where I left off.
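
The command file is literally just one command per line, something like this (contents made up):

  # build and boot the test VM
  make -j8 && ./boot-qemu.sh
  ssh -p 2222 root@localhost
  tail -f logs/vm-console.log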

Kalanos 10 days ago

rather than invoking the script with arguments, you could just define the variables at the top of the script and run it
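
e.g. something along these lines (all names made up):

  #!/bin/bash
  # knobs live at the top; edit these instead of passing flags
  SKIP_BUILD=0
  DISK=./test.qcow2
  SSH_PORT=2222

  if [ "$SKIP_BUILD" != 1 ]; then
      make -j"$(nproc)"
  fi

  qemu-system-x86_64 -enable-kvm -m 1024 \
      -drive file="$DISK",cache=writeback \
      -net user,hostfwd=tcp::"$SSH_PORT"-:22 -net nic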

  • Jeff_Brown 10 days ago

    +1

    Unless you depend on the results of partial execution to decide how to execute the rest of the script, this seems like the most natural solution.

austin-cheney 11 days ago

I would do that via a custom Node application executing commands as child processes driving a container.

dogline 9 days ago

If I understand you correctly, this is where I just use a Makefile, but keep it very simple ```render:\tffmpeg ...``` Have a half dozen of those, and you now have a `make render` command to run that task.

xlii 9 days ago

Lately I’ve been running such shell scripts/small commands through org-babel in Emacs. They can be fed one through another which is cool.

Also can recommend Justfile (it has great discoverability and it just works).

dgfitz 9 days ago

Compose all your commands into “does one thing well” and abstract the “thing” it does to an argument… if you have to. Then just chain them together as needed.

untech 9 days ago

Can’t you use aliases or functions in the ~/.bashrc? That’s what I do for complex lines I want to reuse.
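
For example (names made up):

  # ~/.bashrc
  alias kbuild='make -j"$(nproc)" bzImage'

  # a function when the command needs arguments
  kboot() {
      qemu-system-x86_64 -enable-kvm -kernel "${1:-bzImage}" -nographic
  }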

philipov 9 days ago

Put all the command-line configuration into a shell script with zero arguments.

sargstuff 9 days ago

Copy/write the edited shell history to a unique shell file for each set of edits. Make the shell file executable. Run the shell file containing the action you want to use.

replwoacause 9 days ago

I save stuff into SnippetsLab (macOS only)

from-nibly 9 days ago

Learn bash. Make a bash file. Make it executable. Put it on your path, or create a ~/bin directory and add that to your path.

You can write whatever crazy nonsense you need.
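
Roughly (the script name here is hypothetical):

  mkdir -p ~/bin
  echo 'export PATH="$HOME/bin:$PATH"' >> ~/.bashrc
  cp my-script.sh ~/bin/kboot   # whatever you called your script
  chmod +x ~/bin/kboot          # now `kboot` works from any new shell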

seivan 9 days ago

Snipkit and Pet.

I prefer Pet, mostly because Snipkit makes me verify 20 times (slightly exaggerated) and doesn't give me the actual command so that it would be part of the shell history.

Honestly, just dump the command in my shell and let me press enter and make that the confirmation, but whatever.

You can give arguments with default values, so they are reusable across projects.