ilyash / ngs
- Friday, April 22, 2016, 03:11:56
Next generation UNIX shell. See the man page.
Shells are Domain Specific Languages. The domain has changed greatly since the shells we use today were conceived. The shells never caught up.
What I see is a void. There is no good language for system tasks (and no good shell). What's near this void is outdated shells on one hand and generic (non-DSL) programming languages on the other. Both are being (ab)used for system tasks.
The problem with outdated shells looks pretty clear: they were made with one kind of task in mind but are used for other, bigger and more complex tasks. Such scripts usually look like a fight against the language, working around it much more than using it to solve the problem.
The problem with using general-purpose programming languages (Python, Ruby, Perl, Go) is not so obvious. A domain-specific language makes your life much easier when solving the tasks that the language was built for. Of course you can write to a file in any language, but probably not as easily as `echo something >my_file`. You can run a program, but it probably won't be as simple as `ls`. The scripts that I've seen (and written, in Python and Ruby) look too verbose and show unnecessary effort. Such scripts do not look like an optimal solution (to say the least).
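To make the contrast concrete, here is the kind of one-liner the domain rewards (a toy sketch; the file path is made up):

```shell
# Write to a file and read it back: one short line each in a shell.
echo something > /tmp/my_file
cat /tmp/my_file
```

In a general-purpose language the same usually takes an open/write/close (or a context manager), plus a subprocess call just to run `ls`.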
This document started as an internal draft. Some sections might not be clear. Still, I'm exposing it as per the "release early" policy. Feel free to open a GitHub issue or email me directly: ilya (DOT) sher (AT) coding (DASH) knight (DOT) com
Development. Help is welcome.
Compile:

    apt-get install uthash-dev libgc-dev libffi6 libjson-c2 libjson-c-dev
    cd c
    make

Run a script:

    ./ngs SCRIPT_NAME.ngs

Run the tests:

    cd c
    make test
Fork on GitHub, work on whatever you like, make a pull request. If the change is big, it's better to coordinate with Ilya before you start.
Screencast of small-poc is on YouTube: http://www.youtube.com/watch?v=T5Bpu4thVNo
Don't block: allow typing the next commands even if previous command(s) are still running.
Provide good feedback. In a GUI, for example, this can be a green / red icon near a completed command showing its exit status. Tweaking the prompt to include such info, or typing `echo $?` all the time, is not what I dream about.
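The `echo $?` annoyance comes from `$?` being transient, so it has to be checked immediately after every command. A minimal sketch (`sh -c 'exit 3'` stands in for any failing command):

```shell
# $? holds only the most recent command's exit status; the `|| status=$?`
# guard captures it without aborting a script running under `set -e`.
status=0
sh -c 'exit 3' || status=$?
echo "exit status: $status"   # prints: exit status: 3
```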
All operations made via a UI, including mouse operations in a GUI, must have and display a textual representation, allowing one to copy / paste / save to a file / send to a friend.
Different UI modules must exist. In the beginning we can start with these:
Commands scroll up; new commands are added at the bottom. When a command that hasn't completed yet reaches the top of the screen, it can be converted to a mini-area at the top (right?) of the screen, representing the command and its current progress (and exit status later).
[Later] Confirmation mode. One user in collaboration mode gives the command to execute, another user must approve the command for execution.
Display structured results as real f*cking structures (JSON, YAML, ...).

Think `awk`. Well, if the fields in the records are the same, it's actually a table. `$1` in awk could be `id` or `name`, referencing the data by column name and not by field number. Yes, you have `jq` and it's close, but it still works (in the best case) with a list of records with the same fields.

Underline red/green for existing/non-existing files?
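As a present-day approximation of the by-name access described above, classic awk can map header names to column numbers (the data here is made up):

```shell
# Read the header row into a name->column map, then select by name, not by $2.
printf 'id name\n1 alice\n2 bob\n' |
  awk 'NR==1 { for (i=1; i<=NF; i++) col[$i]=i; next } { print $col["name"] }'
```

This prints `alice` and `bob`: exactly the "reference by column name" idea, just with none of the ergonomics a shell could provide natively.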
Actions on objects that are on screen. Think right click / context menu.
Commands history: among duplicate commands, all but the last should be grayed out, so that the non-grayed-out commands are unique.
When hovering over an object, highlight the same object everywhere it appears.
Feedback
Ability to add new commands after a running one completes
Manage multiple servers at once
Preferably using nothing more than standard SSH, maybe uploading required parts automatically when a command needs to run.
Smart displaying of results, such as "OK on all T servers" or "OK on N servers, fail on M servers, out of T servers total"
Smart handling of failures: maybe divide the hosts into groups depending on command status / output and then let the user manage these groups. Consider dividing into several "fail" groups depending on the failure mode. Think of a deploy script that should handle these conditions. Also make one large group for any failure (containing all fail sub-groups).
Automatic server groups by, for example, `netstat -lpnt | grep -q :8000`, `pgrep java`, or `dpkg -l '*apache*' >/dev/null`
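A hedged sketch of such grouping; `probe_hosts` is a hypothetical helper, and the probe runs locally here as a stand-in for running it remotely over SSH:

```shell
# Hypothetical helper: print the hosts for which a probe command succeeds.
# A real implementation would run the probe remotely: ssh "$host" "$probe".
probe_hosts() {
  probe=$1; shift
  for host in "$@"; do
    if sh -c "$probe" >/dev/null 2>&1; then   # local stand-in for ssh
      printf '%s\n' "$host"
    fi
  done
}

probe_hosts 'command -v sh' web1 web2   # every host passes this trivial probe
```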
Allow running commands on remote hosts and connecting them with pipes:
@web_servers { cat /var/log/messages } | @management_server { grep MY_EVENT | sort > /tmp/MY_EVENT }
That's just for the sake of an example; it would probably be better to `grep` locally. The `cat` output can not be pushed or pulled directly between the machines (and therefore will be transferred through the controlling host, where the shell runs).
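One plausible desugaring of the example into plain ssh (host names are placeholders), followed by a runnable local simulation where `printf` stands in for the remote `cat`:

```shell
# Hypothetical desugaring, relayed through the controlling host (not run here):
#   ssh web1 'cat /var/log/messages' | ssh management 'grep MY_EVENT | sort >/tmp/MY_EVENT'

# Local simulation of the same data flow:
printf 'b MY_EVENT\na other\nc MY_EVENT\n' | grep MY_EVENT | sort
```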
Easy integration with currently available software
Smart completion, context sensitive:

- `wget .../x.tgz`, then `tar [COMPLETION_KEY]` [maybe some choice keys] -> `xzf x.tgz`
- "Mentioned" completion: `apt-cache search ...`, then `apt-get install ...`. Isn't this copy+paste annoying? It's already on the screen, it's a package name, and still the system can't complete...

Toaster/script prepare mode/assist
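A crude model of the "mentioned" completion idea using bash's `compgen` builtin: filter words already on the screen by the typed prefix (the package name is made up):

```shell
# compgen -W filters a candidate word list by a prefix, which is exactly what
# a "mentioned words" completer would do with the text already on screen.
screen_words='apt-cache search foo libfoo-dev development files'
bash -c "compgen -W '$screen_words' -- 'libf'"   # prints: libfoo-dev
```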
Processes communication
Remove the stupid limit of one output connected to one input of the next process; allow fan-out and fan-in
Support named inputs and outputs
Allow on-the-fly reconnection of pipes/files. Think of a logger writing to a file on a full disk: just reconnect it to another file on another partition/disk.
UI representation of a job. A map with all involved processes, their open files, sockets, pipes, resource usage (CPU, disk, network), process running time, cumulative CPU time, ...
Two types of history: ...

Show the current value (e.g. for `ls $a`, when the cursor is on `$a`)

Show what is being done (e.g. "Copying mydata.txt to /tmp/")

Show progress (e.g. "70%" or "File 7 out of 10")

Two languages actually: the `$(...)` syntax and the `{...}` syntax.

Polymorphic `replace`:

- `replace(Str orig, Str a, Str b)`
- `replace(Array orig, Str a, Str b)` - replaces in all the strings in the `orig` array (assuming `all(orig, isStr)`)
- `replace(File f, Str a, Str b)` - will `sed` the file, possibly backing it up

Measure and graph pipes throughput (and/or process performance, for example by how fast a process reads its input file)
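What the `replace(File f, Str a, Str b)` variant above could do under the hood, sketched with sed's in-place mode and a backup suffix (the file name is made up):

```shell
# Edit the file in place, keeping the original as a .bak backup.
printf 'hello world\n' > /tmp/replace_demo.txt
sed -i.bak 's/world/there/' /tmp/replace_demo.txt
cat /tmp/replace_demo.txt       # prints: hello there
cat /tmp/replace_demo.txt.bak   # prints: hello world
```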
In a job, per process state / title
Number of open files / sockets is part of the process status
Interactive pipe construction: show intermediate results
Preview output with a shortcut key (without running the command and inserting its output into the stream)

Dynamic update of command output; think `ps` / `top` when their output is in the stream
On-the-fly check that the called commands exist

On-the-fly check of command arguments
A hosts group that is automatically updated should show its last update time

Statuses, e.g. `pending` for EC2 machines (in the shell, can be pending till the SSH connection is ready)

Statuses should have a "debug level" to them so that background jobs can be shown as less important
Sessions. A total and global environment with its own (saved) history, etc. Think session per project or client. Option to share a session with other people. Open issue: history - common for the session or per user? (Probably per user.)
Quick navigation. Search for session, host, host group, IP, history, etc.
Icons (in any UI that allows it). Icons are processed much faster than text.
Every failed script must have exact error information attached to it. No re-run with added `-x` or `echo`s should be required.
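For reference, today's closest workaround is a bash ERR trap reporting the failing command; a sketch (bash-specific, `$BASH_COMMAND` names the command that failed):

```shell
# Without re-running under -x, the ERR trap reports what failed.
cat > /tmp/trap_demo.sh <<'EOF'
trap 'echo "failed: $BASH_COMMAND" >&2' ERR
sh -c 'exit 2'
true
EOF
bash /tmp/trap_demo.sh 2>&1   # reports the failing sh -c 'exit 2' command
```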
Commands of the shell will pass objects in pipes between them, not strings. External commands should use JSON or an API (not sure here).
For remote host or hosts group, give an option to execute command(s) as soon as the host is available. Notify when done, failed or timed out.
In-line editor (maybe) for when a command line becomes a few lines long. Or a quick way to take the long line and continue editing it in an editor.
BIG: Arguments / etc. description language. Think Javadoc:

- `curl URL` -> curl has an argument of type URL
- `curl [SHORTCUT_KEY_PRESSES]` -> menu with object types -> remember the selection
- `ec2kill < ec2din ...`

Define which commands will run where when using a hosts group. Think `ec2...` on a group of machines which includes all EC2 machines: the "management" machine, web, app, etc. servers.
Hosts groups should be organized in a stack (pushd / popd style)
Hosts groups will be ordered. When running commands, one could specify whether to run in order or asynchronously.
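A sketch of both modes over an ordered group, with `echo` standing in for the remote command (host names are placeholders):

```shell
hosts='h1 h2 h3'

# In order: one host at a time, deterministic ordering.
for h in $hosts; do echo "run on $h"; done

# Async: launch all at once and wait; completion order is not deterministic.
for h in $hosts; do echo "run on $h" & done
wait
```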
The following instructions should work (tested on Debian):
    cd small-poc
    mkdir ssl
    cd ssl
    openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout mysitename.key -out mysitename.crt
    cd ..
    npm install
    nodejs server.js
Commands to try:

- `ls`
- `pr` - a long process with a progress bar
- `sleep` - a process that sleeps for 5 seconds
- `fail` - a process that fails