OK. I love Raymond’s blog but this is crazy. Microcomputers existed only as a prototype in 1973 (things like Intel’s Intellec dev systems) and there were no operating systems for them. Strictly speaking, Kildall did start developing CP/M in 1973, but at that point it ran only on a simulator on a PDP-10 mainframe.
1979, sure. 1973? Way too early…
CP/M could be developed only after the launch of the 8080 and the delivery of the development system.
In UNIX, the environment variables were added in the Seventh Edition (1979-01), together with the Bourne shell.
I do not remember whether any other command interpreters used something equivalent to environment variables before the UNIX shell (excluding interpreters for general-purpose programming languages, like LISP and APL, where you can run a function in the REPL and that function can access global variables).
Therefore the quoted year may be a typo for 1979, when environment variables appeared in the UNIX shell, but were not available in the CP/M Console Command Processor (CCP, the predecessor of COMMAND.COM).
CP/M 1.0 was demoed to Intel in 1974, but they didn't buy it. iCOM FDOS was the first operating system that was available to people, and it sure didn't have environment variables.
Anyway, these operating systems didn't have multiple directories. But you could use CP/M 2.x's ASSIGN command to bind a logical name to a physical one. Minicomputer operating systems had this too, and IBM mainframes had JCL DD statements.
Which is fun, because this is the same time difference between 2020 and now
And then we think we didn't have ChatGPT in 2020
The whole story is that Microsoft just never bothered to standardize, despite using it themselves.
"Over time, programs were written with MS-DOS as their primary target, and they started to realize that they could use environment variables as a way to store configuration data. In the ensuing chaos of the marketplace, two environment variables emerged as the front-runners for specifying where temporary files should go: TEMP and TMP."
And before that there are a few paragraphs describing the migration of applications from i8080/Z80 based CP/M towards x86 based DOS via mechanical translation.
The background is that the issue hadn't existed in CP/M because there hadn't been environment variables. Perhaps if the issue had already been seen in CP/M, the developers of MS-DOS might have defined a standard variable to avoid it. Maybe. Other than that it doesn't seem to have much to do with CP/M specifically.
It could simply be: When envars were added to MSDOS…
How did you measure the fitness and decide it was 'over'?
Why do we need to adopt extant standards? (I was going to ask, why standardise? But realised that might confound the North Americans. : )
I assume that they first tried /dev/null, which failed, so then moved on to just plain null?
Otherwise it would not make sense that a Unix programmer did this. More likely, a DOS programmer misspelled NUL as null.
That's been a feature since DOS 2.0, there was even an undocumented option AVAILDEV to make the prefix mandatory, instead of having device names present everywhere. But it broke the common trick used to detect if a directory exists ("if exist c:\some\path\nul").
I honestly would have liked that better for a lot of programs than the dotfiles they litter all over my home directory.
And now, in my late twenties, suckless terminal is the only one that works reliably on a shitty old enterprise Linux system at work. Yeah, we've got xterm and konsole (the older one). I am seeing them in a whole different light now. I haven't read the source code recently, and it is effectively a foreign language to me, but just being able to have modern features without too many dependencies is a different level of bliss. This time, I am glad to have flexipatch to the rescue, since I had passed on suckless terminal as a real alternative before because I didn't want to patch it manually or resolve merge conflicts!
Even though I don't like the elitist attitude of the project, I can't deny they've got a point. Why does a terminal emulator need to be so complicated?
I wonder, is this really such a big problem? How often do people add patches or change their config?
I've configured my st once and haven't touched that build for years. I use only a few patches, like scrollback, a custom color theme, and a "plumb" patch for a few scripts.
I've also had an opportunity recently, to try a "modern" and trendy terminal and I can't see myself switching to something slower (in terms of lag) and using 10x more memory and cpu even when idle for literally zero gain.
Additionally, it helps lower the barrier to entry for a lot of people who would have shied away from the manual patching flow. You'd be surprised how often I have seen people squint at default xterm on our servers, not knowing how to configure it and messing around with xrdb (which takes a while to propagate across LSF clusters). With flexipatch it feels easier to introduce it to them, since I just say: run make after any config change to apply the setting, and restart the terminal emulator.
Ohh, I tried wezterm and ghostty. Couldn't get them working with just software rendering. And once st worked, I realized I don't need them, tbh.
I almost want to set up a VM that sets up XDG_CONFIG_HOME as ~/.foobar and see how many apps actually respect it, and how many still write to ~/.config.
And if you install more than one version of go, they get placed in $HOME/sdk [0]. Last I checked, this path is hard-coded with no override, despite this being a known issue for 8 years.
/tmp must be world-writable and for multi-user or multi-tenant systems it becomes a security hole. Storing temporary files in the current user's home directory (or a subdirectory thereof) makes sense.
What doesn't make sense is this blog post about TMP and TEMP, and ending with "I don't know why" (in different words).
The reason is simple: different programmers thought the other name was bad. They were under no obligation to come to a consensus.
Don't forget about TEMPDIR and TMPDIR! Windows also has its own environment variables for this, but generally, Linux software ported to Windows still uses TMP or TEMP.
It makes sense when it's a user option. If /tmp isn't an option due to security concerns, then use $CWD by default. I can always alter the config to some other location if I do not like it. With the amount of programs that litter $HOME, especially with caches, you have to whitelist directories when backing it up. With a naive rsync, you'll find half your transfer is junk.
I don't disagree that it would be cleaner if things were more organized, but I definitely prefer something more like what the person you replied to said, i.e. "~/.xdg/foo/{config,share,state,cache}".
The only XDG folders that seem reasonable to me are .local/share/fonts and .local/share/applications, and I think both of those are still just conventions, not actually described in any spec.
Interfacing with people is never easy.
export CONFIG_DIR="${XDG_CONFIG_HOME:-$HOME/.config}"
export CACHE_DIR="${XDG_CACHE_HOME:-$HOME/.cache}"
...
XDG_*_DIRS are a bit more complicated, but nothing that can't be solved with a simple for-loop in any modern programming language.

The remaining holdouts tend to be very old applications. (The XDG standards are less than 25 years old, and then you have to give them time to be adopted.) For some of those applications, it would create support issues even though it would be trivial to implement. For others, it would create issues because other software would have to be modified to reflect the changes. For still others, the software never had a distinct configuration directory, so untangling it would be a major effort.
In the case of the latter, just look at Firefox. Yes, it recently moved the .mozilla directory to .config, but that is in no way reflective of the XDG standards. Among other things, there are log files, cache files, and add-ons in there. In my mind, that is worse than having ~/.mozilla. Instead of having a directory that can be cleanly backed up, with the exceptions living elsewhere, I am left having to sort through everything. I don't blame Firefox for taking that approach, though: users were demanding a clean home directory and the developers had legacy code to deal with. They simply took the path of least resistance. (That said, Firefox isn't the only culprit here.)
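The "simple for-loop" for the colon-separated XDG_*_DIRS variables can be sketched like this. A minimal sh sketch, assuming you want the first data directory that contains a given file; `find_data_file` and the `myapp/schema.xml` path are made-up names for illustration, not from the spec:

```shell
#!/bin/sh
# Walk each directory in XDG_DATA_DIRS (falling back to the spec's
# default value) and print the first one that contains the given file.
find_data_file() {
    needle=$1
    dirs=${XDG_DATA_DIRS:-/usr/local/share:/usr/share}
    old_ifs=$IFS
    IFS=:                       # split the list on ':' ...
    set -- $dirs                # ... into the positional parameters
    IFS=$old_ifs
    for d in "$@"; do
        if [ -f "$d/$needle" ]; then
            printf '%s\n' "$d/$needle"
            return 0
        fi
    done
    return 1                    # not found in any data dir
}
```

The same split-then-loop shape works for XDG_CONFIG_DIRS, just with a different default.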
I do like dotfiles for portable apps where everything the program needs is in one folder. Personally, my need for portable apps has gone down year on year.
So I guess the moral of the story is: Ensure they always point to the same path, or else...
As I've gotten more senior, it turns out I was right to question it, and we can actually talk to the original Microsoft devs now; they explain the whoopsie and how they had to keep it for backwards compatibility.
So then I ask why that excuse, backwards compatibility, is valid when so many changes frequently break core compatibility and active business flows (like New Outlook), and they go full hands-off: whoa, I'm not the bad dev, you'll have to ask the new guys.
You can't ask the new guys, and they're hiding behind leetcode screens. It's no wonder these real problems don't get fixed and we have New Outlook. It's the senior dev from earlier who now works there. All the real devs are retired.
Even when I do get a real answer from Microsoft on annoying things, like the user's Documents folder being used inappropriately by random programs, or straight up forcefully deleted by OneDrive in an oopsie, the answer some senior dev invented to give me, or goes on about at length in a technical document or angry interview online, is invalidated within 6 months, when Microsoft vibe-pushes a random change that alters how these things work, in not a good way, and invalidates their entire core argument.
Just like notepad updates as another example off the top of my head. There are dev interviews talking about how this is a very simple program because it needs to be 0 risk. Then it gets a Microsoft auth login with copilot.
The whole leetcode dev attitude and Microsoft culture really ruins the entire industry. We can’t have civil discussions. Everything turns into Nuh uh your argument is invalid because you don’t work at Microsoft.
Google Chrome famously installing into AppData to bypass admin rights is a core memory. Clearly the actual intention of that feature was not for third parties to use it to bypass administrator authority. But now the devs retcon it as an intended feature, because Chrome happened to be good at the time, and sorting out the mess of deploying a third-party exception program on millions of locked-down business computers would've been a nightmare.
I first noticed this when I assigned TZ to .tar.xz (or .tar.gz), because I was lazy. Then things suddenly no longer worked. Turns out TZ is ... the timezone. So you should not define arbitrary variables, right? Well ... how do you know that? People can perhaps read tons of documentation, but I want to minimize the time investment here, so I learn mostly by doing. And as far as I know there is no trivial way to know "this can be dangerous". Those shells are fairly simple at all times, and not that sophisticated.
TMP versus TEMP may also be trivial but ... who can remember the differences? And why is it important?
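The TZ surprise is easy to reproduce. A small sketch, using POSIX-format zone strings so it works even without tzdata installed:

```shell
#!/bin/sh
# TZ is read directly by the C library: whatever string it holds
# becomes the time zone for every timestamp the process prints.
TZ=UTC0 date +%Z    # the zone abbreviation comes straight from TZ
TZ=EST5 date +%Z    # same clock, shifted five hours west
# An accidental value like TZ=.tar.xz is not rejected with an error;
# the library quietly falls back to a default interpretation, and
# timestamps silently stop matching your local time.
```

No command complains when TZ holds garbage, which is exactly why the mistake is hard to spot.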
I have come to think that environment variables are useful; I use them all the time. But they are a bit hackish and not super-elegant and may have side effects. This is not a very important side effect per se, as most people will not run into issues such as TZ or having to think which variable to use and which not, but in the moment when you DO get surprising results, and you don't know why, this may become a problem suddenly. I kind of semi-work around by prefixing via:
MY_

This is even less elegant, to then have e.g. MY_TEMP=/home/x/temp/ or something like that. But with the prefix "MY_" I rarely run into problems. So this then becomes my main pointer, e.g. MY_TEMP_DIRECTORY, and then TEMP and TMP may just be aliases to that. It's a bit stupid, but it is also simple and kind of works. Beauty is not to be found here, but it is practical, and that is not a bad thing.

In my own ruby code I also have to use ENV[] sometimes, which is a wrapper over environment variables, but I try to be as independent as possible from that. For instance, all my settings are stored in simple .yml files, which are then used to autogenerate environment variables, or to handle things in environments where no environment variables are available (this is sometimes the case; I had that issue with .cgi files many years ago, where I wanted to access all environment variables but, for a reason I don't fully remember, they were not available. Then I transitioned to those .yml files and that problem went away; now of course I need to load those .yml files when necessary, but this is trivial in ruby, YAML.load_file() works very well for the most part, and I find it more reliable than depending on reading environment variables. On some restricted environments even that may be unavailable; I ran into that on a university campus running ancient Linux systems, so I have to be flexible in this regard).
> Uppercase and lowercase letters shall retain their unique identities and shall not be folded together.
> The name space of environment variable names containing lowercase letters is reserved for applications.
I love rotary woofers :) I hope to get one some day.
Huh. That is interesting, it was before my time, and I never heard of this :D
It was necessary because both RAM and disk space were so severely limited, and because almost every computer came with an assembler.
Many CP/M programs were expected to run in as little as 32K of RAM and 130K of slow-ass floppy disk, or worse, from a cassette tape. If you had 64K of RAM and a 360K disk, you were something special.
Unlike today, most programs were optimized for the bottom of the market, not the top. You wanted your program to run on as many systems as possible so you could sell more copies. You didn't just shrug your shoulders and tell people to upgrade their hardware. The failure was yours, not your customers'.
There simply wasn't room for any kind of external configuration file, or a program to generate that configuration file. Common functions could be accessed via a command-line parameter, but even that logic eats valuable bytes.
Today people complain about the MacBook Neo having just 8,000,000,000 bytes of RAM, saying you can't do anything in such a limited space.
Meanwhile, in 1978, people could write an entire rudimentary IDE in 2,048 bytes.
But patching something specifically to configure it is such a weird concept that I never would have thought of it.
One could say that the difference is whether the developers intended the changes you're making to be possible or not, but what about programs with dedicated modding APIs?
But these didn't normally have so much to do with patching executables to add/change functionality.
What was very common on those devices was using the "poke" command in BASIC to change a handful of values, but while it was possible to change code in this way it was much more common to be changing the value of variables - things corresponding to "number of lives left" and the like. Not all that different.
Fairly quickly, though, the games were entirely in machine code and used fancy loaders (still from tape mostly) so you didn't get access to BASIC. This created a market for devices that let you get at a monitor program - the "Multiface" series of addons². They had at least three generations of that device, but the company slightly weirdly evolved into a music production company³ after that, which is kind of cool.
Er, ok, I'll stop now while I still can...
Edit: PS - you should ask him about it. Tell him another former ZX81 owner says "Hi" and that my fingers still ache from that keyboard. Although I sneer a bit at its capabilities, it kicked off an interest in computing that's still paying the bills 40 something years later...
----
¹ https://spectrumcomputing.co.uk/entry/2000265/Book/Not_Only_...
² https://en.wikipedia.org/wiki/Multiface
³ https://en.wikipedia.org/wiki/Romantic_Robot
He says hi back, for him it was purely a work machine (PhD in Chemistry, never did anything with computers as in CompSci), he doesn’t remember too much from back then but he said he loved the architecture of it.
Their success was largely down to their very low price point (clever cost-shaving engineering) at a time when there was a huge untapped public interest in computers.
> Huh. That is interesting, it was before my time, and I never heard of this :D
Yep, it was a thing, and for /some/ programs that were originally CP/M programs (e.g., WordStar 7.0 for DOS) it continued for a long time. The WordStar 7 documentation included patch locations to use (this time, IIRC, with DOS debug.exe) to change various behaviors of the program.
At least on the PDP-11, that's how it went for something like RSX-11. I believe the same is true for early IBM mainframe operating systems like DOS/360: I think all programs had to be relinked, because one option you had was to move things around in memory, and ancillary programs had to know the memory map.
Even later: I wrote a device driver for Xenix: you got a link kit for the OS, you relinked it with your custom driver object file included.
On CP/M you patched the running image (perhaps with DDT), then used the CP/M SYSGEN command to install it on a disk.
https://suckless.org/
Edit: oops just saw that this was already mentioned in another subthread on this page.