Ran into some bugs, like one machine that seems to cause errors and mess up the output on restart, although that looks like it might have been addressed in this release.
If it helps, I put together a video when initially exploring PyInfra: https://www.youtube.com/watch?v=S-_0RiFnKEs
You don't have to do anything crazy with Ansible for that YAML DSL to become the opposite of helpful. Things that would be quite straightforward to express in code become cumbersome, hard to understand and hard to debug. Jinja is often a horrible choice too (and in Ansible you don't have a choice), and Ansible requires it excessively, in places where you want proper types and not just a string.
The biggest difference is that Pyinfra is simply Python code. It's incredibly easy to control the system in whatever manner you need to. You can probably do the same thing in Ansible, but it's never quite as obvious how to do it. This also means it's much more clear where and why things work the way they do in Pyinfra, where in Ansible I end up digging through numerous role files to try to find where some variable gets injected.
Incredibly frustrating that the data you want is right there but you can't easily grab it.
If you're doing data manipulation locally you would simply write Python code.
Operations[1] are Python functions which execute (yield) commands which will be run on hosts.
That's the gist of what it takes to write custom modules for Pyinfra.
[0] https://docs.pyinfra.com/en/3.x/api/facts.html [1] https://docs.pyinfra.com/en/3.x/api/operations.html
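As a toy illustration of that yield-commands model (deliberately NOT the real pyinfra API; the function and its arguments are invented), an operation is essentially a generator of shell strings:

```python
# Toy model of "operations yield commands"; this is NOT the real pyinfra
# API, just the shape of it. An operation looks at the current state (a
# "fact") and yields only the shell needed to converge it.
def packages(installed, wanted):
    """Yield shell commands that bring `installed` up to `wanted`."""
    missing = sorted(set(wanted) - set(installed))
    if missing:
        yield "apt-get update"
        yield "apt-get install -y " + " ".join(missing)

cmds = list(packages(installed={"curl"}, wanted={"curl", "nginx"}))
# cmds == ["apt-get update", "apt-get install -y nginx"]
```

Idempotence falls out naturally from this shape: on an already-converged host the generator yields nothing.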
But the main guy who developed it at that company left, so no idea on its longevity.
Switched to Pyinfra and the difference is night and day. You write Python code, you can organise your stuff into functions, classes and whatever you like, and then instantiate them as you like. Highly reusable configuration.
You have full power: you can call boto to fetch the list of servers to target, filter based on tags and whatnot. The sky is the limit because it is NOT a DSL (or YAML) but full-blown real Python.
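A sketch of what such a dynamic inventory could look like (the EC2 call is stubbed out with plain dicts here; in real use the instance list would come from a boto3 `describe_instances` call):

```python
# Hypothetical dynamic inventory: a pyinfra inventory file is plain Python,
# so the host list can be computed. Instance data is faked below; swap in
# boto3's describe_instances for the real thing.
def hosts_by_tag(instances, role):
    """Return DNS names of running instances whose `role` tag matches."""
    return [
        i["dns"]
        for i in instances
        if i["state"] == "running" and i.get("tags", {}).get("role") == role
    ]

instances = [
    {"dns": "web-1.example.com", "state": "running", "tags": {"role": "web"}},
    {"dns": "db-1.example.com", "state": "running", "tags": {"role": "db"}},
    {"dns": "web-2.example.com", "state": "stopped", "tags": {"role": "web"}},
]

web_hosts = hosts_by_tag(instances, "web")
# web_hosts == ["web-1.example.com"]
```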
It worked well and was nicer to deal with than test kitchen for testing UNIXy things (is service running and/or enabled, does file have right permissions, does file include $TEXT, etc). It was very useful for us during big linux upgrades, such as when ubuntu went from upstart to systemd. It can also be good at capturing edge cases with brittle outcomes (especially as ansible went through enormous changes after the red hat acquisition).
Dislikes? I had to fight with pyenvs a bit.
Honestly the bigger issue was testing x86 docker images on an arm mac, as molecule didn't cleanly support cross platform images and we did pull in x86 binaries for our playbooks (by the end of my time at said company, I was also directly managed by product managers who didn't care about tech debt and I couldn't deal with the otherwise desirable idea to move our compute to ARM - a rant for another day). This may also be fixed now.
The only issue was that I had to implement some facts and operations myself that were probably available in some Ansible package, but to be honest it was trivial.
I found PyInfra to be a great tool for the job at hand. Even though it didn't have many of the operations I needed, I found it easy to write new operations specific to macOS management tasks.
I recently looked at it again to help build EC2 Mac AMIs in combination with Packer, but I ended up with pydoit this time instead.
If you're a software engineer who wants to setup and maintain infrastructure, give PyInfra and Pulumi a go!
Huge fan of PyInfra. For my homelab, I use Pulumi with Python and PyInfra to build fully declarative intent based infrastructure. You can use actual software engineering principles like composition, inheritance, DI to setup and wire your infrastructure and services. One of the benefits of this is your infrastructure and services are now self documenting (have them write out a mermaid diagram!) and easily testable using pytest (from cheap unit tests to extensive integration tests (I use Incus)).
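A minimal sketch of that composition + self-documentation idea (toy classes, nothing pyinfra- or Pulumi-specific): services are plain Python objects that know their dependencies, so the wiring itself can emit a mermaid diagram.

```python
# Toy sketch: services as plain objects whose dependency wiring doubles
# as documentation (here, rendered as mermaid edges).
class Service:
    def __init__(self, name, depends_on=()):
        self.name, self.depends_on = name, list(depends_on)

    def mermaid(self):
        """Return mermaid edges for this service and its dependencies."""
        lines = [f"{d.name} --> {self.name}" for d in self.depends_on]
        for d in self.depends_on:
            lines = d.mermaid() + lines
        return lines

db = Service("postgres")
cache = Service("redis")
app = Service("app", depends_on=[db, cache])
print("\n".join(["graph TD"] + app.mermaid()))
```

The same objects are cheap to unit-test with pytest, which is the point: the infrastructure graph is ordinary data you can assert on.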
Instead of Pulumi, I originally used Terraform CDK with Python before CDK got IBM'd. The migration to Pulumi was refreshingly painless. My original reason for not choosing Pulumi was the crippled state of the open source, self hosted backend support a decade ago but it looks like that is now way more mature and less crippled.
PyInfra is a breath of fresh air compared to Ansible - it's not just fast, it's more Pythonic, so IDE features actually work, and it's readable, maintainable and debuggable. I call it infrastructure for software engineers.
If anyone wants to use an AI agent to try out PyInfra - One issue I've faced is that PyInfra was rearchitected in v2 (and some more in v3?) but what belongs in v1 vs v2 vs v3 isn't very clear, so an AI agent could spend a lot of time writing v1 code, having it fail and iterate to v2 and then to v3.
The official site uses the version in the URL as the namespace but it seems like the SOTA AI agents don't pay much attention to that.
Maybe writing an llms.txt for PyInfra v2 or v3 would be an extremely useful task to help onboard newcomers?
---
The original post by the OP https://news.ycombinator.com/user?id=wowi42:
Disclosure: PyInfra core contributor here. We just shipped 3.8.0.
PyInfra is an agentless infrastructure automation tool. Same job description as Ansible, Salt, Chef. SSH into hosts, describe desired state, it diffs and converges. No agent, no central server, no daemon.
The difference: your "playbook" is just Python. Not Python cosplaying as YAML. Not Jinja smuggled inside YAML inside a Helm chart inside a Kustomize overlay. Actual Python:
from pyinfra.operations import apt, files, server
apt.packages(packages=["nginx"], update=True)
files.template(src="nginx.conf.j2", dest="/etc/nginx/nginx.conf")
server.service(service="nginx", running=True, enabled=True)
Idempotent operations. Facts gathered from hosts, branched on with normal `if` statements. Real loops, real imports, a real debugger, real type hints. Your editor autocompletes arguments because, brace yourself, they are just function signatures.
About YAML. Wonderful format. For about eleven minutes. Then someone needs an `if`, and you have `{% if %}` inside a string inside a list inside a map. Then someone types `no` as a country code for Norway and it ships to prod as `False`. Then someone indents with a tab and the parser dies without saying where. Congratulations, you reinvented a programming language. Badly. The honest move is to admit you wanted code, then write code. PyInfra skips the eleven good minutes and goes straight to code.
Release notes in the link. Happy to answer questions.
Infrastructure as Code, not infrastructure as YAML.
Disclosure: another contributor here.
TBH, I was worried a few years ago that there was basically just one (original) contributor. This now gives me added trust that I'm making the right decision to lean heavily into it.
I hope more people start using pyInfra.
Thank you for your contribution and attention!
If you're reading this, I'll indulge and re-ask you the two questions:
- question 1: There's clearly a demand for a "Python as a DSL" for infrastructure projects - CDKTF/Python, CDK/Python, Pulumi, cdk8s etc are very popular. I would have imagined pyinfra to be way more popular and ubiquitous than it really is! Do you have thoughts on why pyinfra isn't more popular? How do people typically discover pyinfra? I would imagine any Python dev would intuitively grab pyinfra over Ansible?
- question 2: Do you have any thoughts about cdk8s? As you know well, Kubernetes has similar YAML "hell," and as someone who spends significant resources on pyinfra, I would guess you have given something like cdk8s thought?
I'm happy to engage either over email or here, don't have a preference.
Again, Thank You for building and sharing pyInfra.
There are currently 3 active maintainers incl. the creator of pyinfra. But there are many more contributors incl. repeat contributors.
You can substitute Pulumi for Terraform, PyInfra for Ansible and google for sample projects that use Terraform and Ansible to get a good idea of their strengths and how they come together.
Then, you take that understanding and you realize using PyInfra and Pulumi, you can do all of that in just Python, using all of Python's rich ecosystem.
I worked for a telco company that had a lot of Nortel Passport devices (does anyone remember what Frame Relay is?). We started changing the network from Nortel to Cisco. Cisco used telnet (later SSH), but the Nortel people were extremely reluctant to switch.
Turns out the Nortel network management system (Nortel NMS) had a very interesting feature: you could open the command console to connect to one of the Passport devices... or you could connect to a device group (or the whole network) and run the same command on all devices.
This was great for auditing which version every single device in the network was running... or for changing access-lists globally.
https://codeberg.org/common-good/welder
The desire was to have any kind of configuration management in a team with people for whom the barrier of learning the YAML based Ansible DSL was too high.
- Doesn't unnecessarily send code over the network.
- Has some sort of "execution optimizer".
Think, for example, of the query planner/optimizer of a database. Or, as a good example, the query planner of the polars framework as opposed to how pandas works.
If I do a for loop and each loop iteration copies a file into the same dir, the optimizer should catch that and send over one compressed tar file.
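A toy of what such a plan optimizer could look like (the step tuples are invented for illustration; no real I/O happens):

```python
# Sketch of the wished-for execution optimizer: collapse many per-file
# copy steps targeting the same directory into one tar upload. Purely
# illustrative planning logic over invented ("copy", src, dst) tuples.
def optimize(plan):
    copies = [step for step in plan if step[0] == "copy"]
    others = [step for step in plan if step[0] != "copy"]
    # Only batch when several copies all target one destination directory
    if len(copies) > 1 and len({dst for _, _, dst in copies}) == 1:
        dst = copies[0][2]
        files = [src for _, src, _ in copies]
        return [("tar_upload", files, dst)] + others
    return plan

plan = [("copy", f"conf/{i}.cfg", "/etc/app/") for i in range(3)]
optimized = optimize(plan)
# optimized == [("tar_upload", ["conf/0.cfg", "conf/1.cfg", "conf/2.cfg"], "/etc/app/")]
```

A real implementation would need to preserve ordering constraints between steps, which is exactly the kind of dependency analysis a query planner does.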
“Built on Python, Salt is an event-driven automation tool and framework to deploy, configure, and manage complex IT systems. Use Salt to automate common infrastructure administration tasks and ensure that all the components of your infrastructure are operating in a consistent desired state.”
https://docs.saltproject.io/en/latest/topics/about_salt_proj...
pyinfra is just python that gets transpiled into ssh commands
https://github.com/pyinfra-dev/pyinfra/blob/3.x/src/pyinfra/...
Stuff I threw into the inputs before working with pyinfra
I could likely vibecode something up if I had to, but I'm interested in a job orchestration system that can run things like upgrades, scheduled backups, ideally with a nice dashboard showing successful/failed jobs.
I despise YAML, but I can appreciate that it makes it harder to introduce imperative logic, and it forces you to stay on the paved path - which is very well-tested.
This is just the pendulum swinging back again, and at least Python tends to be a little less "clever" (and therefore less write-only) than Ruby.
It seems to me that infra management is inherently suited to declarative logic. I'm pragmatic enough to understand why SWEs with little infra experience might prefer an imperative approach, but I tend to think you should pick one or the other and stick to it. In my experience, hybrid systems end up combining the worst aspects of both.
Yep. IMO, imperative is definitely easier to reason about, and it’s what most programming languages are designed around, but it is absolutely the wrong approach for infrastructure. There are too many things that can go wrong that you may or may not have designed for. Declarative _is_ the state.
On footguns. Totally hear you that "Python lets you do anything" feels like a footgun. The flip side that I think gets missed: because it is real Python, you can actually test it. Pytest, mypy, ruff, jump-to-definition, refactor-rename, all of it just works. Unit-testing a 400-line YAML role with nested Jinja conditionals is genuinely hard, and that gap is what pushed me toward PyInfra in the first place.
On "importing Python libraries introduces bugs". This one I think is worth a closer look, because the mechanics are not what they appear. PyInfra does not run Python on your servers. It runs Python on your control node to plan the change, then transpiles each operation to plain POSIX shell and pipes that over SSH. If you run with `-vvv` you can see it: `sh -c '...'` and nothing else on the wire. The target needs zero Python, zero agent, zero runtime. So whatever library you imported into your deploy script ran locally, produced a string of shell, and that string is what touches the box. A bug in some PyPI dependency cannot throw mid-operation on the host, because there is no Python on the host to throw it. Worth noting that Ansible, by contrast, ships a Python interpreter and module code to the target for most tasks, so if anything the library exposure on the executing side is larger there, not smaller.
On the control node, sure, you have dependencies, same as Ansible has Jinja2, PyYAML, paramiko, cryptography, and a long tail of Galaxy collections of varying quality. PyInfra has a stable API, solid test coverage, idempotent operations, and a real two-phase model (gather facts, then apply) so the apply phase is deterministic generated shell rather than arbitrary code running on the box.
On YAML keeping you on the paved path. I really wanted this to be true for years, honestly. In practice, the moment you need a conditional you end up writing `{% if %}` inside a quoted string inside a map inside a list inside a role, with no type system, no debugger, and a few sharp edges in the parser (`no` as boolean, leading zeros as octal in YAML 1.1, tab/space mixing failing without a useful pointer). And the escape hatch when Jinja-in-YAML cannot express what you need is... writing a custom Python module. So you end up writing Python anyway, just with worse tooling around it.
The way I would put it: PyInfra is Python where Python helps (writing, testing, planning) and shell where shell belongs (executing on the host). Happy to dig into any specific footgun you have run into though, those are usually the most useful conversations.
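For instance, a plan function in that style is trivially testable with plain pytest-style asserts (toy sketch; the fact dict stands in for pyinfra's gathered facts):

```python
# Toy two-phase sketch: facts in, deterministic shell strings out. Because
# this is ordinary Python, it can be unit-tested without touching a host.
def nginx_plan(facts):
    cmds = []
    if "nginx" not in facts["packages"]:
        cmds.append("apt-get install -y nginx")
    if not facts["service_running"].get("nginx", False):
        cmds.append("systemctl start nginx")
    return cmds

def test_fresh_host_installs_and_starts():
    facts = {"packages": [], "service_running": {}}
    assert nginx_plan(facts) == [
        "apt-get install -y nginx",
        "systemctl start nginx",
    ]

def test_converged_host_is_a_noop():
    facts = {"packages": ["nginx"], "service_running": {"nginx": True}}
    assert nginx_plan(facts) == []
```

Try unit-testing the equivalent Jinja-in-YAML role at this granularity and the difference in tooling becomes obvious.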
This war will never end... because there are genuine tradeoffs on both sides. YAML being a bad data-description format isn't actually central to the question of whether you describe infra as data or as code. You can use JSON if you want. Data is static, 100% predictable. Code is non-deterministic right up to the halting problem. If your infra should look different on Wednesday than on Thursday, well, it can do that! Some people like it, some people think it's the definition of hell.
Terraform makes an interesting tradeoff to try and have the best of both worlds but ultimately still falls on the same issue ... I've not seen one project yet of any complexity that didn't use workarounds to implement optional components (let's just pretend there's a list of them and it has 1 or zero elements in it!).
Ultimately I agree with your philosophy but maybe not your language. IMHO You really want a language that is built from the ground up around static typing and immutable constructs for this. Get as close to that predictable determinism as possible. But then, if the whole world knows python, I guess python it is.
Right on.
It's amazing to me that we've spent decades with programming languages and environments which can accurately guess what you're about to type next, which have enormous expressiveness while maintaining cogency, which are intuitive and well understood by humans, which have endless libraries and an infinity of ways of connecting with the world.
And what do we use to configure the most sophisticated infrastructure to run such code? Yet another mark-up language!
Real regexes (actually regular...) are infinitely better than Python code matching the same strings (when they are sufficient) - you can compute their intersection, union and complement, and check whether they can match anything at all (and generate an example automatically).
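To make that concrete, here is a minimal product-automaton sketch: two tiny hand-built DFAs and an emptiness check on their intersection (pure Python, no regex engine involved):

```python
from collections import deque

# Each DFA is (start state, accepting states, transitions {(state, sym): state}).
even_as = (0, {0}, {(0, "a"): 1, (0, "b"): 0, (1, "a"): 0, (1, "b"): 1})  # even number of a's
ends_b = (0, {1}, {(0, "a"): 0, (0, "b"): 1, (1, "a"): 0, (1, "b"): 1})   # ends with b
only_as = (0, {0}, {(0, "a"): 0, (0, "b"): 1, (1, "a"): 1, (1, "b"): 1})  # a* (state 1 is dead)

def intersection_nonempty(d1, d2, alphabet="ab"):
    """BFS over product states: does some string get accepted by both DFAs?"""
    (s1, f1, t1), (s2, f2, t2) = d1, d2
    seen, queue = {(s1, s2)}, deque([(s1, s2)])
    while queue:
        q1, q2 = queue.popleft()
        if q1 in f1 and q2 in f2:
            return True
        for sym in alphabet:
            nxt = (t1[(q1, sym)], t2[(q2, sym)])
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

intersection_nonempty(even_as, ends_b)  # True: "b" has zero a's and ends in b
intersection_nonempty(only_as, ends_b)  # False: a string of only a's never ends in b
```

No amount of static analysis can answer the same questions about an arbitrary Python matching function, which is exactly the power/guarantee tradeoff being described.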
For software builds, Bazel and others use Starlark, which is a restricted Python subset, so builds can be guaranteed finite and can be reasoned about.
Ansible may or may not offer any benefits in return for the limits (I am not an ansible guru), but in general, most tasks do not need a Turing complete configuration/specification language - and it is then better to NOT have Turing completeness.
This is not really different than C vs Rust, or even Perl regular expressions (unbounded execution time) vs real regular expression. With great powers comes great abilities to shoot yourself in the foot.
The power/guarantee balance is delicate, and you can’t hold the stick at both ends. People will always complain.
https://github.com/bazelbuild/starlark
In the same way that it's possible to have an xml/json/yaml/toml config that creates despair in those who have to maintain it, a python or bash script can grow into a monster in the basement.
Or, it could be a cogent script that makes its intent and operation obvious. I prefer that when possible.
Convex does this well, replacing SQL (somewhat yaml-like sucky old declarative language) with JS/TS but in a well-locked-down environment with limits to ensure one mutation or query doesn’t take down the whole DB.
You've almost guessed the problem. Too much expressiveness is a bad thing. This is a problem I encounter a lot more often than I'd like. It's very often much easier to build something more generic than what the user actually needs, and then testing it becomes a nightmare.
To make this more concrete, here's a case I'm working on right now. Our company provides customers with a tool to manage large amounts of compute resources (in HPC domain). It's possible to run the product on-prem, or in different clouds, or a combination of both. Typically, the management component comes with a PXE boot and unfolds from there. A customer wanted integration with a particular cloud provider that doesn't support this management style, nor can it provide a spare disk to be used for management, nor any other way our management component was prepared to boot.
The solution was to use netboot that would pre-partition the disk and use the first N partitions to store the management component as well as the boot, ESP / bios_grub partition etc. It had to be incorporated into the existing solution that encompasses partitioning and mounting all the resources available to a VM, including managing RAIDs, LVM, DM and so on.
The developers implemented it as a GPT partition name with a pre-defined value that would instruct our code to ignore the partitions found prior to the "special" partition and allow the user to carry on as usual, pretending that the first fraction of the disk simply didn't exist (used by netboot + the management component).
This solved the immediate problem for the user who wanted this ability, but created thousands of problems for QA: what happens if there's a RAID that uses the "hidden" partitions? What happens if the user accidentally creates second /boot partition? What happens if the user wants whole-disk encryption? And so on. It would've been so much better if these questions didn't exist in the first place, than to try to answer them, given the "simple" solution the developers came up with.
If you programmed for just a year, I'm sure you've been in this situation at least a few times already. This is exceedingly common.
* * *
There's an enormous value to being able to restrict the possible ways a program can run. Most GUI projects? -- They don't need infinite loops! It just makes programs unnecessarily hard to verify. But it's "easy" to have a single loop language element that can be made infinite if necessary. Configuration languages exclude whole classes of errors simply by making them impossible to express.
However, I have to agree that, specifically, YAML is a piss-poor configuration language. It has way too many problems that overshadow the benefits it offers. We, collectively, decided to use it because everyone else decided to use it, making it popular... and languages are "natural monopolies". So, one could certainly do better ditching YAML, if they can afford to go unpopular. But ditching the idea of a configuration language is throwing the baby out with the bathwater.
I've used Salt, CFEngine, Chef, Puppet, Make, Bash, and many hand-rolled iterations of this approach. I finally threw in the towel and forced myself to come to terms with Ansible and its quirks because I needed the wider community support.
Now with AI tooling, I'm not so convinced the community modules moat is an actual moat. I'm going to very seriously consider porting all my Ansible code to this and see how it feels. I anticipate I'll be much happier after the change.
Do you have any plans to integrate with/build on other communities modules? i.e. even if it's not perfect, being able to call Ansible or Salt modules from PyInfra would be one way to fill the gap.
I've been down this path, implemented my own version of PyInfra many times over the years. I've used Ansible and my own implementations in anger. The _if param is far far far from the worst offender and it's a natural addition, especially when you are laying out a bunch of unrelated checks into something that looks more like a table.
Basically a flaw of the entire model, where you write code as if executing on a single host, which is then executed on many hosts in parallel, forcing the two-step diff and deploy that causes this.
Funny thing is, since v3 this behaviour (diff, then execute) is even desired, with a yes prompt like Terraform's.
I've used Pyinfra for a few years, great software.
But tbh I didn't see much of it in OP's comment.
It's in the spirit of SaltStack, with full Python throughout, including Mako templating. It has a very simple set of operators, mostly around idempotent file management and shell commands, to do things like restart services.
This enables very fast deploys - small changes on a small number of machines in < 10 seconds.
For anything dynamic and sufficiently complicated, ansible is horrible. Pyinfra is much better.
When you need 6 stanzas to perform a dynamic if/else branch, the underlying system is flawed.
Models can overcome the complexity of Ansible -- I argue that they shouldn't have to. Ansible is a flawed framework.