• 8 Posts
  • 323 Comments
Joined 3 years ago
Cake day: July 2nd, 2023

  • There are terminology issues here, in the Lemmy post title, in the article body, and in the article’s TL;DR. Basically, nothing is internally consistent except maybe the OCI Runtime spec itself, although its terminological relevance is a separate issue.

    Lemmy title: Containers are not Linux containers

    Article title: What Is a Standard Container: Diving Into the OCI Runtime Spec

    Both titles imply the existence of non-Linux containers, yet only the latter actually describes the contents of the article, specifically naming the “other” type of container: the “Standard Container” defined by the OCI Runtime spec. As a title, I greatly prefer the latter; the former is unnecessarily antagonistic.

    That aside, the article could really be helped by a central glossary section, as it refers to all of these as containers, without prefacing that these can all validly be called “containers”:

    • OCI-compliant containers
    • Standard containers
    • Linux containers
    • Docker containers
    • Kata VM-based containers
    • Other VM-based containers that have been deprecated

    If the goal was to distinguish what each of these means, the article doesn’t do a great job of it, other than to say “these exist and aren’t Linux containers, except Linux containers are obviously Linux containers”.

    Reframing what I think the article tried to convey, while borrowing some terminology from C++/Python, the OCI Runtime specification defines an Abstract Base Class known as a Standard Container. A Standard Container supports the most minimal functions of starting and stopping an execution runtime. For Linux, FreeBSD, Kata, etc, those containers are subclasses of the Standard Container.

    For the most part, unless your containerized application is purely computational and has zero dependencies on the OS, your container will be one of the subclasses. Essentially zero practical container images can meet the zero-dependency requirement of being a pure Standard Container. So while it’s true that any runtime capable of running the container subclasses could also run a Standard Container, that is of little value in production. Hence my assertion that it’s an abstract base class: it cannot really be instantiated in real life.
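    To make the analogy concrete, here is a minimal Python sketch (the class and method names are my own illustration, not anything taken from the OCI spec):

```python
from abc import ABC, abstractmethod

class StandardContainer(ABC):
    """Toy stand-in for the minimal lifecycle the OCI Runtime spec describes."""

    @abstractmethod
    def start(self) -> str: ...

    @abstractmethod
    def stop(self) -> str: ...

class LinuxContainer(StandardContainer):
    """A practical container: depends on the Linux kernel underneath."""

    def start(self) -> str:
        return "started via Linux namespaces and cgroups"

    def stop(self) -> str:
        return "stopped"

print(LinuxContainer().start())  # the subclass is instantiable and useful

try:
    StandardContainer()  # the base class cannot be instantiated directly
except TypeError:
    print("no practical image is only a Standard Container")
```

    The subclass runs fine, while instantiating the base class raises TypeError, which mirrors the point above: no real-life image is purely a Standard Container.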

    This is the reality of containers: none can abstract away an application’s dependency upon the OS. The container will still rely upon Win32 calls, POSIX calls, /proc, BSD sockets, or whatever else. So necessarily, all practical containers need a kernel layer. Even Kata’s VM-based containers just mean that the kernel is included within the container. Portability in this context just means the kernel version can change beneath you, but you cannot take a Linux container and run it on FreeBSD, not without shims and other runtime kludges.


  • The other commenters have described the challenge, but I’d like to clarify the terminology, since the distinctions might not be obvious. In tech, we generally speak of the separate qualities of being Free (as in: use it however you want) and Open (as in: open to study, reimplement, and extend). If something has both qualities, it’s called Free and Open.

    The most common designation is for software: if it is both Free and Open, then it’s Free and Open-Source Software (FOSS). Examples include the Linux kernel (GPL license) and FreeBSD in its entirety (BSD license). This means you can remake the software and use it how you like.

    For hardware, there’s also the equivalent concept of Free And Open, and that means the PCB design can be remade and used for whatever you want. If you wish to use Free And Open hardware for war or for hobby use, that’s entirely up to you.

    But there’s also the realm of silicon, which is the most esoteric and specialized, and there are a lot fewer Free and Open silicon designs available. For example, the x86 CPU architecture is neither Free nor Open. It is patented, and its logic is proprietary, held as trade secrets by Intel, AMD, and VIA. They document the behavior of registers, but they never publish the silicon designs so that you could make your own at home.

    ARM is slightly different, in that they’ll gladly help you build your own ARM silicon (eg Apple Silicon system-on-chips), but you need to pay them for a license. So it’s neither Free nor Open, because: 1) you have to pay money, and 2) the designs aren’t available for examination until you pay up.

    LoRa silicon is more akin to x86, because they just don’t publish anything except the register behavior. The license to use the LoRa design is baked into the sale price of the chips. And yet still, you at home receive no right to remix or examine that silicon design yourself, short of actual reverse engineering. And even then, there are patents.

    LoRa is neither Open nor Free silicon. And it never claimed to be. Meshtastic and MeshCore use Free and Open hardware and software, but that’s it. You do not have as many rights to the silicon as you do to the hardware and software.


  • The Linux kernel itself doesn’t really express an opinion – it’s a kernel; it enables you to do most things – it’s Docker itself that imposes an opinion. And I say this even after Docker Engine has basically delegated the runtime to containerd. At bottom, Docker has some serious baggage that eventually needs to be addressed, chiefly IMO the sorry state of its networking.

    What was done to make Docker usable initially has reared its ugly head a decade later, such as a focus on supporting only Legacy IP and NAT, with very little regard for IPv6. For example, Docker does do IPv6 today, but only with NAT66 and zero support for DHCPv6-PD upstream routing. This makes it incompatible with how actual v6 networks are set up, where NAT is neither desirable nor necessary. Docker’s idea of networking is so very 1990s that it’s genuinely stifling any improvements beyond the server/client TCP/UDP model.
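    For reference, Docker’s IPv6 support still has to be switched on explicitly in daemon.json, with a statically configured prefix rather than one learned via DHCPv6-PD. The ULA prefix below is an arbitrary example, and depending on the Docker version, the ip6tables key may sit behind an experimental flag:

```json
{
  "ipv6": true,
  "fixed-cidr-v6": "fd00:2026:1::/64",
  "ip6tables": true
}
```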

    All the while, Kubernetes is built atop sensible networking on Linux, and the BSDs have had solid networking primitives for decades. Linux is not the problem, IPv6 is not the problem, BSD is not the problem; it’s just Docker being stuck, thanks to a lack of vision and too many users dependent on the existing behavior.

    Credit where it’s due, Docker images defined as files and stored as artifacts in a central repository are a genuine innovation, and that’s precisely what Bastille brings to FreeBSD jails. So in 2026, with the OCI specification having genericized Docker images, anything that’s Docker-specific is slowly losing relevance.


  • The short answer is that Linux did not approach namespacing from a holistic view, but rather introduced each namespace at the time it was deemed individually useful or necessary. Meanwhile, the BSDs looked at what they had from UNIX (ie chroot) and then thought about the fullest logical extent of that idea. And in doing so, they looked at every kernel interface and added support to namespace (or jail) them all.

    Sure, BSD jails have had their own bugs over the years, but as a design, it’s an incredible testament to building a framework that was ahead of its time by focusing on the fundamentals.

    To be clear, there are sometimes use-cases where Docker containers are run without creating separate namespaces (eg sharing the host’s network namespace), but it’s rarer than the equivalent in BSD jails, where it’s a neutral choice between isolating or reusing namespaces. In that sense, Docker is lightly opinionated, defaulting to all-isolation and making it hard to remove all the layers, if that’s your jam.


  • For pointers in particular, this seems like a good starting point: https://sites.cs.ucsb.edu/~mikec/cs16/misc/ptrtut12/pointers.htm

    As for compiling for old C/C++ versions, fortunately most compilers can be told which standard to target. So you could turn the compiler all the way back to something like C99 (eg `gcc -std=c99 -pedantic-errors`) and it should work, although you’ll have to avoid using modern syntax.

    That said, with regards to compiling for an old platform, be advised that complete and functional toolchains will be harder to come across, and may not even work anymore if they haven’t been kept up. That’s another complexity you may have to deal with, and it will no doubt be more aggravating than working on a modern platform while limiting yourself to older C/C++ standards and graphics libraries.

    Basically, the starting effort is quite high for developing for older targets. Be certain that this is the direction you want to start with.


  • It could make sense, but what would be gained? A geographically-broader mesh sounds nice, until you realize that it means messages will go across the IP link and continue propagating on the other end, tying up the RF spectrum, even for traffic that didn’t need to cross the IP link.

    It also detracts from what a fair number of people use the mesh for: comms without reliance on fallible singular links. Single points of failure are not ideal in a mesh, and an IP link would be adding exactly that.

    Note that Reticulum has a much more developed routing structure, so that flood messages do not propagate everywhere uselessly. In that regard, Reticulum has learned what Ethernet and 802.11 WiFi have known for decades, while Meshtastic finds itself playing catch-up.

    A managed flood is still a flood, so introducing a trunk link will increase the “broadcast domain”, to use Ethernet parlance. For two quiet, small meshes, a link between them might be alright. But for two busy, small meshes, the extra floods are just noise that drowns out traffic.
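    A toy model (my own sketch, not Meshtastic’s actual rebroadcast logic) shows how a bridge widens the flood: every node that hears a message keys up once, on both sides of the link:

```python
def flood_tx_count(graph: dict[int, list[int]], start: int) -> int:
    """Naive flood: every node that hears the message rebroadcasts exactly once.

    Returns the total number of RF transmissions for one message.
    """
    seen = {start}
    frontier = [start]
    tx = 0
    while frontier:
        node = frontier.pop()
        tx += 1  # this node transmits once
        for neighbor in graph[node]:
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append(neighbor)
    return tx

# Two small, independent meshes of three nodes each
mesh_a = {1: [2, 3], 2: [1, 3], 3: [1, 2]}
mesh_b = {4: [5, 6], 5: [4, 6], 6: [4, 5]}

# Bridge them: nodes 3 and 4 now share an IP trunk link
bridged = {**mesh_a, **mesh_b}
bridged[3] = [1, 2, 4]
bridged[4] = [3, 5, 6]

print(flood_tx_count(mesh_a, 1))   # 3: the flood stays within mesh A
print(flood_tx_count(bridged, 1))  # 6: every node in both meshes transmits
```

    Doubling the node count doubles the transmissions for every single message, regardless of whether the traffic needed to cross the link.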


  • This distinction is both illogical and ahistorical. Python is a scripting language that has a compiler. Indeed, any scripting language can be translated into a compilable language and then compiled, a process called transpiling.

    There’s also Java, which definitely compiles down to bytecode, but for a machine that doesn’t physically exist. The Java Virtual Machine is an emulator that runs on supported hardware in order to execute Java programs. Since the Java compiler does not produce, say, x86 assembly, your definition would assert that Java is not a compiled language, despite it obviously having a compiler.
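    CPython makes the same point: its compiler emits bytecode for a virtual machine rather than native assembly, and you can inspect that bytecode directly:

```python
import dis

# compile() invokes CPython's own compiler, producing a code object of bytecode
code = compile("total = 1 + 2", "<example>", "exec")

dis.dis(code)  # prints instructions like LOAD_CONST and STORE_NAME

# The bytecode targets the CPython virtual machine, not any physical CPU
namespace = {}
exec(code, namespace)
print(namespace["total"])  # 3
```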

    As an exercise for everyone else, also have a look at Lisp, a venerable programming language with a compiler; some specially-built machines were even constructed and optimized for Lisp programs, with hardware support for checks that would take longer on other architectures.




  • Code has never been able to be copyrighted. You cant copyright a for loop. I cant create a car class that has properties like make, model, year and copywrite it. Thats never been a thing. Thats why projects are copyrighted. An entire piece of work.

    Every single complete sentence in this quote is factually wrong, under both USA copyright law and international copyright law.

    Copyright accrues the moment that some work is rendered into a fixed format, such as a sheet of paper but also includes a computer text file. Writing a “for” loop as a homework assignment does create copyright. Ten students writing their homework all create their own copyright, even if the result is coincidentally identical. This isn’t even a point of serious doubt in the law: copyright is very much an exercise of provenance, not of bitwise comparisons.

    From when a work is created, every transformation, edit, or addition must all occur within the parameters of some sort of license from the copyright owner, or else an infringement has occurred.

    Two people may stand at the same position at the foot of Mt Whitney in California and set up their own camera, one after another, on the same tripod to take the same frame of the scenery. And under copyright law, each owns the copyright to their own photo. One may decide to sell their photo and copyright to an East Coast newspaper, while the other has theirs committed to canvas. The newspaper may not assert a copyright claim against the canvas owner, and the canvas owner cannot assert a claim against the newspaper.


  • if properly reviewed and it works right, you can’t argue with results.

    The key word is “if”.

    This is rather the crux of the issue: most AI-generated code is not reviewed, let alone reviewed by humans, let alone reviewed by human experts within their expertise. Nor does AI-generated code have a good history of being well-tested to any formal standard of validation (eg ISO), against any defined criteria that aren’t themselves AI-generated and unreviewed. There are outliers, no doubt, which strive to lift themselves above this low bar, though the effort to do so often exceeds the effort to just have the experts hand-write the code instead and then formally validate it, at least as of early 2026.

    Some AI could plausibly be tolerable within an already-functioning software engineering team. But “all code in Trail Mate is 100% generated by AI under human guidance” is an abdication too far.








  • That is an opinion, but it certainly isn’t settled law in any jurisdiction. Indeed, whether some, all, or none of an LLM’s output is ever copyrightable, and under what terms, is the billion-dollar question.

    A project that incorporates code with a shaky legal foundation will find it tough to convince others to contribute, if it’s possible that one day their contributions turn out to have been in vain. The right answer would be to extricate such code upon discovery, like what OpenBSD had to do when the IPFilter license turned out to be incompatible with the project.




  • The only way I’m able to reconcile the author’s title and article to any applicability to software engineers (ostensibly the primary audience in this community) is to assume that the author wants software engineers to be involved further “upstream” of the software product development process.

    Code review answers: “Should this be part of my product?” That’s a judgment call, and it’s a fundamentally different question than “does it work.”

    No, but yes. Against the assertion from the title, bug-finding is very much a potential answer to “does this bug belong in the codebase?”. After all, some bugs aren’t bugs; they’re features! Snide remarks aside, I’m not sure that a code review is the time to be making broader choices about product architecture or market viability. Those should already have been done-and-settled a good while ago.

    Do software engineers make zero judgement calls? Quite the opposite! Engineers are tasked with pulling out the right tool from the toolbox to achieve the given objective. Exactly how and which tools are used is precisely a judgement call: the benefit of experience and wisdom will lean towards certain tools and away from others. But a different group of engineers with different experiences may choose differently. Such judgement calls are made in the here-and-now, and I’m not exactly keen on going back in time to berate engineers for not using tech that didn’t yet exist for them.

    If the author is asking for engineer involvement earlier, well before a code review, then that’s admirable and does in-fact happen. That’s what software architects spend their time doing, in constant (and sometimes acrimonious) negotiation with non-engineering staff such as the marketing/sales team.

    That said, some architectural problems only become apparent when the rubber meets the road, when the broader team is engaged to implement the design. And if a problem is found during their draft work or during code review, that’s precisely the right time to have found that issue, given the process described above where the architects settle on the design in advance.

    If that outcome is not desirable, as the author indicates, then it’s the process that must change. And I agree in that regard. But does that necessarily change the objective of what “code review” means? I don’t think so, because the process change would be adding architectural review ahead of implementation.

    If we’re splitting hairs about whether a broad “review” procedure does or doesn’t include “review of code”, then that’s a terminological spat. But ultimately, any product can only be as good as its process allows. See aviation for examples of excellent process that makes flying as safe as it is.

    Making the process better is obviously a positive, but it’s counterbalanced by the cost to do so, the overhead, and whether it’s worthwhile for the given product. Again, see aviation for where procedural hurdles do in-fact prevent certain experimental innovations from ever existing, but also some fatal scenarios that fortunately no longer happen.

    In closing, I’m not entirely sure what the author wants to change. A rebrand for “code reviews”? Just doing something different so that it feels like we’re “meeting the crisis” that is AI? That’s not exactly what I would do to address the conundrums presented by the rapid, near-uncontrolled adoption of LLMs.