I used to think this, until I tried it. Now I see that it effectively removes all the tedium while still letting you have whatever level of creative control you want over the output.
Just imagine that instead of having to work off of an amorphous draft in your head, it really creates the draft right in front of you in actual code. You can still shape and craft and refine it just the same, but now you have tons more working memory free to use for the actually meaningful parts of the problem.
And, you're way less burdened by analysis paralysis. Instead of running in circles thinking about how you want to implement something, you can just try it both ways. There's no sunk cost of picking the wrong approach because it's practically instantaneous.
Sure, and that goes even for myself. For example, on some projects I might be more interested in exploring a particular architectural choice than in the details of the feature itself. It ultimately doesn't matter; the point is that you can choose where to spend your attention, instead of being forced to always go through all the motions, even for things that are just irrelevant boilerplate.
Fortunately, at least in Europe, there are definitely companies still around who either don't force the usage of slop machines or even have a culture of rejecting them completely (yes, that's a thing, and I'm glad to be working at such a company).
Not really anymore. Wine, Proton, VKD3D, etc. have improved so much that the native gaming experience on Linux is more than viable. The Steam Deck might be the best example of that.
I'm very happy that for the last few years I haven't had to keep a horrid Windows install around just to play a game every now and then.
I'm pretty excited about Linux gaming, and it really does seem to work well for single-player games. But online games with anti-cheat (Valorant, for example) often don't run at all.
This is the first draft of the Microsoft Virtual GPU (vGPU) driver. The driver exposes a paravirtualized GPU to user mode applications running in a virtual machine on a Windows host. This enables hardware acceleration in environments such as WSL (Windows Subsystem for Linux) where the Linux virtual machine is able to share the GPU with the Windows host.
So this isn't actual "DirectX on Linux", just a driver for a virtual GPU exposed to WSL guests, enabling them to use DirectX directly, more or less.
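For the curious: per the patch series, the guest-visible surface is basically just a new device node. Here's a quick probe sketch; the /dev/dxg name is what the patches describe, but treat it as an assumption rather than something verified here:

    # Hedged sketch: checks for the vGPU device node the dxgkrnl patches
    # are said to expose inside a WSL guest. The /dev/dxg path is assumed
    # from the patch series.
    import os

    try:
        fd = os.open("/dev/dxg", os.O_RDWR)
        print("vGPU device node present")
        os.close(fd)
    except OSError as e:
        print("no /dev/dxg:", e)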
No, if you read the blog post on Microsoft's site it goes into more detail:
This is the real and full D3D12 API, no imitations, pretenders or reimplementations here… this is the real deal. libd3d12.so is compiled from the same source code as d3d12.dll on Windows but for a Linux target. It offers the same level of functionality and performance (minus virtualization overhead).
This sounds like an attempt to win back the ML segment from Linux to Windows. There are three letters in the back of my mind that sound like screaming, but it's too early to tell, unfortunately.
It is an extension of the capability of WSL, giving you that sweet convenience of the DirectX API with your existing ML project. Of course, this extension makes your project incompatible with desktop Linux once adopted.
Not necessarily. It is not possible to access the GPU in WSL at all right now, so I need to dual boot, which also means dealing with Linux desktop compatibility issues on my laptop.
As long as I can use the same ML framework (without DirectX API), then this poses no compatibility issues at all. It just means I can develop & run my ML code in WSL.
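For what it's worth, "no compatibility issues" just means the app-level code never changes; only the framework's device probe does. A minimal sketch (assuming a standard PyTorch build; the same script runs on a Linux server, in WSL, or on a workstation):

    # Same code everywhere; the framework decides what "cuda" maps to
    # on the machine it's running on.
    import torch

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model = torch.nn.Linear(128, 10).to(device)
    batch = torch.randn(32, 128, device=device)
    print(model(batch).shape, "on", device)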
It's way more of a pain, due to a lot of legacy constraints in Windows. It's easy to overflow pathnames and command lines, which then get silently truncated. Doing ML dev work is for sure easier in a Linux env than a pure Windows env. It's not impossible on Windows, but definitely much nicer in Linux.
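To make the truncation point concrete, here's a rough illustration of the classic MAX_PATH (260 character) limit; the paths are made up, and actual behavior depends on the Windows version and the LongPathsEnabled registry setting:

    # Illustrative only: builds a path well past 260 chars. On older
    # Windows configurations creating it fails (or some tools silently
    # truncate it); on Linux the same call just works.
    import os

    deep = os.path.join("C:\\tmp", *["very_long_directory_name"] * 15)
    print(len(deep))  # comfortably past 260 characters
    try:
        os.makedirs(deep, exist_ok=True)
    except OSError as e:
        print("long-path creation failed:", e)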
Except your ML project is accessing the DirectX API through another cross-platform API layer (CUDA). And the purpose of running WSL is that you ultimately hope to deploy to Linux servers (which Microsoft hopes will be on Azure).
The thing with WSL is that it doesn't win back the ML segment from Linux to Windows for production workloads. This is just a developer workstation friendly move that explicitly doesn't tie you to Windows itself.
It seems like Microsoft is continuing to see Linux as a production server target while positioning Windows to remain relevant as a workstation OS.
Ostensibly, this is a move to compete with macOS and not Linux.
I'm not too sure about that; what about the support for DX12 in WSL? You can't deploy that to production outside WSL right now as far as I'm aware, unless they add DX12 support to Azure Linux VMs...
Given that the main use case is stuff like running machine learning tools, the only people directly targeting DX12 on WSL will probably be framework/library developers, who might want to add DX12 as one of the graphics systems they support. If that's the case, the vast majority of developers won't notice any difference other than more software supporting graphics acceleration when run in WSL (or, another way of looking at it, when Linux is running on the WSL "hardware"/"platform").
Sure, nothing stops you from targeting DX12 directly in your application code, but why would you do that? At that point, you'd just target Windows since your users would have to be running it anyway.
> I'm not too sure about that, what about the support for DX12 in WSL?
It's an implementation detail -- they're exposing the Windows graphics driver to the Linux system with the most minimal amount of translation and overhead.
You could code directly to it in your Linux application code but it makes no sense to do that. You'd be literally writing a Linux application that can only run under Windows -- the smallest market ever proposed. Instead library/framework developers will add it as another target to improve performance in WSL for generic Linux applications.
No, because developers won't be coding to this DirectX module directly. They'll be using CUDA or OpenGL or using another library or framework for which this is just one of many backend implementations.
Unless you think developers would actually bother coding explicitly for the world's smallest possible market (Linux inside of Windows).
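To make that concrete, here's a hedged sketch of the sort of backend probing a framework might do internally; the probe order and names ("cuda", "dx12", the device paths) are hypothetical illustrations, not any real framework's API:

    # Hypothetical framework-internal backend selection. All names and
    # probes here are illustrative assumptions.
    import os
    import platform

    def running_under_wsl() -> bool:
        # WSL kernels advertise themselves in the release string.
        return "microsoft" in platform.uname().release.lower()

    def select_backend() -> str:
        if os.path.exists("/dev/nvidia0"):          # crude CUDA probe
            return "cuda"
        if running_under_wsl() and os.path.exists("/dev/dxg"):
            return "dx12"                           # the new WSL vGPU path
        return "cpu"                                # portable fallback

    print("using backend:", select_backend())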
Mentioned it in another comment, but I am not sure what stopped people from doing ML without WSL. I don't even know of any ML libraries that don't work on Windows proper.
You're absolutely right, but unfortunately we can't customize HN that way; we need a general approach.
The problem is that no one, not even those of us who spend excessive time here, sees every thread or even every major thread. HN's front page is a tiny aperture given the firehose that flows through it. We added 'past' to the top bar a few years ago to give people a way to catch up on the biggest threads they missed. Unfortunately relatively few people use it. For example, this month about 4% of logged-in users who've viewed the front page have also used 'past'.
The project seems to still be alive and active, at least judging by a recent RFC on LKML [0].
For some more information, Phoronix also recently wrote an article [1] about it, and the GitHub repo [2] seems to be somewhat active (last commit was 06-02-2020).
Ah, you're right, it does appear to still be alive. If anyone from the Popcorn Linux team sees this, please consider updating the outdated portions of your website (and getting an HTTPS certificate). Had I never read this comment, I would have gone on assuming the project was dead.
No, but I'm suggesting it's not a bad idea to use it on Windows. That way if there's ever an improvement made to it (imagine better hardware support or something coming along someday), you'd get it automatically.
Off-Topic:
The site horribly "jitters" when scrolling around; it's pretty much unreadable/unusable.
At least for me with Chrome 74 on Fedora. Firefox 67 is fine.
It's definitely a more enjoyable world this way.