• 0 Posts
  • 68 Comments
Joined 2 years ago
Cake day: June 14th, 2023


  • Enough of that crazy talk - plainly WheeledDeviceServiceFactoryBeanImpl is where the dependency injection annotations are placed. If you can work out what the code does without stepping through it in a debugger, and any backtrace doesn’t have at least two hundred lines of Spring Boot in it, then plainly it isn’t enterprise enough.

    Fair enough, though. You can write stupid overly-abstract shit in any language, but Java does encourage it.



  • Well now. My primary exposure to Go would be using it to take first place in my company’s ‘Advent of Code’ several years ago, in order to see what it was like, after which I’ve been pleased never to have to use it again. Some of our teams have used it to provide microservices - REST APIs that do database queries, some lightweight logic, and conversion to and from JSON - and my experience of working with that is that they’ve inexplicably managed to scatter all the logic among dozens of files, for what might be done with 80 lines of Python. I suspect the problem in that case is the developers, though.

    It has some good aspects - I like how easy it is to do a static build that can be deployed in a container.

    The actual language itself I find fairly abominable. The lack of exceptions means that error handling is threaded through everything, and isn’t necessarily any better than in other modern languages. The lack of overloads means that you end up with multiple definitions of e.g. math.Min cluttering things up. I don’t think the container types are particularly good. And pointers seem to exist solely to let you have nil pointer dereferences - a pointless wart.

    If what you’re wanting to code is the kind of thing that Google do, in the exact same way that Google do it, and you have a team of hipsters who all know how it works, then it may be a fine choice. Otherwise I would probably recommend using something else.


  • I feel that Python is a bit of a ‘Microsoft Word’ of languages. Your own scripts are obviously completely fine, using a sensible and pragmatic selection of the language features in a robust fashion, but everyone else’s are absurd collections of hacks that fall to pieces at the first modification.

    To an extent, ‘other people’s C++ / Bash scripts’ have the same problem. I’m usually okay with ‘other people’s Java’, which to me is one of the big selling points of the language - the slight wordiness and lack of ‘really stupid shit’ makes collaboration easier.

    Now, a Python script that’s more than about two pages long? That makes me question the choice of language. The ‘duck typing’ everywhere makes any code that you can’t ‘keep in your head’ very difficult to reason about.
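    As a contrived sketch of the failure mode I mean (my example, names made up, nothing from the thread): a duck-typed function works right up until a caller hands it something that quacks slightly differently, and the error surfaces somewhere else entirely.

    ```python
    import json

    def load_records(source):
        # Duck-typed: quietly assumes 'source' is anything with a .read()
        # method that returns JSON text. Nothing in the signature says so.
        return json.loads(source.read())

    with open("records.json", "w") as f:
        f.write('[{"id": 1}]')

    with open("records.json") as f:
        print(load_records(f))    # fine: [{'id': 1}]

    load_records("records.json")  # AttributeError: 'str' object
                                  # has no attribute 'read'
    ```

    The traceback for that second call points inside load_records rather than at the caller’s mistake; multiply that by two pages of script and you have the ‘keep it in your head’ problem.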


  • Frezik has a good answer for SQL.

    In theory, Ansible should be used for creating ‘playbooks’ listing the packages and configuration files which are present on a server or collection of servers, and then ‘playing the playbook’ arranges it so that those servers exist and are configured as you specified. You shouldn’t really care how that is achieved; it is declarative.

    However, in practice it has input, output, loops (loop), conditional branching (when), and the ability to execute subtasks recursively (an include_tasks that includes itself). In fact, it can be quite difficult to stop people from using those features, since ‘declarative’ doesn’t necessarily come easily to everyone, and it makes for very messy config. I think those are all the features required for Turing equivalence?

    Being able to deploy a whole fleet of servers in a very straightforward way comes as close to the ‘infinite memory’ requirement as any programming language can get, although you do need basically infinite money to do that on a cloud service.





  • There are two kinds of motion blur, really - camera-based and model-based. Camera-based requires calculating one motion vector for the whole screen, which is basically free. Model-based requires projecting the motion of each vertex of the model in the projected view; one matrix multiply per vertex is not ‘expensive’ on a modern graphics card. Depth of field requires reading the depth buffer, which you’ll have already created as part of rendering, and then taking several ‘taps’ around each point on the screen to calculate the blur from the difference between the ‘focus distance’ and the actual distance (there’s a toy sketch of the tap idea at the end of this comment). The final image post-processing will generally process the whole screen anyway, so you’re just throwing a couple of extra steps in for the two effects.

    Now, what does it save you? If your engine is using TAA (temporal anti-aliasing) then that’s performed by ‘twitching’ the camera a tiny amount (less than a pixel) every frame. If nothing’s moving, then you can merge the last several frames to get a really high-quality anti-alias; all the detail that wouldn’t be caught with a ‘completely static’ camera will be captured, and the result looks great. But things do move; if you recalculate ‘where things were’ then you can get a reasonable idea of what colour ought to be at each pixel. Since we need to calculate all the movement vectors to do that anyway, the same info gives us the motion blur data ‘for free’ - we can add a little blur in post-processing to hide the TAA mistakes (also sketched below), and when implemented well(*) it looks pretty effective. It’s certainly much, much cheaper to calculate than ‘proper’ anti-aliasing like MSAA.

    (*) It is also quite easy to not implement TAA well, and earn the ire of gamers for turning everything into a blurry mess. Doom (2016) does a fantastic job of it - it’s in the engine at a low level - and I’ve never seen anyone complain about that game being blurry or smeared.

    It takes time to load high-quality textures and models from disk, and they eat into the memory budget for each frame. Using lower-quality textures and models for distant objects greatly helps rendering speed and prevents stutter, and a bit of depth-of-field hides the lower-quality rendering with a bit of a smear.

    Now, if your graphics card greatly exceeds the design requirement (which was probably some kind of console) then you can switch these effects off and the game will look even better, which might make you question why they’re there in the first place. To help consoles look better with some ‘cinematic’ effects, is why.
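    To make the ‘taps’ idea above concrete, here’s a toy numpy sketch - my own names and numbers, nothing like a production implementation, which would do this per-pixel in a shader with a proper poisson-disc kernel - but the shape of the calculation is the same: a circle of confusion from the depth buffer, then a blur scaled by it.

    ```python
    import numpy as np

    def box_blur(img, r):
        # Crude separable box blur with edge clamping; stands in for the
        # multi-tap gather a real shader would do around each pixel.
        out = img.copy()
        for axis in (0, 1):
            n = out.shape[axis]
            acc = np.zeros_like(out)
            for d in range(-r, r + 1):
                idx = np.clip(np.arange(n) + d, 0, n - 1)
                acc += np.take(out, idx, axis=axis)
            out = acc / (2 * r + 1)
        return out

    def depth_of_field(color, depth, focus_dist, focus_range, radius=3):
        # Circle of confusion: 0 = in focus, 1 = fully blurred.
        coc = np.clip(np.abs(depth - focus_dist) / focus_range, 0.0, 1.0)
        # The depth buffer already exists; the blur is the only extra cost.
        return color * (1 - coc) + box_blur(color, radius) * coc

    # Greyscale for brevity; a wall receding from 1m to 50m, focused at 10m.
    h, w = 240, 320
    color = np.random.default_rng(0).random((h, w))
    depth = np.tile(np.linspace(1.0, 50.0, w), (h, 1))
    result = depth_of_field(color, depth, focus_dist=10.0, focus_range=8.0)
    ```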
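    And the TAA side of it, similarly sketched (again my own toy version): the per-frame jitter offsets typically come from a low-discrepancy sequence like Halton, and the resolve is an exponential blend of the new frame into the accumulated history.

    ```python
    import numpy as np

    def halton(i, base):
        # Halton low-discrepancy sequence; a common source of TAA jitter.
        f, r = 1.0, 0.0
        while i > 0:
            f /= base
            r += f * (i % base)
            i //= base
        return r

    # Sub-pixel camera offsets for an 8-frame jitter cycle, centred on
    # zero: each 'twitch' is less than a pixel, as described above.
    jitter = [(halton(i, 2) - 0.5, halton(i, 3) - 0.5) for i in range(1, 9)]

    def taa_resolve(history, current, alpha=0.1):
        # Blend the new frame into the accumulated history. A real resolve
        # would first reproject 'history' along the motion vectors and
        # clamp it against the current frame's neighbourhood - skipping
        # that badly is exactly what produces the infamous smearing.
        return (1 - alpha) * history + alpha * current
    ```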





  • Another fantastic project that makes gaming on Linux so much easier. It’s incredibly strong in configurability and ‘robustness’. Yes, you might have to set up all of your Wine bottles and things like that, which can be a faff, but once it’s working in Lutris, it just keeps on working in Lutris.

    Great for long-running series, too. I’ve been a big fan of the XCOM series since the Amiga days; in Lutris, it’s easy to have UFO: Enemy Unknown / Terror from the Deep running in OpenXcom and Apocalypse in DOSBox, sitting right alongside the Firaxis remakes in Steam. Similarly, love me a metroidvania, and I’ve got most of the 40+ Castlevania games lined up and ready to go, just a double-click away.



  • You’ve got that a bit backwards. Integrated memory on a desktop computer is more “partitioned” than shared - there’s a chunk for the CPU and a chunk for the GPU, and it’s usually quite slow memory by the standards of graphics cards. The integrated memory on a console is completely shared, and very fast. The GPU works at its full speed, and the CPU is able to do a couple of things that are impossible to do with good performance on a desktop computer:

    • load and manipulate models which are then directly accessible by the GPU. When loading models, there’s no need to read them from disk into the CPU memory and then copy them onto the GPU - they’re just loaded and accessible.
    • manipulate the frame buffer using the CPU. Often used for tone mapping and things like that, and a nightmare for emulator writers. Something like RPCS3 emulating Dark Souls has to turn this off; a real PS3 can just read and adjust the output using the CPU with no frame-rate hit, but a desktop would need to copy the frame from the GPU to main memory, adjust it, and copy it back, which would kill performance.

  • This, exactly. When we redid our bathroom, we went from “immersion tank” hot water with about three metres of head behind it, to central heating in a closed system, where both hot and cold have the exact same pressure, about thirty metres of head (three metres of water is only about 0.3 bar; thirty metres is about 3 bar). Went from being basically impossible to have a shower, to being an absolute pleasure where nearly the entire range of the tap gives a useful temperature, and it’s got a right blast of pressure behind it too.

    Another alternative would be an electric shower - since you’re just heating up cold water, the pressure is “always the same”. They tend to be a bit pathetic and crap, tho.




  • Money is an emotional thing. Do I believe that this coin / bit of paper / number on a website is something that I can exchange for goods and services? If not enough people believe that, that currency will collapse.

    Mind you, not using money is inefficient at scale. Sending the bag of potatoes that I’ve grown in my garden this month to my internet provider for continued shitposting privileges only goes so far.