The average person in finance has different flaws than this particular douchebag.
I very much agree that tyre dust is a problem, and that weight is a large issue.
However, these kinds of caveats are routinely used to downplay the level of harm reduction that transitioning ICE cars to EVs would bring. Note how right-wing media basically uses this technique - mostly with the emissions associated with making EV batteries - to justify the continued use of ICE cars.
The antidote is to require numbers for this type of claim.
Fwiw, I don’t own any kind of car, I bike and take transit everywhere, and I’m broadly against cars on account of their outsized negative impact on society. I still believe EVs represent a necessary form of harm reduction.
This comment is not useful unless it’s backed up with data on how much this would contribute to relative emissions.
Unless that’s provided, please refrain.
I don’t think I changed the difficulty settings.
Cellulose is generally recyclable, but as I understand it, it degrades with each cycle until it’s basically unfit for recycling and is more efficient to burn for energy.
How do I get into it? I’ve tried and it’s not really sticking, to be honest.
By the power of podcasts, I have become equipped to handle the Sisyphean daily tasks. I used to dread them, now I don’t mind them at all.
I don’t think DeepSeek has the capability of generating code and executing it inline in the context window to support its answers, the way ChatGPT does - the “used” part of that answer is likely a hallucination, while “or would use” more accurately represents reality.
The concern is that the model doesn’t actually see the world in terms of distinct hexadecimal digits, but as tokens of variable size - you can see this using the tiktokenizer web app: enter some text and it will split it into the series of tokens the model will actually process.
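If you’d rather poke at this locally, here’s a minimal sketch using OpenAI’s tiktoken library as a stand-in (an assumption on my part - DeepSeek ships its own tokenizer, so the exact splits differ, but the variable-size chunking behaves similarly):

    # Minimal sketch: inspect how a BPE tokenizer chunks hexadecimal text.
    # Uses tiktoken's cl100k_base encoding purely as an illustration;
    # DeepSeek's own tokenizer will split differently, but not per-digit.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    text = "deadbeef 0x4F 0x6C 0x21"

    token_ids = enc.encode(text)
    pieces = [enc.decode([tid]) for tid in token_ids]

    # Typically prints multi-character chunks rather than one neat token per
    # hex digit - the model never "sees" the digits as separate symbols.
    print(pieces)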
It’s not impossible for the model to work it out anyway, but it is a reason why this type of task tends to be a bit harder for LLMs.
It’s not out of the question that we get emergent behaviour where the model can connect non-optimally mapped tokens and still translate them correctly, yeah.
It is a concern.
Check out https://tiktokenizer.vercel.app/?model=deepseek-ai%2FDeepSeek-R1 and try entering some freeform hexadecimal data - you’ll notice that it does not cleanly segment the hexadecimal numbers into individual tokens.
Still, this does not quite address the issue of tokenization making it difficult for most models to accurately distinguish the individual hexadecimal digits here.
Having the model write code to solve a problem and then asking it to execute that code is an established technique for getting around this issue, but all of the model interfaces I know of with this capability are very explicit about when they are making use of that tool.
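As a toy illustration of what that looks like (the input is hypothetical, not from the original thread): the model emits a few lines of Python, the code-execution tool runs them, and the character-level work happens outside the tokenizer entirely.

    # Hypothetical example of a throwaway script a model might write and then
    # have executed by a code-interpreter tool: once the work is done in
    # Python, the tokenizer's chunking of the input no longer matters.
    data = "48 65 6c 6c 6f"                      # made-up hex from a prompt
    decoded = bytes.fromhex(data.replace(" ", "")).decode("utf-8")
    print(decoded)                               # -> Hello
    print("Strawberry".lower().count("r"))       # -> 3, trivial at this level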
Is this real? On account of how LLMs tokenize their input, this can actually be a pretty tricky task for them to accomplish. This is also the reason why it’s hard for them to count the number of 'R’s in the word ‘Strawberry’.
Well, it’s obviously not going to be the iPad that wins in that case
If there were a real demand for big pockets, there would be money to be made in selling them, and big-pocket brands would dominate.
I think you might be giving a bit too much credit to the industry here.
There used to be pockets on women’s clothes - or more accurately, you tied them on yourself as they came separately from the clothes - but they fell out of fashion as handbags became the fashion statement that said: look - I’m not poor enough to have to have pockets.
Very dumb, but it is what it is.
What baffles me now is that pockets on women’s clothes haven’t made a comeback yet. How asleep at the wheel are fashion designers?
Depends on which generation of MacBook that is. Intel-series? I’m leaning ThinkPad. M-series? It’s gonna have to be the MacBook.
Heavy on the sauce