I'm not sure I get why everything above France would be rendered uninhabitable? The coldest place inhabited by humans year round is Oymyakon https://en.wikipedia.org/wiki/Oymyakon
Temperatures there are generally above 0°C in summer and around -50°C in winter.
Will an Ice Age actually be worse than that?
I would expect somewhat better, although maybe not by much. I might expect Denmark and the southern parts of Sweden and England to reach 10°C in summer and -20°C in winter. But that is of course just a guess on my part, so I am certainly willing to hear that I have guessed wrong.
Yeah, but I would think that is still survivable, unless it comes like in that one dumb movie where the ice age is a super quick one and everything happens in the space of roughly 24 hours.
Of course I'm thinking survivable with the magic of "technology", and maybe I'm adding wishful thinking to this science fiction scenario, but I'm not sure the result of a new ice age would be the same as the last one.
Survivable is a strong word. We can survive for a long time huddled around breeder reactors. IMO the better question is how many of the affected people would try to migrate to better areas and how much firepower they bring with them when they’re not welcome.
You'd need to jack habitations up by a couple of meters each year for centuries if that one km of ice sheet builds up gradually (at a couple of meters per year, a kilometer takes roughly 500 years) :D Probably survivable, but in yurts instead of fully furnished flats with amenities.
Maybe a "percentage chance of solving the puzzle" tracker that updates a bit slowly and randomly, so you don't necessarily know right away that you made a mistake. It would have to be a bit weird, though: when you start, you're not at a 100% chance of solving the puzzle.
The reason it's better is that with search you have to narrow down to one specific part of what you're trying to do at a time. For example, if you need a unique-ID-generating function as part of a task, you first search for that; then, if you need the output rendered as a responsive three-column layout, you search for that; and then you write glue code to combine the pieces into what you need. With AI you can ask for all of this together, get something close to what those search results would have been, and then write the same glue code and fixes you would have written anyway.
It trims the time requirement: for a bit of functionality you might have run four searches for, you save roughly the cost of three of them.
It does, however, remove a side benefit of searching: you see the various results, and sometimes find that a secondary result is better. You no longer get that. Tradeoffs.
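As a toy illustration of the workflow described above (all names here are made up for the example): two pieces you might have searched for separately, plus the glue code that combines them.

```python
import uuid


def make_id() -> str:
    # Piece 1, what the first search might have turned up:
    # a unique-ID generator.
    return uuid.uuid4().hex


def to_columns(items, ncols=3):
    # Piece 2, what the second search might have turned up:
    # distribute items round-robin into n columns.
    cols = [[] for _ in range(ncols)]
    for i, item in enumerate(items):
        cols[i % ncols].append(item)
    return cols


# The "glue code" you'd write either way: tag each item with a
# unique ID, then lay the tagged items out in three columns.
tagged = [(make_id(), name) for name in ["a", "b", "c", "d", "e"]]
layout = to_columns(tagged)
```

The point isn't the code itself, it's that search forces you to fetch `make_id` and `to_columns` in separate trips, while an AI can hand you both at once and leave only the glue to you.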
Take the mole example as standing in for any physical characteristic hidden by clothing that people want to remain hidden. It's an example to demonstrate that the AI is not "undressing" anybody: it is extrapolating pixels that have no clear relationship to the underlying reality. If you have a hidden tattoo, that tattoo is still not visible.
This gets fuzzy because literally everything is correlated: it may be possible to infer that you are the type of person who might have a tattoo there. But Grok doesn't have access to anything that hasn't already been shared. Grok is not undressing anybody, and the people using it to generate these images aren't undressing anybody; they are generating fake nudes that have no more relationship to reality than someone taking your public blog posts and attempting to write a post in your voice.
Sure, but if I make a fake picture of someone having sex with a horse, and someone else confirms "my gosh, that's really them! I recognize that mole", then I suppose the damage is the same.
At any rate, where some of this stuff is concerned (fake CSAM, for example), it doesn't matter that it is "fake": fakes of such material are also against the law, in some places at least.
If the problem is just the use of the word "undressing", I suppose the word is being used purely by analogy; nobody expects that Grok is actually going out and undressing anyone, as the robots are not ready for that task yet.
> To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.
Copyright is not “you own this forever because you deserve it”, copyright is “we’ll give you a temporary monopoly on copying to give you an incentive to create”. It’s transactional in nature. You create for society, society rewards you by giving you commercial leverage for a while.
Repeatedly extending copyright durations from the original 14+14 years to durations that outlast everybody alive today might technically be “limited times” but obviously violates the spirit of the law and undermines its goal. The goal was to incentivise people to create, and being able to have one hit that you can live off for the rest of your life is the opposite of that. Copyright durations need to be shorter than a typical career so that its incentive for creators to create for a living remains and the purpose of copyright is fulfilled.
In the context of large language models, if anybody successfully uses copyright to stop large language models from learning from books, that seems like a clear subversion of the law – it’s stopping “the progress of science and useful arts” not promoting it.
(To be clear, I’m not referring to memorisation and regurgitation like the examples in this paper, but rather the more commonplace “we trained on a zillion books and now it knows how language works and facts about the world”.)
Duration is one way copyright was perverted, but the other direction was scope. In 1930, Judge Hand wrote in Nichols v. Universal Pictures:
> Upon any work...a great number of patterns of increasing generality will fit equally well. At the one end is the most concrete possible expression...at the other, a title...Nobody has ever been able to fix that boundary, and nobody ever can...As respects plays, plagiarism may be found in the 'sequence of events'...thus trivial points of expression come to be included.
And since then a litany of judges and tests expanded the notion of infringement towards vibes and away from expression:
- Hand's Abstractions / The "Patterns" Test (Nichols v. Universal Pictures)
- Total Concept and Feel (Roth Greeting Cards v. United Card Co.)
- The Krofft Test / Extrinsic and Intrinsic Analysis (Sid & Marty Krofft Television Productions v. McDonald's Corp.)
- Structure, Sequence, and Organization (Whelan Associates v. Jaslow Dental Laboratory)
- Abstraction-Filtration-Comparison (AFC) Test (Computer Associates v. Altai)
The trend has been to make infringement more and more abstract over time, but this makes testing for it an impossible burden. How do you ensure you are not infringing any protected abstraction, at any level, in any prior work? Due diligence has become too difficult.
Actually, plenty of activists, for example Cory Doctorow, have spent a significant amount of effort discussing why the DMCA, modern copyright law, DRM, etc. are all anti-consumer and how they encroach on our rights.
It's late so I don't feel like repeating it all here, but I definitely recommend searching for Doctorow's thoughts on the DMCA, DRM and copyright law in general as a good starting point.
But generally, the idea that people are not allowed to freely manipulate and share data that belongs to them is patently absurd and has been a large topic of discussion for decades.
You've probably at least been exposed to how copyright law benefits corporations such as Disney, and private equity, much more than it benefits you or me. And how copyright terms have been extended over and over by entities like Disney just to keep their beloved golden geese out of the public domain as long as possible; far, far longer than the original spirit of the copyright act intended.
As I understand it, Lean is not a general-purpose programming language; it is a DSL focused on formal logic verification. Bugs in a DSL are generally easier to identify and fix.
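For a flavor of what verification-focused Lean code looks like (a toy example of my own, not from the work under discussion): a theorem is stated alongside its proof, and the compiler rejects the file unless the proof actually checks out.

```lean
-- A trivial machine-checked theorem: addition on natural numbers
-- is commutative, proved by appealing to the standard library lemma.
-- If the proof term didn't match the statement, compilation would fail,
-- which is why whole classes of bugs can't survive in Lean code.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```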
It seems one side of this argument desperately needs AI to have failed, while the other side is just saying that it probably worked but isn't as important as presented: that it's really just a very cool working methodology going forward.
What percentage of people on Ozempic etc. are poor enough that they would be priced out of healthier food?