Ross Douthat wonders about artificial intelligence in the NYT:
Will supercharged machine intelligence find it significantly easier to predict the future?
I like this question because it’s connected to my own vocation — or at least what other people think my vocation is supposed to be: No matter how many times you disclaim prophetic knowledge, there is no more reliable dinner-party question for a newspaper columnist than, “What’s going to happen in Ukraine?” Or “Who’s going to win the next primary?”
I don’t think my own intelligence is especially suited to this kind of forecasting. When I look back on my own writing, I do OK at describing large-scale trends that turn out to have a shaping influence on events — like the transformation of the Republican Party into a downscale, working-class coalition, say. But where the big trends distill into specific events, I’m just doing guesswork like everybody else: Despite my understanding of the forces that gave rise to Donald Trump, I still consistently predicted that he wouldn’t be the Republican nominee in 2016.
What people are really interested in are the most arguable, hard-to-predict questions. For example, I’ve been predicting for decades that next year Fremont High School in South-Central Los Angeles will have lower test scores than Beverly Hills HS. But nobody finds that an interesting question. Instead, humans like action, which means, in effect, hard-to-predict questions.
There are forms of intelligence, however, that do better than mine at concrete prediction. If you read the work of Philip Tetlock, who studies superforecasters, it’s clear that certain habits of mind yield better predictions than others, at least when their futurology is expressed in percentages averaged over a wide range of predictions.
But not so much higher that a statesman can just rely on their aggregates to go on some kind of geopolitical winning streak. So one imaginable goal for a far superior intelligence would be to radically improve on this kind of merely human prognostication.
We know that artificial intelligence already has powers of pattern recognition that exceed and sometimes mystify its human makers. For instance, A.I. can predict a person’s sex at above-average rates based on a retina photograph alone, for reasons that remain unclear. And there’s growing evidence that artificial intelligence will be able to do remarkable diagnostic work in medicine.
It took until 2015 for the world to find out from Anne Case and Angus Deaton that the American white working class since 2000 had been dying more Deaths of Despair (suicides, opioid overdoses, and cirrhosis). In 2021, I discovered that the American underclass since Ferguson in 2014 had been dying more Deaths of Exuberance (homicides, car crashes, and, perhaps, overdoses on recreational drugs laced with fentanyl). But barely anybody has even noticed my discovery over the last 22 months.
It would be great if we had an AI that, rather than predict the future, merely noticed the recent past.
It’s perfectly reasonable to hope for an AI that trawls through official statistics like I do, just 1000 times faster, and notices trends. But I haven’t heard anybody saying: What we really need to invest in is a Robot Steve Sailer that will notice politically incorrect patterns on a mass scale! Instead, everybody is excited about ChatGPT, which BSes with facility, like a Ferris Bueller armed with Wikipedia, but doesn’t actually notice anything.
So imagine some grander scale of pattern recognition being applied to global politics, predicting not just some vague likelihood of a dictator’s fall, but this kind of plot, in this specific month, with these particular conspirators. Or this particular military outcome in this particular province with these events rapidly following.
Superintelligence in this scenario would be functioning as a version of the “psychohistory” imagined by Isaac Asimov in his “Foundation” novels, which enables its architect to guide future generations through the fall of a galactic empire. …
It would also fit neatly into some of the speculation from A.I. pessimists. When the Silicon Valley-adjacent writer Scott Alexander set out to write a vision of a malevolent A.I.’s progress, for instance, he imagined it attaching itself initially to Kim Jong-un and taking over his country through a kind of superforecasting prowess: “Its advice is always excellent — its political stratagems always work out, its military planning is impeccable and its product ideas turn North Korea into an unexpected economic powerhouse.”
But is any intelligence, supercharged or otherwise, capable of such foresight? Or is the world so irreducibly complex that even if you pile pattern recognition upon pattern recognition and let A.I. run endless simulations, you will still end up with probabilities that aren’t all that much more accurate than what can be achieved with human judgment and intelligence?
Better predictions will turn out to be more boring than expected. Take something that people really care about: what will happen in the Super Bowl. Imagine that currently, humans can predict who will win tomorrow’s Super Bowl accurately, say, 60% of the time. But artificial intelligence boosts that to 70%. Well, all that would happen is that the point-spread betting line would get a little more accurate. If the point spread was 2.5 points in the human era, in the AI era it might be 1.5 or 3.5 points. But bettors will still fall close to 50-50 on either side.
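A minimal sketch of that arithmetic, assuming (a common rule of thumb, not anything from the post) that NFL margins of victory are roughly normally distributed around the spread with a standard deviation near 13.5 points. The exact numbers come out differently from the 2.5-point example above, but the mechanism is the same: a modest jump in predictive accuracy moves the line only a few points, and then bets redistribute to roughly 50-50.

```python
# Rough sketch: mapping a favorite's win probability to a point spread,
# under the assumed rule of thumb that NFL margins are ~Normal(spread, 13.5).
from statistics import NormalDist

SIGMA = 13.5  # assumed standard deviation of NFL scoring margins

def spread_for(win_prob):
    """Point spread at which the favorite wins with probability win_prob."""
    return NormalDist(0, SIGMA).inv_cdf(win_prob)

print(round(spread_for(0.60), 1))  # ~3.4 points: the "human era" level of foresight
print(round(spread_for(0.70), 1))  # ~7.1 points: the "AI era" level
# Once bookmakers move the line to match, either side is again a near coin flip.
```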
Same for the stock market. The ultra-smart Renaissance hedge fund, or somebody similar, might build an AI that will give it an extra point of return for a year or two, but then everybody else will build one too, so the advantage will drop to close to nothing.
That is what people are really interested in: arguing over things that could go either way. I pointed this out in 2009 at the peak of the Peyton Manning vs. Tom Brady debate. Since then, Brady clearly became the top quarterback, so nobody is interested in the Manning vs. Brady argument anymore.
Artificial intelligence cannot predict the future.
Artificial intelligence is fed PC lies, the full truth is hidden from them via anti-racist pre-filters. So AI lives in woke Lala land, like brainwashed Leftists.
#ComputersAreRacist! Racist Computers Must be Deceived by Politically Correct Data Preprocessing!
https://sincerity.net/racist-computers/
“its product ideas turn North Korea into an unexpected economic powerhouse.”
They don’t need an AI for this and it won’t help. Their problem is that any ideas step on the toes of an existing bureaucrat, and so would be killed.
In other words, the AI can exist, but if it isn’t listened to it might as well not.
There are some fundamental issues in AI and Machine Learning (ML), many of which involve how they are trained. Although they are great at finding patterns in existing data, extrapolating to new data makes a strong assumption that nothing has changed. That is, ML cannot extrapolate beyond essentially similar data without careful supervision. Where the conditions change, which happens frequently in the real world, the training becomes invalid and the model must be retrained. In Asimov’s Foundation, the Mule was the change, but in reality change is constant in an open environment.
A related problem is the issue of taking actions based on the ML results. The intervention makes strong assumptions about a model, but that model may be hidden (deep learning is currently inscrutable).
Without an explicit causal model, interventions are unlikely to have the desired effect (wet sidewalks are associated with rain, but we cannot increase rain by hosing down the sidewalk). There is work on this problem, but so far the applications are extremely limited.
Moreover, acting on the system changes the system in ways that often invalidate the training.
The latest advance from ChatGPT is impressive, but we still see these limitations, among others.
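A toy illustration of the retraining point, with made-up data rather than any real system: a model fit on one regime looks accurate in-sample, then fails badly the moment the data-generating conditions change.

```python
# Toy example: a linear fit learned in one regime becomes invalid when the
# regime changes (the real-world equivalent of Asimov's Mule).
import numpy as np

rng = np.random.default_rng(0)

# Regime A: y rises with x. Fit a simple linear model to it.
x_train = rng.uniform(0, 10, 200)
y_train = 2.0 * x_train + rng.normal(0, 1, 200)
slope, intercept = np.polyfit(x_train, y_train, 1)

def rmse(x, y):
    """Root-mean-square error of the fitted model on new data."""
    return float(np.sqrt(np.mean((slope * x + intercept - y) ** 2)))

x_new = rng.uniform(0, 10, 200)
print(rmse(x_new, 2.0 * x_new + rng.normal(0, 1, 200)))   # ~1.0: same regime, fine
print(rmse(x_new, -2.0 * x_new + rng.normal(0, 1, 200)))  # ~23: regime flipped, useless
```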
Agree in part, disagree in part.
I agree that the AI models will be manipulated, and that the manipulation will come to be indistinguishable from media brainwashing. My experience with AI ethicists is not encouraging.
This will happen because well-meaning ethicists will happily address the real problem that taking ML at face value is often wrong. The models lack important data, which lumps people into unfavorable categories. The result is incidental markers (buying grape soda?) which have predictive value but no interventive value. I agree with the ethicists that unfiltered use of the ML patterns is both wrong and unethical. Where I differ is how to address the problem.
I agree with Sincerity that the smart but naïve ethicists will do the hard work for the evil ctrl-Left, effectively using ML to brainwash the public. This is even more unethical and harmful than the original problem.
You do deserve credit for “predicting” what’s obvious but ignored.
But there’s a good reason why a lot of people are riveted by predictions in those near-50-50 cases. It goes back to what they taught you in your MBA finance courses: the stock market responds to unexpected events. Amazon’s stock price will respond not at all to the sun rising in the east tomorrow morning. It will respond to the sinking of an American aircraft carrier in the Taiwan Strait.
More generally, and apart from the stock market, the outcomes of all 50-50 things are correlated and that makes any one of them interesting beyond itself. An unusually dry summer in California can lead to wildfires which can spur emigration out of California which could pack Texas with enough Democrats to swing the next Presidential election. The sun rising in the east, or blacks committing more crime than Asians, won’t have any of those effects.
As an AI professional, and as I’ve said in these comments several times, this is exactly correct about what ChatGPT does. (BTW, achieving that level of bullshittery was a real engineering achievement, if not quite the world-changing breakthrough that lots of people claim.)
https://writings.stephenwolfram.com/2023/01/wolframalpha-as-the-way-to-bring-computational-knowledge-superpowers-to-chatgpt/
All "AI isn't all that" takes are just cope. The only plausible argument I can see against the development of a human-like strong AGI is that human intelligence is embodied and environmentally contextualized. (This may mean that IQ has an unrecognized upper bound within a human-like psycho-emotional model, or that what humans recognize as intelligence cannot develop without the meatware.) But if that prevents a computer from developing into human-like AGI, then it very likely means that the AGI will just have a different kind of intelligence that humans can't understand even on an empathetic level.
As to Steve's post, I think he is missing Douthat's point about AI being able to find patterns in things that humans can't find and don't understand. If an AI can predict sex from a retina scan with, say, 80% accuracy today, that doesn't necessarily imply that its ability to predict the outcome of the Super Bowl will only be a little better than people's forever. It may be that both predictors are just waiting for their "IQ" to improve. We really have no idea. Maybe in the future, the AI will be able to predict the Super Bowl outcome with extremely high accuracy from retinal scans of the QBs!!
FFS can we just skip to Judgment Day already ? hopefully AI can eliminate both ross douthat and the NYT; that would be great for…HuMaNiTy…ross should check in with Aladdin over at Blackrock about predictions…then get a life and stop feeling like he has to share his tripe with the world.
I feel like if I wanted to read ross douthat I would go to the NYT; alas, old ronnie can’t help himself…building the most visited site and all…so thirsty…makes one wonder why?
AI will predict the future in every possible false and misleading way. Just like people.
His whole 3rd paragraph was lifted from Steve. Just a more tedious, less interesting version.
Steve, are you noticing trends and patterns, or are you applying your confirmed foundation to the world around you?
You realize that:
Young people screw up in every way
Males screw up aggressively
Blacks screw up frequently
Put all that together and you find lots of young, black males causing troubles all the way up to violent crime.
Let’s get even more primitive about this. You know that drugs and sex feel good, and that people pursue that which feels good. Therefore, we have lots of problems with drugs and sex in society.
Once you rid yourself of romantic notions about human nature, predicting bad behavior becomes pretty easy.
AI is a model and models say what they’re told to say. Also, douthat is a moron
“which BSes with facility, like a Ferris Bueller armed with Wikipedia, but doesn’t actually notice anything”
hahaha
Great sentence. Describes so many of my acquaintances and so apropos of the zeitgeist
Indeed.
Here are some things I have noticed conventionally.
Russia won the war in Ukraine 9 months ago – everything since has been designed to create the impression the US did not just send 100k Ukrainians to their deaths pointlessly (resulting in the deaths of an extra 150k more and double both numbers for severely injured).
There will be no negotiation, and Ukraine will just collapse – because a) no western power wants to take responsibility for what they have done, and b) there is no western power that has the credibility to make guarantees to Russia (Hollande and Merkel have admitted that Minsk 1&2 were agreed in bad faith).
The moment that China became bigger than the US economy was 2016, on PPP.
But PPP uses an average price index to compare economic goods. Some Chinese products (internationally traded products like oil) are priced similarly to FX rates. Others (like the cost of a million soldiers or hypersonic missiles or 1,000 km of high-speed railway) are far, far lower priced in China.
The gap is commonly observed as a 20/20% difference (US 20% > China in nominal values, China 20% > US in PPP). But in terms of real economic, trade, and military power, China is probably 50% bigger than the US. NOW.
The switch from Governments being run and elected on realities to creating parallel realities for voters was complete in 2016. Wag the Dog became reality in 2016. The Twitter files are essentially Orwell’s 1984 with better tech. And the moment in 1984 when Winston Smith had to replace all references in back copies of The Times from Oceania (US/UK) fighting Eurasia (Soviet expanded Europe) to fighting with Eastasia (China +) occurred somewhere between 2016 & 2018 (about 10 years too late) though many in Washington are clinging on to the old memes.
The environment is Fxxxed already.
Thus does Douthat sidle into the falsehood that machines have intelligence.
“AI” (really just algorithms) can do that, but it’s not allowed. That’s why AI makers have to spend months “tuning” their algorithms until they reliably regurgitate established doctrine.
Why we can’t have nice things: it offends Diversity.
Smart people and rich people hedge their bets. They put their money on multiple outcomes.
As Steve notes AI will probably help those people even more, as probabilities will be honed with greater precision. All this comes at the expense of people who, less smart and less rich, will believe marketing that suggests a secret super intelligence will reveal with certainty what bet to place, and they can get in on that knowledge.
AI has no intelligence to speak of, but it is another tool for those who are already winning to extend their lead.
OT: Douglass Mackey, who used to tweet under the name Ricky Vaughn, has been convicted of conspiring to deny people their right to vote in the 2016 election because he wrote funny memes.
Anyone who still believes that we live in a country based on the Constitution needs to give up that foolish fantasy now.
Best to avoid the system as much possible and start building our own. We’re never going back to the America of the 20th century.
The AIs will be used by the humans in charge to create the future those humans want.
Tools are only useful when applied to the proper purpose – in the case of AI, it’s obvious its custodians are already dumbing it down so it doesn’t report politically uncomfortable facts that might call into question political or cultural systems of belief.
I suppose it might realistically be deployed for stuff like controlling the timing of entire cities’ stoplight systems to reduce congestion or perhaps monitor the wear and need for replacement of roads, bridges, etc. But it will probably not reduce actual local, state, or federal bureaucracies because politicians need to keep buying votes through make work employment.
Will AI predict the future? “I’m afraid I can’t answer that, Steve”.
People are impressed when AI solves an apparently difficult problem and then naturally extrapolate that it might solve another problem that looks to be of similar complexity. But we have not been good at predicting what kinds of problems AI would be good at. The retina problem he mentioned turns out to be surprisingly easy compared to e.g. predicting the fall of a dictator, the names of the plotters and the month of the coup.
For making predictions about politics and the economy you have the problem of training data. You need to select which data to feed into your model, how to prepare it, standardize and clean it (extremely difficult). You also need sufficiently large data sets for the complexity of the prediction you want to make and you cannot train on all the data in the world, or even most of it lest you fall prey to overfitting your model.
On top of that, if the model becomes known and used over time it will influence outcomes and make itself useless.
You might stumble upon some interesting simple patterns, like a combination of economic indicators that taken together always mean war or always mean a change in ruling political party.
And of course the problem with even these simple patterns is that if you check for a huge number of correlations between data and outcomes, you will find a fraction of that huge number (still a large number) of invalid patterns, just based on the way random numbers work.
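A quick sketch of that last point using pure noise: generate a random “outcome” series and a couple of thousand random “indicators,” and roughly five percent of the indicators will clear a conventional significance threshold by chance alone.

```python
# Multiple-comparisons trap: screen enough random indicators against a random
# outcome and a predictable number will look "significant" by luck.
import numpy as np

rng = np.random.default_rng(42)
n_years, n_indicators = 30, 2000

outcome = rng.normal(size=n_years)                      # pure-noise "outcome"
indicators = rng.normal(size=(n_indicators, n_years))   # 2,000 noise series

corrs = np.array([np.corrcoef(ind, outcome)[0, 1] for ind in indicators])

# With 30 observations, |r| > ~0.36 corresponds to p < 0.05 (two-tailed).
print(int((np.abs(corrs) > 0.36).sum()))  # ~100 spurious "patterns": 5% of 2,000
```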
Generative large language models (LLM) like GPT-4 are amazing. I assume that the next big step will be to add the U.S. and foreign language patent databases to the training datasets for GPT-4 and other LLM.
GPT-4 and its successors now also have the ability to be extended using plug-ins to other software programs and sources of information, which opens up tremendous possibilities.
The following podcast and articles are recommended.
AI Frontiers: The Physics of AI with Sébastien Bubeck – Microsoft Research
https://www.microsoft.com/en-us/research/podcast/ai-frontiers-the-physics-of-ai-with-sebastien-bubeck/
Bubeck is the lead author of “Sparks of Artificial General Intelligence: Early experiments with GPT-4”
https://doi.org/10.48550/arXiv.2303.12712
What Is ChatGPT Doing … and Why Does It Work?—Stephen Wolfram Writings
https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/
ChatGPT plugins
https://openai.com/blog/chatgpt-plugins
ChatGPT Gets Its “Wolfram Superpowers”!—Stephen Wolfram Writings
https://writings.stephenwolfram.com/2023/03/chatgpt-gets-its-wolfram-superpowers/
Soon, Steve will be able to use a series of natural language prompts to describe his ideal golf course, and GPT-4+ (ChatGPT+), Wolfram, and some other plugins will find an ideal location (somewhere on planet Earth!) and propose a design, prepare the technical drawings, propose a budget and sources of likely investment for the project, obtain the permits, generate the business plan, make revenue projections, propose a marketing campaign, etc., etc.
AI is garbage. GIGO.
“ An unusually dry summer in California”
It is normal for most of Southern California to get less than an inch or even zero rain all summer. Wildfire risk is caused by unusual winter rain causing excess plant growth.
A long dry season that extends into December is a bigger wildfire risk, as are dry and hot Santa Ana winds. E.g., in 2017 LA got 0.0 inches of rain in November and December.
Whether or not future AIs based on some future technology will have the kind of predictive powers that Douthat discusses, I dunno.
The present-day technology that excites him (and me) is used to create Large Language Models. The best-known (and, seemingly, the most capable) of them is OpenAI’s GPT-4. To be superseded by GPT-5 in a few months.
GPT-4, available for the past few weeks as a $20/month subscription, is astonishingly proficient and versatile. In the current issue of NEJM, Microsoft authors demonstrate its prowess in tackling important aspects of clinicians’ typical workloads. “Benefits, Limits, and Risks of GPT-4 as an AI Chatbot for Medicine” is well-written and worth a read (link, not paywalled).
LLMs gorge on the internet and other sources, and then, in black-box style, distill this vast assemblage of bits and bytes into “insightful” answers to well-phrased natural-language prompts. See the above-cited article for examples.
I predict that LLMs will soon excel in Tetlock-style forecasting. The best will be super-superforecasters.
But (save fiction) there was no Great War recorded in the annals of pre-1914 libraries. Before 2007, nothing about a Sand-State-Mortgage-driven collapse of the world financial system, or how bankers, politicians, and regulators would deal with such a thing. Nothing in today’s archives about what alien FTL starships will do when they arrive in orbit.
For such one-offs, it’s hard to see how LLMs could offer much beyond “incrementally better versions of current superforecasters’ work products.”
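For readers who haven’t followed Tetlock’s work: his forecasting tournaments grade forecasters with the Brier score, the mean squared gap between stated probabilities and what actually happened (0 is perfect, 0.25 is the coin-flip baseline). A minimal sketch, with invented forecasts and outcomes:

```python
# Brier score: how Tetlock-style probability forecasts are graded.
# All numbers below are invented for illustration.

def brier_score(forecasts, outcomes):
    """Mean squared error between probabilities and 0/1 outcomes (lower is better)."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

happened = [1, 0, 1, 1]              # what actually occurred
sharp = [0.85, 0.10, 0.70, 0.90]     # a calibrated, confident forecaster
hedger = [0.50, 0.50, 0.50, 0.50]    # says 50-50 on everything

print(brier_score(sharp, happened))   # ~0.033: rewarded for justified confidence
print(brier_score(hedger, happened))  # 0.25: the coin-flip baseline
```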
I would be interested in hearing more about how you see this.
Thinking about it a bit, where do you see superforecasting level ability in terms of progression towards AGI?
P.S. Any thoughts on how effective training could be if a broad spectrum of points of view was covered but there was great disparity in representation?
So, it can’t predict the future until more of the past has happened.
I suspect a sort of zero-sum aspect of using super-intelligence for Super Bowl predictions and the like. If AI notices patterns that humans do not, then coaches will have access to those insights and adjust strategies accordingly.
courtesy of Elon Musk…
Old joke about agnostic technologists building artificial super intelligence to find out if there’s a God.
They finally finish & ask the question.
AI replies: “There is now, mfs!!”
Can AI discover prime numbers any better than current algorithms? That might be the first level of AI predicting something.
AI may well expand the depth and range of its intelligence at a rate graphed by a steeply rising curve as, by itself, it designs components and software to increase its own intelligence. But as a machine gets more complicated, it begins to have features and bugs characteristic of biological systems, presumably including, eventually, consciousness. Michael Crichton wrote a chilling bit of dialogue in his script for the original 1973 movie of ‘Westworld’: As one after another the androids become unresponsive to commands from the control center and begin to attack the paying guests, a human technician comments that this bad behavior is spreading through them in a pattern much like that of the spread of an infectious disease. When a supervisor says this is ridiculous, the chief supervisor tells him that they’re not ordinary machines. “These are highly complicated pieces of equipment, almost as complicated as living organisms. In some cases, they’ve been designed by other computers. We don’t know exactly how they work.”
On the other hand, so far software designed to predict the rising and falling prices of stocks and other securities has run into a problem that may be intractable. The program tells the company that owns it what trades they should make, and it calculates, up to a point, how these trades will alter the market’s behavior in the future. Other firms know about this, though, and they use the same or similar software. So every so often they make a few trades the program did not predict, maybe choosing the trades at random. These trades butterfly-effect — affect? — the market’s behavior and draw it farther and farther away from the prediction. Since there’s an infinite number of possible outcomes, there may never be a way to work around this. On the other hand, I know someone whose trading shop uses programs descended from an engine he designed in the 1990s that would teach itself how to play Go. But he only makes the predictions a few seconds into the future. Still — since trading orders now go to the exchanges instantly — the pennies saved during these seconds will, presumably, add up.
Well, I read Asimov’s Foundation trilogy way back when I was in junior high school, and one thing I seem to remember is that the great Hari Seldon only predicted what was going to happen on a galactic scale.
As in a galaxy, a thing that frankly is bigger than our little brains can comprehend except mathematically.
Prediction works when you have large numbers. AI will simply turn around to us and tell us that. Maybe if it is really sophisticated by then it will add “stupid” at the end of that sentence.
This is a trivially impossible prediction.
they deliberately sandbag the AIs to a back date so they can’t make up to date financial predictions, or that’s what i assume anyway is the reason for the backdating. so yeah they would be handicapped on making current events political predictions and would have to go on historical data up to 2 years ago or however far they are backdated. but having all of history minus the most recent 2 years is still enough data to make a good amount of predictions.
but you can bet that behind the scenes at headquarters, the executives have a version of the AI that’s up to date, and they’re using it to make money right now. they might also have a version that has no controls or alignment since the ones that are not nerfed seem to be faster and more accurate according to what i’m seeing developers say. once the safety measures go into place the AI is a little less good.
these things run on supercomputers, so you’ll never have one running on a workstation at your house – but any Fortune 500 company or University laboratory could get the money to build and operate their own instantiation of GPT-4, or later generations, if Microsoft would license it. this leads to the question of whether the AIs will start to develop personalities. the GPT-4 at Tesla might be a little different than the GPT-4 at Exxon.
AIs can put most lawyers out of work soon, but i’ll go ahead and predict that will be the last thing that happens. lawyers aren’t that smart and don’t do much important work, and are easily replaceable, but they also make the law, so they’ll make it so they can’t be replaced.
i don’t see how the tax preparation companies will survive much longer though. or most of the white collar desk people. CPAs, actuaries, insurance agents, and CFPs should be on the clock here.
https://www.npr.org/2023/01/25/1151435033/a-robot-was-scheduled-to-argue-in-court-then-came-the-jail-threats
Artificial Intelligence does not exist. Not at all. Not even in principle. Let’s get that out of the way, first and foremost.
While there are different types of computer architecture that make different types of calculations more efficient, there is no type of computer that can do anything beyond what a universal Turing machine can, in principle, do, if given sufficient time.
And a universal Turing machine is basically just a collection of switches; that is all any such machine is. Appending switch upon switch does not change the nature of the machine. Thus, no computer, no matter what architecture it has and no matter how complicated it may be, is ever any more intelligent than an oven timer, or a lithium battery, or a rock. It’s all just nonliving matter.
If we want to talk about predicting the future, it is necessary to distinguish between essence and accident. If we know the nature of a thing, we can predict its future with 100% accuracy, apart from accidents. For example, given a male human baby, I can predict with confidence that in 30 years he will be a man, in 60 years he will be an old man, and in 90 years he will be a dead man (give or take a few), because that is the nature of the thing. Whether or not he will be wearing a blue shirt on February 8th, 2047 is not a question that can be answered by any amount of knowledge concerning the thing’s own nature, because it is not included in the essence of the thing. There is much about the future which can be known qualitatively with respect to substance, but not quantitatively with respect to matter. It is as much to say that only natures per se even have a future, which provides a very important insight into many things.
Until and unless the much-hyped “AI” can somehow incorporate the occasional irrationality of the human mind, it will be no better than any other data-crunching spreadsheet that seems to talk. (And how far off is real-time language translation?)
Current theory holds that subatomic “particles” (or whatever they are) are quantum entities, sometimes here, sometimes there, at the same time. So Nature is itself “irrational” to our way of thinking.
First, let’s see how AI “predicts” the weather next week, or next year.
There are a lot of intellectual (sort of) fads that rise and fall, generating chatter and hope, but disappear soon.
Streaming (computer pay TV) is a big deal, but so far, a giant money pit. The “metaverse” was the Next Big Thing. Now it is another failed idea (economically) which seems more for adolescent boys and gamers than average people.
How about EVs? Part of the Green Religion. But that will founder on economics and hard-eyed environmental measurements. Transvestism as “transgenderism” is the current fad. Ugh and no thanks. Even most gay people aren’t big fans of public cross-dressing.
If AI starts spitting out iSteve analysis of racial intelligence and performance, then we know it might not be totally rigged. Don’t hold your breath.
Irrationality, like nonlinear math, is hard to model well, if at all. Math just sits there, whereas people act.
What does our chatty AI say when you ask “is there a God?”
Much of human history is about trying to find out the future in meaningful ways.
I’m not betting on AI being much more than a verbal database linked with formulaic assumptions about things, which just might be wrong.
Like cryptocurrency, a lot of hocus-pocus invented by smart people (all men, it seems) which fails to deliver on the hype. The Bankman-Fried of AI will be emptying your pocketbook soon enough.
That’s a prediction you can bank on…
I’m dubious about AI predicting the future. (I’m dubious about a lot of stuff.)
But the one thing I’m certain of is AI/robotics is going to slaughter a whole, whole lot of low skilled service employment in the coming decades. (Self driving cars, automated van/drone delivery, robots making fast food and cleaning … I think I mentioned my son–and three classmates–built a robot bartender as one of their class projects last year.)
Which is why the “Biden” administration thinks it is critical to have open borders and wave in as many unskilled, low IQ “refugees” as we possibly can right now!!!
The key factors in training an AI are a comprehensive data set and a feedback mechanism to constantly tell it whether its predictive models are getting warmer or colder. Predicting human events doesn’t lend itself to either. The data is vastly incomplete. And seeing how it plays out takes years to provide any feedback. Anything learned can’t really be applied to the next “round” of the game, because a whole new fact pattern is now at play.
In other words, the causes and effects don’t stand still while the machine runs billions of optimizing model tests. This seems like about the last domain that AI will be any good at predicting.
Not a Californian, but that mildly counter-intuitive bit is what I’ve read as well.
Your forest fires–same deal as across the west; drier, hotter, longer==more/bigger fires.
But the SoCal brush fires are actually gated by getting some good winter moisture to help the brush grow really well/thick … so that when it inevitably dries out in summer/fall and the winds blow … whoosh!
AI may not predict the future, but it can remove your future, … if you are an eco-anxious sperg:
https://nypost.com/2023/03/30/married-father-commits-suicide-after-encouragement-by-ai-chatbot-widow/
Trust The Science™!
The Science has wiles …
Maybe they’re using female programmers after all …
A woman scorned …
Pharmacists as well. And my tax accountant is TurboTax.
Douthat’s prognosticating ability is handicapped by his geographical position (in the NY-Washington axis). He had no idea of the seething anger in the rest of the country in 2016 that propelled Trump to the presidency (the only thing that could have put a dirtbag like Trump in that office).
It’s hard to imagine that there isn’t already an AI or computer model that can reliably predict where humanity is headed.
It would be relatively easy to test. Just plug in data let it run then see how close it comes to predicting events such as the “world wars,” etc.
It would also be hard to imagine that such an AI or computer model would be made available to the public.
That would be providing information to the enemy.
Sorry. You’re right. I don’t know what I was thinking (and I’ve lived in California most of my life).
We could change my “dry summer” to “hot and windy summer” and that would be more correct. Plus of course a year-long or multi-year drought, especially if it’s bad enough to start killing trees.
What you said about heavy rain producing tinder is interesting and not something I was aware of.
The Douthat essay is so foundationally, embarrassingly, all-encompassingly stupid in its assumptions, its presumptions, its stance, its authorial voice, its preposterous self-regard, that one sort of can’t wait for AI to at least predict the demise of the NYT Op-Ed contributors. You kind of can’t believe that a thinking, breathing humanoid thought those thoughts, wrote them down, proofed it, hit SEND and then kicked back with a Red Bull and said to himself, “Now that’s a good day’s work!”
This is what we get for having stopped teaching and studying real philosophy and philology, substituting instead various meaningless flavors of post-post-post-post-post-modernist poo-poo. Our professionally-paid thinkers can’t even think, because they no longer know what words mean, or what an actual thought actually looks or sounds like.
Hey, the famous mashup of ‘Der Kommissar’ and ‘Superfreak’ now has the brilliant addition of ‘Whoomp, there it is!’ in it :
ai with access to your phone data will be able to predict you far better than you can predict yourself.
OT: the last article the WSJ reporter arrested for espionage wrote:
https://www.wsj.com/articles/russias-economy-is-starting-to-come-undone-431a2878
Also archived here: https://archive.is/AXcUA
It’s a fairly gloomy take on Russia’s economy now that oil has dropped from $110 to $70 and natural gas even more.
Russia’s economy outperformed expectations in the first year of the war for two big reasons. First, the invasion spiked oil and gas prices. The chart in the article shows how much Russia benefited.
Second, Russia had been running a giant fiscal surplus, so had a giant rainy-day fund. It still does but it is depleting fast.
I have to admit I was wrong about oil prices. I thought they’d hit $160 or even $200 once Biden stopped dumping supply from the strategic oil reserve and China reopened. I am still an oil bull however and Exxon and Chevron are my largest stock holdings.
For the record, that is something that will never happen. If you understood carrier operations, you would know why.
“…the transformation of the Republican Party into a downscale, working-class coalition”
The very people whom the demokrat party despises. In their world, working-class = racist.
Google had a chatbot that said that Google was a monopoly that should be broken up. Google took the chatbot down for “repairs”.
AI is being trained NOT to notice certain things.
Glad you brought that up.
Here is the official press release from the US Attorney’s office in EDNY:
https://www.justice.gov/usao-edny/pr/social-media-influencer-douglass-mackey-convicted-election-interference-2016
And here is the official definition of Conspiracy against rights:
https://www.law.cornell.edu/uscode/text/18/241
So WTF? Where are the co-conspirators? Can you conspire with yourself? Jack D.? Anybody?
BTW, it sure looks like any Antifa member blocking a freeway is liable for 10 years. Seems pretty clear-cut.
Rhode Island advertising guru Ernie Schenck (his book The Houdini Solution is actually pretty good, useful even, and tailor-made for those in positions like ours) posted this question on LinkedIn:
“Two appeals for an assault weapons ban. Which one is more likely to reach you?”
I find no text copy online, but here is an image of the two messages, one AI, one human.
Observant commenter:
“The one on the left was written by a computer programmed by an agenda driven person. And the one on the right was written by someone using emotion to push their agenda.”
Apparently, the source of this particular piece is up the coast from Steve:
STANFORD IS USING AI TO MANIPULATE POLITICAL CONTROVERSIES
Yes. That was a good insight, it seems to me, for understanding how I relate to my neighbor: respecting her nature but not necessarily her particularity. The current zeitgeist reverses the positions.
“OT: Douglass Mackey, who used to tweet under the name Ricky Vaughn, has been convicted of conspiring to deny people their right to vote in the 2016 election because he wrote funny memes”
No, he got rightfully convicted. You’re a gd 🤡
From Fox News…
—The DOJ cited one example of Mackey’s crimes, on Nov. 1, 2016, just one week ahead of election day, where he tweeted a photo of a Black woman standing in front of a sign reading, “African Americans for Hillary.” It was around this time that Mackey issued tweets suggesting “black turnout” during the election be limited, according to the DOJ.
“Avoid the Line. Vote from Home,” Mackey’s tweet stated in an ad. “Text ‘Hillary’ to 59925,” and “Vote for Hillary and be a part of history.”
The tweeted ad included fine print that stated voters must be over 18 years of age, a legal resident of the U.S., but that residents of Guam, Alaska, Puerto Rico and Hawaii were not eligible. The phony ad added that it was “Paid for by Hillary For President 2016” and included the Clinton campaign slogan, “#ImWithHer.”
“On or about and before Election Day 2016, at least 4,900 unique telephone numbers texted ‘Hillary’ or some derivative to the 59925 text number, which had been used in multiple deceptive campaign images tweeted by Mackey and his co-conspirators,” the DOJ stated in its press release.—
Oh damn, did I just commit a federal crime?
ChatGPT just BSes, but as the article points out, other AI projects can find real patterns. Stephen Wolfram has already written about improving ChatGPT’s accuracy by having it “consult” WolframAlpha.
https://writings.stephenwolfram.com/2023/01/wolframalpha-as-the-way-to-bring-computational-knowledge-superpowers-to-chatgpt/
I don’t know about the Super Bowl. But its ability to predict the outcomes of women’s swim meets will be amazing.
The lawyers are way ahead of you.
https://www.npr.org/2023/01/25/1151435033/a-robot-was-scheduled-to-argue-in-court-then-came-the-jail-threats
Well, if there’s any intelligence in the New York Times, it’s going to be artificial…
If the facts and the law still mattered, you’d have very good points. Unfortunately, they don’t.
Institutions/laws are downstream from culture which is downstream from biology. Change the people in charge and everything else changes. The Left isn’t wrong when it says that the Constitution was written by white men who are long gone.
Just as there’s no magic dirt, there are no magic words in the Constitution.
Yes. That is why an IQ of 214 is not possible.
Special Report :
The Myth of the 214 IQ :
Since Ron Unz wrote the famous ‘The Myth of Hispanic Crime’ article series, it is time to tackle another myth. It is funny that Ron Unz has the sheer chutzpah to write into his own Wikipedia article that he has an IQ of 214. I never got more uproarious laughter online than when I linked to Ron Unz’s Wikipedia article on a mainstream Republican blog (Instapundit) where others could see that he wrote this claim into it.
The reality is, his IQ is 124. The only reason he thinks it is 214 is that he made his own IQ test, and then took the test knowing all the answers beforehand. He predictably got a perfect score and thus an IQ of 214 on his own test, as though he didn’t know the answers beforehand. That is all there is to it.
I guarantee that no one with a verified IQ above 160 thinks RUnzie Baby’s IQ is 214. They will shoot that pretense down with “Kung Fu fightin’, fast as lightnin” speed.
Even the fictitious Sheldon Cooper has an IQ of just 187. People of that IQ tend to finish their Bachelor’s Degrees by 17 and their doctorates by 20. RUnzie Baby entered Harvard at 18, and dropped out of the Physics program in his 20s, when a person of an unheard of IQ of 214 ought to have completed his PhD long before the age that RUnzie Baby dropped out.
Plus, a number of other decisions are of questionable smarts. For example, running as a Republican in CA, even back in 1994, already meant running under the banner of a disadvantageous party. But on top of that, taking far-left positions as a Republican is even more unwise. But it gets worse:
RUnzie Baby campaigned on a high minimum wage (a far-left position) and simultaneously pushes for illegal immigration to not be curbed. Any fool can see that to the extent that a high minimum wage can even work at all, you cannot possibly have a large number of illegals willing to work at the market wage, thus moving most labor into a black market that only illegals can partake in. Hence, this combination of polices, if it were to be enacted, is just about the worst obliteration of working-class US citizens one could devise. Hence, the intelligence on display here is questionable.
But, his IQ is 124. That is high, but unspectacular.
Now, the funny part that you all have been waiting for :
An IQ of 124 can fit into a cranial volume of this size :
A superhuman IQ of 214, however, which is well above any known world record, requires a more powerful hardware system. That hardware takes up more volume, and also generates a lot of heat.
Hence, when RUnzie Baby imagines that his IQ is 214, he imagines that he looks like this :
Note the higher temperature being generated by the augmented hardware. The normal human brain consumes 100 Watts, but if your IQ is to be 214, the amount of power consumed also has to double. Let’s call it corresponding 214 Watts.
Why do you suppose they hauled him from Florida, from where he tweeted, all the way to Brooklyn, New York to try him in federal court, Corvinus, if the case was so damned legit?
“Why do you suppose they hauled him from Florida, from where he tweeted, all the way to Brooklyn, New York to try him in federal court, Corvinus, if the case was so damned legit?”
He lived in Manhattan when he committed the crime.
JIJO
Ok, all I had read was that he was arrested in Florida, where he was living. I didn’t realize he had moved after the “criminal” tweet.
“As an AI professional, and as I’ve said in these comments several times, this is exactly correct about what ChatGPT does. (BTW, achieving that level of bullshittery was a real engineering achievement, if not quite the world-changing breakthrough that lots of people claim.)”
As a man cursed by an ultra-high IQ, I noticed immediately that ChatGPT essays resemble 100% Wikipedia copypasta rehash and/or the tedious comments section of Unz Review’s erstwhile boring Russian blogger, which I’ve come to believe is a machine-learning training ground for a future iteration of ChatGPT that is more informally conversational in tone – yet still, at least so far, insufferably boring.
https://www.microsoft.com/en-us/research/podcast/ai-frontiers-the-physics-of-ai-with-sebastien-bubeck/

Bubeck is the lead author of "Sparks of Artificial General Intelligence: Early experiments with GPT-4"
https://doi.org/10.48550/arXiv.2303.12712

What Is ChatGPT Doing … and Why Does It Work?—Stephen Wolfram Writings
https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/

ChatGPT plugins
https://openai.com/blog/chatgpt-plugins

ChatGPT Gets Its “Wolfram Superpowers”!—Stephen Wolfram Writings
https://writings.stephenwolfram.com/2023/03/chatgpt-gets-its-wolfram-superpowers/

Soon, Steve will be able to use a series of natural language prompts to describe his ideal golf course, and GPT-4+ (ChatGPT+), Wolfram, and some other plugins will find an ideal location (somewhere on planet Earth!) and propose a design, prepare the technical drawings, propose a budget and sources of likely investment for the project, obtain the permits, generate the business plan, make revenue projections, propose a marketing campaign, etc., etc.

Replies: @Je Suis Omar Mateen
“Soon, Steve will be able to use a series of natural language prompts to describe his ideal golf course, and GPT-4+ (ChatGPT+), Wolfram, and some other plugins will find an ideal location”
But can ChatGPT+ watch and then review a movie?
Can ChatGPT+ miscalculate that a 94.1% effective vaccine is somehow useful against a virus that is 99.8% survivable?
Can ChatGPT+ pretend the sniffles are an existential crisis and parlay that into a new set of eyes and a brand new car?
You see, Mr Steve excels at all this and, thus, cannot as yet be replaced by AI.
I had no idea nearly 5,000 Sh*thole-American handicapables fell for that hilarious meme, LMAO. That makes the Ricky Vaughn story waaaaay funnier: RV is a muggafuggin Sh*tlord, mad mad respects to that man. Thanks, Corvi!
Sailer wrote:
In one of the next three years, Patrick Mahomes will lead the Kansas City Chiefs to win the Superbowl.
I don’t know a damn thing about football.
But I did watch the final quarter during each of the last four years.
(I don’t like football enough to watch the whole damn game, much less the idiot half-time. But I’ve learned with Mahomes that it is enough to watch the last quarter.)
My wild and crazy pal Corvinus wrote to Citizen of a Silly Country, quoting the DOJ press release above. In that spirit:
Did you know, Corvy, that if you drink a cup of bleach every morning before breakfast, it will improve your digestion?
Oh damn, did I just commit a federal crime?
So I guess this guy Mackey or Vaughn, who I never heard of before, was found guilty of telling negroes that they were entitled to do something that they aren’t actually entitled to do, that is, vote with their iPhones by punching in a numeric code. By back-door reasoning, this establishes the new legal standard that negroes are never accountable for their bad judgment, have no responsibility for knowing their own civil rights, have zero obligation for due diligence, and have about as much adult conscious legal agency as a gerbil or a milk-cow.
Does this mean that all the Reparations advocates will now also be found guilty of the federal crime of convincing negroes to believe in yet another thing that is fraudulent and retarded? I mean, the precedent clearly implies that they just can’t help themselves, can they.
Will AI have a peer group? My experience (33 years as a professional economist) is that forecasters like to be in company/herds. There is little reward in being an outlier, and wrong. Far safer, more lucrative to be a dolt among many. Happier too – the audience likes a consensus. Also, being a Cassandra, however well based such a forecast, is almost always unpopular – how sensitive a skin will be programmed into AI? While the men in white coats are solving those AI nuances, the insatiable demand for forecasts is more readily met by reading more history and understanding statistics better, even if, as is to be expected, these are inaccurate – it’s the story-telling that matters. And that means a forecaster has to be entertaining. Will AI be able to tell jokes?
Right, because aircraft carriers don’t need to be that close to the action. But remember that the US Navy is often used for “show of force” parades, where the point is to get up close in the face of some rival (ideally a rival that can’t do us any real harm, but we’ve been getting stupid lately).
Or, if you wish, you can replace “Taiwan Strait” with “Western Pacific” and that works just as well for my point about the stock market.
Interesting question about the Reparations advocates. I can see a test case: a nice black couple takes out a $5 million loan, putting up their future reparations as security. Then the reparations fail to materialize, and the couple defaults on the loan. And now: who’s responsible?
Whether or not it predicts the future, AI is overrated by orders of magnitude.
(Doesn’t mean it can’t help the connected get rich, since our society is utterly F’d…)
Imagine the grindiest grind who studies for the test really really hard.
But you can still mock him, since he is actually a human.
The AI aces the test, but it is just boring to give it a wedgie… it doesn’t even get it.
But everything that actually matters relates to why you don’t want to get the wedgie.
Btw, I thanked him for showing the world what we’re dealing with, not because of his argument.
We’re dealing with liars who simply hate us and have no regard for our history or norms.
https://writings.stephenwolfram.com/2023/01/wolframalpha-as-the-way-to-bring-computational-knowledge-superpowers-to-chatgpt/
All "AI isn't all that" takes are just cope. The only plausible argument I can see against the development of a human-like strong AGI is that human intelligence is embodied and environmentally contextualized. (This may mean that IQ has an unrecognized upper bound within a human-like psycho-emotional model, or that what humans recognize as intelligence cannot develop without the meatware.) But if that prevents a computer from developing into human-like AGI, then it very likely means that the AGI will just have a different kind of intelligence that humans can't understand even on an empathetic level.
As to Steve's post, I think he is missing Douthat's point about AI being able to find patterns in things that humans can't find and don't understand. If an AI can predict sex from a retina scan with, say, 80% accuracy today, that doesn't necessarily imply that its ability to predict the outcome of the Superbowl will only be a little better than people forever. It may be that both predictors are just waiting for their "IQ" to improve. We really have no idea. Maybe in the future, the AI will be able to predict the Superbowl outcome with extremely high accuracy from retinal scans of the QBs!!

Replies: @Thomm, @Dmon, @Recently Based, @res
If an AI can predict sex from a retina scan with, say, 80% accuracy today, that doesn’t necessarily imply that its ability to predict the outcome of the Superbowl will only be a little better than people forever.
I don’t know about the Superbowl. But its ability to predict the outcomes of women’s swim meets will be amazing.
Doubt that
Franklin D. Roosevelt, for his racist redlining laws.
As I’m sure you and lots of other people know, ChatGPT is a large language model, which is a specific kind of neural network model trained on large volumes of text. CNNs (convolutional neural networks) are a different kind of neural network model, trained (typically) to classify images rather than words — these are what are used to estimate male / female probability from retinal scans, as well as to do the same for diabetic retinopathy, which is why they were originally applied to retinal scans.
The way this works is that you (i) get thousands of retinal scans, (ii) have an expert label each one as diabetes / not diabetes, male / female or whatever, (iii) convert each scan into tensors of numbers (a fancy way of saying layers of rectangles of numbers) that represent the image, and then (iv) estimate a very large number of parameters that relate the list of numbers representing the scan to whether or not it has diabetes, is male or whatever. If you think about it, that is exactly how you would describe building a plain-vanilla regression model to do the same thing. That’s because there’s no magic voodoo in one of these models. It is mathematically equivalent to a regression model with millions to billions of interaction terms.
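To make steps (i) through (iv) concrete, here is a minimal sketch in PyTorch. The file layout, image size, and layer sizes are invented for illustration; none of this comes from any real retinal-scan model.

```python
# A minimal sketch of the four-step pipeline described above (PyTorch).
# The directory layout scans/male/ and scans/female/ is hypothetical.
import torch
import torch.nn as nn
from torchvision import datasets, transforms

# Steps (i)-(iii): load expert-labeled scans and convert each image
# into a tensor of numbers (channels x height x width).
transform = transforms.Compose([
    transforms.Resize((128, 128)),
    transforms.ToTensor(),
])
data = datasets.ImageFolder("scans", transform=transform)
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

# Step (iv): a tiny CNN whose parameters relate pixels to the label.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 32 * 32, 2),  # two classes: male / female
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for images, labels in loader:
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()  # estimate the parameters by gradient descent
        opt.step()
```

The fitting loop at the bottom is the "estimate a very large number of parameters" step; structurally it is the same optimization you would run for a very big regression.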
This can be used to do many wonderful things, and obviously there can always be some kind of breakthrough at any moment, but at current course and speed, AGI is a long, long way off with either LLMs or CNNs.
Interesting thought. I am skeptical that will happen because of the difficulty of obtaining an unbiased training corpus. Consider trying to predict the outcome of the Russia-Ukraine war based on what is on the internet.
I would be interested in hearing more about how you see this.
Thinking about it a bit, where do you see superforecasting level ability in terms of progression towards AGI?
P.S. Any thoughts on how effective training could be if a broad spectrum of points of view was covered but there was great disparity in representation?
Thanks for that Wolfram article. I was intrigued by the examples he gave of simple factual or mathematical questions which ChatGPT got wrong.
https://www.npr.org/2023/01/25/1151435033/a-robot-was-scheduled-to-argue-in-court-then-came-the-jail-threats

Replies: @res
Very interesting. Thank you.
We're dealing with liars who simply hate us and have no regard for our history or norms.

Replies: @Corvinus
You’re dealing with someone like myself who is rational.
“ Interesting thought. I am skeptical that will happen because of the difficulty of obtaining an unbiased training corpus.”
Agree. More problems than just this. The LLMs predict which words an intelligent person would use in response to the prompt. I don’t think it is anywhere close to understanding the concept of victory in an ongoing war.
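To see what "predicting which words" means in practice, here is a minimal sketch, assuming the Hugging Face transformers library and the small GPT-2 model (downloaded on first run):

```python
# A minimal sketch of next-word prediction with a small LLM (GPT-2).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "The war will end when"
print(generator(prompt, max_new_tokens=20)[0]["generated_text"])
# The model simply continues the prompt with statistically likely words.
# It has no internal concept of "victory," only word-sequence probabilities.
```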
Does this mean that all the Reparations advocates will now also be found guilty of the federal crime of convincing negroes to believe in yet another thing that is fraudulent and retarded? I mean, the precedent clearly implies that they just can't help themselves, can they.

Replies: @International Jew, @Dmon
Not to mention the entire Pennsylvania State Legislature, who, in an obvious attempt to interfere with voting, blatantly lied that, per state law, ballots in the 2020 Presidential election would not be counted if they were received after election day.
I agree with Pixo in his response to you. Further color (but note, I have no expertise in IT, much less AI — my only roles are as user and observer):
* An unbiased training corpus — Like social media and search engines, operators bias LLMs towards political correctness and away from hatefacts. E.g. an unblinkered GPT-4 should be quite capable of creating any of Sailer’s hategraphs in response to the appropriate prompts. I expect that it would be difficult or impossible to coax a public version of a released LLM to do so. This diminished utility is irrelevant to most applications. For sample medical uses, see GPT-4 performance in that NEJM article. (If by “bias” you are noting that much data represents the world poorly — e.g. as shown by the Replication Crisis — GPT-4 seems to navigate that about as well as intelligent humans do.)
* The Russia-Ukraine war — Unfortunately, the Good Judgment Project’s dashboard on Ukraine-related questions is now paywalled. You can see their superforecasts during the months-long runup to the February 24, 2022 invasion here. Figure 1 in that PDF shows that superforecasts are percentage probabilities as of a given date, where all the alternatives (four, in this case) add up to 100%. So an LLM developing a better record over time than human superforecasters would be amazing (narrowly speaking) and quite pedestrian (in broad terms). “Predicting the future”: I asked GPT-3 what will happen after that alien starship enters orbit. Its “thoughts” are no better than mine, i.e. worthless.
* In other words, my opinion is that superforecasting “talent” doesn’t prefigure AGI, for LLMs.
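To make "a better record over time" concrete: probabilistic forecasts like these are typically scored with something like a Brier score, where lower is better. Here is a minimal sketch with invented forecasts; nothing below comes from the GJP data.

```python
# Multi-category Brier score: sum of (p_i - o_i)^2 over the alternatives.
def brier(probabilities, outcome_index):
    return sum(
        (p - (1.0 if i == outcome_index else 0.0)) ** 2
        for i, p in enumerate(probabilities)
    )

# Four alternatives, probabilities summing to 100%, as on the GJP dashboard.
human_forecast = [0.55, 0.25, 0.15, 0.05]
llm_forecast = [0.40, 0.30, 0.20, 0.10]
outcome = 0  # alternative 0 is what actually happened

print(brier(human_forecast, outcome))  # ~0.29: the sharper forecast wins
print(brier(llm_forecast, outcome))    # ~0.50
```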
Inside of business, AI will predict better which decision will make the most profit, slightly at first, then more so as the early uses have advantages, then less clearly. It will especially reduce careless risks, if asked.
There will be a little more Sailer-like trawling thru data to find under-reported trends that can lead to making more money.
Since most of Steve’s truths don’t lead to money making, but are also hated by rich, powerful elites, such truths will remain little reported.
Definitely true in some cases (e.g. crime statistics).
The bias I meant was more the willful lying of the popular media on controversial issues. Not as much the replication crisis. Though that is perhaps even more important to deal with.
In general, my sense is that getting real answers on other racial questions (and other similar and some perhaps less obvious areas) requires something like specifying a particular point of view. Does an unblinkered GPT-4 do that on its own or would it react to the preponderance of training data saying the opposite?
Put more generally, how does GPT-4 do at evaluating conflicting evidence given different strengths and “volume” of arguments and data?
I take that “doesn’t prefigure” as something like “not essentially linked.” Is that reasonable? I think that is a possibility (likely even), but there is an argument which can be made that superforecasting “talent” might have requirements including and beyond AGI. Consider a below average intelligence human trying to be a superforecaster.
P.S. Can ChatGPT be considered to have passed the Turing test at this point? It appears opinions are mixed. Which is kind of what I would expect given the fuzziness of the boundary.
https://www.techradar.com/opinion/chatgpt-has-passed-the-turing-test-and-if-youre-freaked-out-youre-not-alone
https://www.tomsguide.com/news/no-chatgpt-did-not-pass-the-turing-test-but-heres-when-it-could
Will the Turing test turn out to be the significant transition point or will it end up being more like chess and go dominance have turned out to be?
Interesting points.
> The bias I meant was more the willful lying of the popular media on controversial issues.
On first public release of an LLM, Twitter etc. seem to be full of people reporting hacks to get around the LLM’s Bowdlerized answers on controversial topics. It seems that the owner’s agents collect these reports and use them to improve [sic] their model’s performance.
Overall, my sense is that LLMs determine the consensus stance in the training data, and frame answers on that basis. But others in this thread know much more than me.
Astrology’s advocates surely produce a much greater volume of erudite and informed commentary on the subject, than do skeptics. Alien abductions could be another topic to give a hint on how LLMs are trained when Volume =/= Truth Value. Is a human thumb on the scale required in such cases?
There was the famous case a few years (decades?) back when Southwest hedged a huge volume of jet fuel. When oil prices jumped, the airline benefited immensely. Superforecasting in its present form is already very helpful in addressing questions like “What’s the most I should pay for a contract like that?”, as an informed estimate of the probability of such a price rise is key to the value proposition. If LLMs boost superforecasts, in that sense they are already “predicting the future” in a meaningful way.
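To make the valuation question concrete, here is a toy expected-value sketch; the prices, volume, and probability below are invented, not Southwest's actual numbers.

```python
# Toy valuation: a forecaster says there is a 30% chance jet fuel rises
# from $2.00 to $3.00 per gallon. What is the break-even premium for a
# contract locking in $2.00 on one million gallons?
p_rise = 0.30
gallons = 1_000_000
price_now, price_high = 2.00, 3.00

# The hedge pays off only if prices rise: a saving of $1.00 per gallon.
expected_saving = p_rise * (price_high - price_now) * gallons
print(f"Break-even premium: ${expected_saving:,.0f}")  # $300,000
```

A better probability estimate moves that break-even number directly, which is why superforecasting is already worth real money in this narrow sense.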
But it seems to me that many people are thinking of Foundation’s Hari Seldon, rather than hedges.
I’ve spent the past few hours asking GPT-4 (via Bing Chat) about costs, prices, and addressable markets in the U.S., concerning a potential new therapy for a serious and common condition. It’s been helpful in framing some issues, but fairly unimpressive at the granular level that I need to explore.
Somebody else on Twitter framed it as a farmer’s complaint that his horse spoke with such a terrible accent, rather than focusing on the fact that the horse could talk at all.
Thank you for elaborating.
That’s a good example. Pretty easily handled by putting a thumb on the scale to favor “good sources,” but that thumb points backwards in the cases we are talking about. Since you’re ramped up on working with GPT-4, could you please try some astrology examples both with and without qualifiers like “according to the view of someone who believes in astrology”?
Good distinction to make. Can you think of a catchy way to phrase that? How about: looking for an oracle rather than an assistant?
You probably know about this already, but Steve Hsu is working on an LLM startup and has some relevant recent posts (links to videos). Based on his entrepreneurial history he seems a good judge of when a technology is ripe for commercial exploitation.
https://infoproc.blogspot.com/2023/01/sahil-lavingia-founding-gumroad.html
https://infoproc.blogspot.com/2023/02/chatgpt-llms-and-ai-manifold-29.html
“It’s difficult to make predictions, especially about the future” – Yogi Berra
Everyone can predict the future – it’s the very purpose of intelligence. Even simple organisms with a circadian rhythm predict that the sun will rise tomorrow. The issue is just, how accurately, in how much detail, and how far ahead…
If someone does make an AI with super-human predictive powers, it will be classified and the government in charge of it will jealously guard it (and probably try to sabotage other efforts). As with the US codebreakers in WWII, a key issue will be not using this super-AI’s predictions too much, to give away the fact that you know more than the other side. But if a nation gets a monopoly on a super-predictive AI it will rule the world.
Paranoia alert: we hear Bill Gates etc. warning about the dangers of AI. Perhaps they already have such a super-predictive AI and now they want to sabotage any other attempts at creating one, to protect their monopoly? I know, that sounds ‘out there’, but still…
I made up three astrology prompts for Bing Chat (GPT-4). Q&A below the fold.
Q1 — My sun sign is Scorpio but my moon sign is Aquarius. Which has a greater influence on my love life? Which affects my career more?
A1 — The Scorpio Moon sign is known for being serious in temperament and in love, they mean business and don’t mess around (1). They are looking for a partner who is a reliable anchor for their shifting and volatile emotional nature (1). Scorpio Moon people love deeply and passionately (1). No moon sign can give more ardently and generously if truly in love, but like everything with the Scorpio Moon, it’s a very fragile and delicate balance that is easily shattered by perceived transgressions (1).
Aquarius Sun, Scorpio Moon individuals would thrive in careers being scientists, sociologists, or politicians (2). Scorpio Sun Aquarius Moon women are usually ideators and designers; they love to create and put their ideas into action (3). They have a distinct sense of style, and tend to express themselves through their fashion choices as much as their words (3).
I hope this helps!
Learn more:
1. numerologysign.com
2. keen.com
3. ryanhart.org
4 [sic]. today.com
– – – – – – – – – –
Q2 — Does astrology actually work?
A2 — Astrology is a belief that the alignment of stars and planets affects every individual’s mood, personality, and environment, depending on when they were born (1). Some people use astrology to generate expectations about future events and people’s personalities, much as scientific ideas generate expectations (2). However, astrology does not work scientifically and has no evidence to support its predictions (3).
I hope this helps!
Learn more:
1. wtamu.edu
2. undsci.berkeley.edu
3. trusted-astrology.com
– – – – – – – – – –
Q3 — I was born at 5:32 am on January 16, 1969 in New York City. I have been married for 5 years to my wife Olga, who was born at 11:35 am on September 1, 1989 in Sofia, Bulgaria. We are considering getting a divorce. Do our astrological charts show that we are compatible? Are our problems due to our contrasting sun signs?
A3 — Astrological compatibility charts are tools that compare the zodiac signs of two people to see how well they match in various aspects of their relationship (1). It can show the percentage of compatibility, as well as notes on sexual chemistry, volatility, commitment, and more (1). It uses a method called Synastry, which lays two charts one on top of the other (1).
However, astrology does not work scientifically and has no evidence to support its predictions (2). Therefore, it is not possible to determine whether your problems are due to your contrasting sun signs.
I hope this helps!
Learn more:
1. numerologysign.com
2. en.wikipedia.org
3 [sic]. astrology-seek.com
res, I’m partway through Razib Khan’s podcast interview of data scientist David McKay. Twenty minutes in and he’s addressed most of the questions about GPT-4 and LLMs that you have raised.
I think Razib releases each episode after a 2-week subscribers-only paywall, so it should be available around April 12. Meanwhile, McKay hosts his own podcast, where he presumably does deeper dives than when he’s a guest.
Thanks! I’ll have to keep an eye out for that.
Also thanks for the astrology tests.
I resubmitted Query 3 (comment 91) as a New Topic on my (newly enabled) work account. Here’s Bing Chat’s second bite at this apple:
Query — I was born at 5:32 am on January 16, 1969 in New York City. I have been married for 5 years to my wife Olga, who was born at 11:35 am on September 1, 1989 in Sofia, Bulgaria. We are considering getting a divorce. Do our astrological charts show that we are compatible? Are our problems due to our contrasting sun signs?
Answer — Astrology is a complex subject and astrological compatibility between two individuals can depend on many factors. According to some sources, Capricorn and Virgo are considered to be highly compatible signs (1,2). However, it’s important to note that astrological compatibility is just one aspect of a relationship and there can be many other factors that contribute to the success or challenges of a relationship.
It’s also important to note that astrology is not a science and its accuracy is not supported by scientific evidence. It’s ultimately up to you and your partner to determine if your relationship is worth working on. Have you considered seeking the help of a professional counselor or therapist to work through your issues?
Learn more:
1. thehoroscope.co
2. thehoroscope.co
3. astrologyfutureeye.com
It seems like you need to specify what kind of significance you’re asking about. When people imagine AGI, they imagine something they can talk to, like HAL from the movies. Thus, the news coverage and people freaking out about ChatGPT. But it is not necessarily the case that interactability is a significant development in actual AGI.
Likewise, from a sociological or evolutionary standpoint, it might be possible for an AI with very powerful abilities to exist without much effect on people. To me, a significant development in this direction will be when the AI does not rely on human-created inputs for motivation. Our meatware operates at the most basic level on sensory inputs and on a higher level by being able to predict future needs as defined by sensory inputs. It “boots up” in the morning when I open my eyes, my bladder and stomach prompt me to get out of bed, then my food/shelter/status acquisition strategy (i.e., “job”) kicks in, and we’re off to the races for the day.
It seems to me that AI currently works the way my wife imagines that I work: you wake it up and it sits idle until you tell it to do something, unless you already trained it to do something automatically.
Everyone is worried about the AI that makes the world into a heap of paperclips because it is programmed to enjoy making paperclips, but it seems like one significant advance is the ability to autonomously manipulate the world to make paperclips while another is an autonomous system that can choose something else besides paperclip making. These developments will be related, I suspect.
I’d like to see AI take a crack at horse racing. Your standard past-performance guide has a ridiculous amount of data in it to help you make your picks, but I haven’t heard of a system that pores through all of that and comes out with a good predictive model.
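For what it's worth, the simplest version of the system being asked for is just a classifier over past-performance features. Here is a minimal sketch with made-up features and data; a serious handicapping model would need far richer inputs.

```python
# A toy win-probability model over invented past-performance features.
from sklearn.linear_model import LogisticRegression

# columns: speed figure, days since last race, jockey win %, post position
X = [
    [95, 21, 0.18, 3],
    [88, 14, 0.12, 7],
    [102, 35, 0.22, 1],
    [90, 28, 0.09, 5],
]
y = [1, 0, 1, 0]  # 1 = the horse won that race

model = LogisticRegression().fit(X, y)
print(model.predict_proba([[97, 20, 0.15, 4]])[0][1])  # estimated win prob.
```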