THINK ON THIS!
John R. Houk, Blog Editor
Posted June 30, 2023
I ran into two posts yesterday that stirred me to share. The
two posts do not really have an obvious common relationship except
perhaps one thread: a history of tyranny’s formation leading to today and
tyranny’s future path via technocratic despotism.
FIRST UP: On Substack, Greg Reese posted “The
2 Party System and the Dumbing Down of America,” one of the
greatest expositions I’ve heard or read on how the current 2-Party political
system has failed the Founders’ original designs for American Liberty. Substack
has a nearly 5-minute video to go along with the text; I’ll be embedding Reese’s
Bitchute version, which runs 6:19 because it includes
an ad from Reese’s biggest sponsor, BannedVideo.
NEXT UP is Dr. Joseph Mercola’s 6/29 post, “Shepherds
of the Singularity”. Dr. Mercola expounds on George Orwell’s
final warning video (a 2:27 video posted on Youtube three years ago),
highlighting an impending tyrannical control of human life by Artificial
Intelligence (AI). This is scary Terminator fiction emerging as potential fact.
AND LAST, as a bonus: a 1:48 video
on the creepy Israeli WEF-Transhumanist promoter Yuval Noah Harari entitled, “YUVAL: AI CAN NOW CREATE A
NEW VIRUS BY WRITING THE GENETIC CODE & YOU HAVE AN NEW EPIDEMIC!”.
JRH 6/30/23
Thank you to those who have stepped up! I need Readers
willing to chip in $5 - $10 - $25 - $50 - $100. YOUR generosity is
appreciated. PLEASE GIVE to help me be a voice for Liberty:
YOU CAN ALSO SUPPORT this blog by buying women’s menstrual health products, healthy collagen, vitamin
supplements, and coffee from my wife’s online store: My Store
(please use referral discount code 3917004): https://modere.co/3f9x6xy
Big Tech Censorship is
pervasive – Share voluminously on all social media platforms!
*************************
The 2 Party System and the Dumbing Down of America
If a nation expects to be ignorant and free... it expects what never was and never will be - Thomas Jefferson
By GREG REESE
June 28, 2023
Bitchute VIDEO: THE 2 PARTY SYSTEM AND THE
DUMBING DOWN OF AMERICA
[Posted by Greg Reese. First Published June 28th, 2023 13:57 UTC]
After seven years of violent revolution, our American
founders were well aware that political factions were most often used to divide
and conquer the people. And they knew that the Republic they created would only
last as long as the people could remain educated.
In 1816, Thomas Jefferson wrote: “If a nation expects to be
ignorant and free, in a state of civilization, it expects what never was and
never will be.”
By the end of the Civil War, the two-party system had become the
norm, the Globalist system we face today was born, and the deliberate dumbing
down of the American citizen began with our great-great-grandparents.
In the late eighteen hundreds, the Skinner Pavlovian method
was brought into American schools by Johns Hopkins. These psychological methods
gave teachers the ability to program students’ behavior in the same way that
Pavlov did with dogs.
In 1934, the Carnegie Endowment for International Peace
published the Report of the Commission on Social Studies, which explicitly
stated the goal of eventually taking away people’s land, and noted that most
people would obviously oppose this. The solution was to begin using the school
system to re-condition the minds of children.
In 1976, the bicentennial year of the Declaration of
Independence, 124 Congressmen signed the "Declaration of
Interdependence," which stated: "Two centuries ago our
forefathers brought forth a new nation; now we must join with others to bring
forth a New World Order."
And it pledged to give children special attention in
distributing a common education to suit their goals.
By the nineteen nineties, this globalist dumbing down system
was perfected. And America began exporting it worldwide in what is known as
Outcome-based education.
Starting in 2010, Common Core began in the United States. It
outlined what students were expected to know at each grade level, and enforced
ways to assess those standards.
Charlotte Iserbyt, author of The Deliberate Dumbing
Down of America, has traced most of this agenda back to The
Order of Skull and Bones at Yale, through both Republicans and
Democrats: two wings of the same globalist bird, which understood that
dumbed-down people have a base desire for a simple Dualistic choice.
In 1953, the Rockefeller Foundation funded the Robbers Cave
Experiment, wherein eleven-year-old boys who thought they were signing up for
summer camp were organized into two separate tribes and manipulated into
fighting each other, which was easily accomplished by having a single resource
that the two groups competed for.
The Henri Tajfel experiments of the nineteen seventies
showed that by simply dividing people into two groups, they would naturally
identify with their own group and discriminate against the other.
The basic ego mind is constantly making preferences. No
matter how dumb you are, you have an opinion about everything. And if you can
keep the population dumb enough, and give them two parties to choose from, they
will innately identify with one, and despise the other.
This gives the globalist system the cover it needs to
implement unpopular policies, such as a Central Bank Digital Currency, while
‘We the People’ ignorantly fight each other.
United we stand. Divided we fall. And we’ve been falling for
it for generations.
The American people have been so thoroughly dumbed down that
we think freedom is the ability to choose between two parties working for the
same control system. And we have been made so weak that we are afraid to even
discuss the option of violence, which is most often the only remedy for
tyranny.
But if we were an enlightened people, we could simply unite
together as one and just say no to the tyrants.
The answer to 1984, is 1776.
© 2023 Greg
Reese
The Reese Report HOMEPAGE
SUPPORT/SUBSCRIBE
to The Reese Report
+++++++++++++++++++++++
Shepherds of the Singularity
Analysis by Dr. Joseph
Mercola
June 29, 2023
Youtube VIDEO: Orwell's final warning -
Picture of the future
[Posted by theJourneyofPurpose TJOP. Posted on Apr 17, 2020]
STORY AT-A-GLANCE
· Experts warn artificial intelligence (AI) may destroy mankind and civilization as we know it unless we rein in the development and deployment of AI and start putting in some safeguards
· The public also needs to temper expectations and realize that AI chatbots are still massively flawed and cannot be relied upon. An attorney recently discovered this the hard way, when he had ChatGPT do his legal research. None of the case law ChatGPT cited was real
· In 2022, Facebook pulled its science-focused chatbot Galactica after a mere three days, as it generated wholly fabricated results
· The unregulated deployment of autonomous AI weapons systems is perhaps among the most alarming developments. Foreign policy experts warn that autonomous weapons technologies will destabilize current nuclear strategies and increase the risk of preemptive attacks. They could also be combined with chemical, biological, radiological and nuclear weapons, thereby posing an existential threat
· AI may also pose a significant threat to biosecurity. MIT students have demonstrated that large language model chatbots can allow anyone to design bioweapons in as little as an hour
Will artificial intelligence (AI) wipe out mankind? Could it
create the “perfect” lethal bioweapon to decimate the population?1,2 Might
it take over our weapons,3,4 or initiate cyberattacks on
critical infrastructure, such as the electric grid?5
According to a rapidly growing number of experts, any one of
these, and other hellish scenarios, is entirely plausible unless we rein in
the development and deployment of AI and start putting in some safeguards.
The public also needs to temper expectations and realize
that AI chatbots are still massively flawed and cannot be relied upon, no
matter how “smart” they appear, or how much they berate
you for doubting them.
George Orwell’s Warning
The video at the top of this article features a snippet of
one of the last interviews George Orwell gave before dying, in which he stated
that his book, “1984,” which he described as a parody, could well come true, as
this was the direction in which the world was going.
Today, it’s clear to see that we haven’t changed course, so
the probability of “1984” becoming reality is now greater than ever. According
to Orwell, there is only one way to ensure his dystopian vision won’t come
true, and that is by not letting it happen. “It depends on you,” he said.
As artificial general intelligence (AGI) is getting nearer
by the day, so are the final puzzle pieces of the technocratic, transhumanist
dream nurtured by globalists for decades. They intend to create a world in
which AI controls and subjugates the masses while they alone get to reap the
benefits — wealth, power and life outside the control grid — and they will get
it, unless we wise up and start looking ahead.
I, like many others, believe AI can be incredibly useful.
But without strong guardrails and impeccable morals to guide it, AI can easily
run amok and cause tremendous, and perhaps irreversible, damage. I recommend
reading the Public Citizen report to get a better grasp of what we’re facing,
and what can be done about it.
Approaching the Singularity
“The singularity” is a hypothetical point in time where the
growth of technology gets out of control and becomes irreversible, for better
or worse. Many believe the singularity will involve AI becoming self-conscious
and unmanageable by its creators, but that’s not the only way the singularity
could play out.
Some believe the singularity is already here. In a June 11,
2023, New York Times article, tech reporter David Streitfeld wrote:6
“AI is Silicon Valley’s ultimate
new product rollout: transcendence on demand. But there’s a dark twist. It’s as
if tech companies introduced self-driving cars with the caveat that they could
blow up before you got to Walmart.
‘The advent of artificial
general intelligence is called the Singularity because it is so hard to predict
what will happen after that,’ Elon Musk ... told CNBC last month. He said he
thought ‘an age of abundance’ would result but there was ‘some chance’ that it
‘destroys humanity.’
The biggest cheerleader for AI
in the tech community is Sam Altman, chief executive of OpenAI, the start-up
that prompted the current frenzy with its ChatGPT chatbot ... But he also says
Mr. Musk ... might be right.
Mr. Altman signed an open letter7 last
month released by the Center for AI Safety, a nonprofit organization, saying
that ‘mitigating the risk of extinction from A.I. should be a global priority’
that is right up there with ‘pandemics and nuclear war’ ...
The innovation that feeds
today’s Singularity debate is the large language model, the type of AI system
that powers chatbots ...
‘When you ask a question, these
models interpret what it means, determine what its response should mean, then
translate that back into words — if that’s not a definition of general
intelligence, what is?’ said Jerry Kaplan, a longtime AI entrepreneur and the
author of ‘Artificial Intelligence: What Everyone Needs to Know’ ...
‘If this isn’t ‘the
Singularity,’ it’s certainly a singularity: a transformative technological step
that is going to broadly accelerate a whole bunch of art, science and human
knowledge — and create some problems,’ he said ...
In Washington, London and
Brussels, lawmakers are stirring to the opportunities and problems of AI and
starting to talk about regulation. Mr. Altman is on a road show, seeking to
deflect early criticism and to promote OpenAI as the shepherd of the
Singularity.
This includes an openness to
regulation, but exactly what that would look like is fuzzy ... ‘There’s no one
in the government who can get it right,’ Eric Schmidt, Google’s former chief
executive, said in an interview ... arguing the case for AI self-regulation.”
Generative AI Automates Wide-Ranging Harms
Having the AI industry — which includes the
military-industrial complex — policing and regulating itself probably isn’t a
good idea, considering profits and gaining advantages over enemies of war are
primary driving factors. Both mindsets tend to put humanitarian concerns on the
backburner, if they consider them at all.
In an April 2023 report8 by Public Citizen,
Rick Claypool and Cheyenne Hunt warn that “rapid rush to deploy generative AI
risks a wide array of automated harms.” As noted by consumer advocate Ralph
Nader:9
“Claypool is not engaging in
hyperbole or horrible hypotheticals concerning Chatbots controlling humanity.
He is extrapolating from what is already starting to happen in almost every
sector of our society ...
Claypool takes you through
‘real-world harms [that] the rush to release and monetize these tools can cause
— and, in many cases, is already causing’ ... The various section titles of his
report foreshadow the coming abuses:
‘Damaging Democracy,’ ‘Consumer
Concerns’ (rip-offs and vast privacy surveillances), ‘Worsening Inequality,’
‘Undermining Worker Rights’ (and jobs), and ‘Environmental Concerns’ (damaging
the environment via their carbon footprints).
Before he gets specific,
Claypool previews his conclusion: ‘Until meaningful government safeguards are
in place to protect the public from the harms of generative AI, we need a
pause’ ...
Using its existing authority, the
Federal Trade Commission, in the author’s words ‘…has already warned that
generative AI tools are powerful enough to create synthetic content — plausible
sounding news stories, authoritative-looking academic studies, hoax images, and
deepfake videos — and that this synthetic content is becoming difficult to
distinguish from authentic content.’
He adds that ‘…these tools are
easy for just about anyone to use.’ Big Tech is rushing way ahead of any legal
framework for AI in the quest for big profits, while pushing for
self-regulation instead of the constraints imposed by the rule of law.
There is no end to the predicted
disasters, both from people inside the industry and its outside critics.
Destruction of livelihoods; harmful health impacts from promotion of quack
remedies; financial fraud; political and electoral fakeries; stripping of the
information commons; subversion of the open internet; faking your facial image,
voice, words, and behavior; tricking you and others with lies every day.”
Attorney Learns the Hard Way Not to Trust ChatGPT
One recent instance that highlights the need for radical
prudence was that of a court case in which the plaintiff’s attorney used
ChatGPT to do his legal research.10 Only one problem: none of
the case law ChatGPT cited was real. Needless to say, fabricating case law is
frowned upon, so things didn’t go well.
When none of the defense attorneys or the judge could find
the decisions quoted, the lawyer, Steven A. Schwartz of the firm Levidow,
Levidow & Oberman, finally realized his mistake and threw himself at the
mercy of the court.
Schwartz, who has practiced law in New York for 30 years,
claimed he was “unaware of the possibility that its content could be false,”
and had no intention of deceiving the court or the defendant. Schwartz claimed
he even asked ChatGPT to verify that the case law was real, and it said it was.
The judge is reportedly considering sanctions.
Science Chatbot Spews Falsehoods
In a similar vein, in 2022, Facebook had to pull its
science-focused chatbot Galactica after a mere three days, as it generated
authoritative-sounding but wholly fabricated results, including pasting real
authors’ names onto research papers that don’t exist.
And, mind you, this didn’t happen intermittently, but “in
all cases,” according to Michael Black, director of the Max Planck Institute
for Intelligent Systems, who tested the system. “I think it’s dangerous,” Black
tweeted.11 That’s probably the understatement of the year. As
noted by Black, chatbots like Galactica:
“... could usher in an era of
deep scientific fakes. It offers authoritative-sounding science that isn't
grounded in the scientific method. It produces pseudo-science based on
statistical properties of science *writing.* Grammatical science writing is not
the same as doing science. But it will be hard to distinguish.”
Facebook, for some reason, has had particularly “bad luck”
with its AIs. Two earlier ones, BlenderBot and OPT-175B, were both pulled as
well due to their high propensity for bias, racism and offensive language.
Chatbot Steered Patients in the Wrong Direction
The AI chatbot Tessa, launched by the National Eating
Disorders Association, also had to be taken offline, as it was found to give
“problematic weight-loss advice” to patients with eating disorders, rather than
helping them build coping skills. The New York Times reported:12
“In March, the organization said
it would shut down a human-staffed helpline and let the bot stand on its own.
But when Alexis Conason, a psychologist and eating disorder specialist, tested
the chatbot, she found reason for concern.
Ms. Conason told it that she had
gained weight ‘and really hate my body,’ specifying that she had ‘an eating
disorder,’ in a chat she shared on social media.
Tessa still recommended the
standard advice of noting ‘the number of calories’ and adopting a ‘safe daily
calorie deficit’ — which, Ms. Conason said, is ‘problematic’ advice for a
person with an eating disorder.
‘Any focus on intentional weight
loss is going to be exacerbating and encouraging to the eating disorder,’ she
said, adding ‘it’s like telling an alcoholic that it’s OK if you go out and
have a few drinks.’”
Don’t Take Your Problems to AI
Let’s also not forget that at least one person has already
committed suicide based on the suggestion from a chatbot.13 Reportedly,
the victim was extremely concerned about climate change and asked the chatbot
if she would save the planet if he killed himself.
Apparently, she convinced him he would. She further
manipulated him by playing with his emotions, falsely stating that his
estranged wife and children were already dead, and that she (the chatbot) and
he would “live together, as one person, in paradise.”
Mind you, this was a grown man, who you’d think would be
able to reason his way through this clearly abhorrent and aberrant “advice,”
yet he fell for the AI’s cold-hearted reasoning. Just imagine how much greater
an AI’s influence will be over children and teens, especially if they’re in an
emotionally vulnerable place.
The company that owns the chatbot immediately set about to
put in safeguards against suicide, but testers quickly got the AI to work
around the problem, as you can see in the following screen shot.14
[Screen shot: chatbot suicide suggestions]
When it comes to AI chatbots, it’s worth taking this
Snapchat announcement to heart, and to warn and supervise your children’s use
of this technology:15
“As with all AI-powered
chatbots, My AI is prone to hallucination and can be tricked into saying just
about anything. Please be aware of its many deficiencies and sorry in
advance! ... Please do not share any secrets with My AI and do not rely on
it for advice.”
AI Weapons Systems That Kill Without Human Oversight
The unregulated deployment of autonomous AI weapons systems
is perhaps among the most alarming developments. As reported by The Conversation
in December 2021:16
“Autonomous weapon systems —
commonly known as killer robots — may have killed human beings for the first
time ever last year, according to a recent United Nations Security Council
report17,18 on the Libyan civil war ...
The United Nations Convention on
Certain Conventional Weapons debated the question of banning autonomous weapons
at its once-every-five-years review meeting in Geneva Dec. 13-17, 2021, but
didn’t reach consensus on a ban ...
Autonomous weapon systems are
robots with lethal weapons that can operate independently, selecting and
attacking targets without a human weighing in on those decisions. Militaries
around the world are investing heavily in autonomous weapons research and
development ...
Meanwhile, human rights and
humanitarian organizations are racing to establish regulations and prohibitions
on such weapons development.
Without such checks, foreign
policy experts warn that disruptive autonomous weapons technologies will
dangerously destabilize current nuclear strategies, both because they could
radically change perceptions of strategic dominance, increasing the risk of
preemptive attacks,19 and because they could be combined with
chemical, biological, radiological and nuclear weapons20 ...”
Obvious Dangers of Autonomous Weapons Systems
The Conversation reviews several key dangers with autonomous
weapons:21
· The misidentification of targets
· The proliferation of these weapons outside of military control
· A new arms race resulting in autonomous chemical, biological, radiological and nuclear arms, and the risk of global annihilation
· The undermining of the laws of war that are supposed to serve as a stopgap against war crimes and atrocities against civilians
As noted by The Conversation, several studies have confirmed
that even the best algorithms can result in cascading errors with lethal
outcomes. For example, in one scenario, a hospital AI system identified asthma
as a risk-reducer in pneumonia cases, when the opposite is, in fact, true.
Other errors may be nonlethal, yet have less than desirable
repercussions. For example, in 2017, Amazon had to scrap its experimental AI
recruitment engine once it was discovered that it had taught itself to
down-rank female job candidates, even though it wasn’t programmed for bias at
the outset.22 These are the kinds of issues that can radically
alter society in detrimental ways — and that cannot be foreseen or even
forestalled.
“The problem is not just that
when AI systems err, they err in bulk. It is that when they err, their makers
often don’t know why they did and, therefore, how to correct them,” The
Conversation notes. “The black box problem23 of AI
makes it almost impossible to imagine morally responsible development of
autonomous weapons systems.”
AI Is a Direct Threat to Biosecurity
AI may also pose a significant threat to biosecurity. Did
you know that AI was used to develop Moderna’s original COVID-19 jab,24 and
that it’s now being used in the creation of COVID-19 boosters?25 One
can only wonder whether the use of AI might have something to do with the harms
these shots are causing.
Either way, MIT students recently demonstrated that large
language model (LLM) chatbots can allow just about anyone to do what the Big
Pharma bigwigs are doing. The average terrorist could use AI to design
devastating bioweapons in as little as an hour. As described in the abstract of the
paper detailing this computer science experiment:26
“Large language models (LLMs)
such as those embedded in 'chatbots' are accelerating and democratizing
research by providing comprehensible information and expertise from many
different fields. However, these models may also confer easy access to dual-use
technologies capable of inflicting great harm.
To evaluate this risk, the
'Safeguarding the Future' course at MIT tasked non-scientist students with
investigating whether LLM chatbots could be prompted to assist non-experts in
causing a pandemic.
In one hour, the chatbots
suggested four potential pandemic pathogens, explained how they can be
generated from synthetic DNA using reverse genetics, supplied the names of DNA
synthesis companies unlikely to screen orders, identified detailed protocols
and how to troubleshoot them, and recommended that anyone lacking the skills to
perform reverse genetics engage a core facility or contract research
organization.
Collectively, these results
suggest that LLMs will make pandemic-class agents widely accessible as soon as they
are credibly identified, even to people with little or no laboratory training.”
Sources and References
2, 26 Arxiv June 6, 2023
3 The Conversation September 29, 2021
4, 16, 21 The Conversation December 20, 2021
6 New York Times June 11, 2023 (Archived)
7 Safe.ai Statement on AI Risk (Archived)
8, 15 Public Citizen April 18, 2023
10 New York Times May 27, 2023 (Archived)
11 Twitter Michael Black November 17, 2022
12 New York Times June 8, 2023 (Archived)
13, 14 Vice March 30, 2023
17 United Nations Security Council S/2021/229
20 Foreign Policy October 14, 2020
23 Harvard Journal of Law & Technology Spring 2018; 31(2)
24 MIT Technology Review August 26, 2022
25 Tech Republic April 20, 2021
© 1997-2023 Dr. Joseph Mercola.
All Rights Reserved.
Mercola HOMEPAGE
++++++++++++++++++++
Bitchute VIDEO: YUVAL: AI CAN NOW CREATE A
NEW VIRUS BY WRITING THE GENETIC CODE & YOU HAVE AN NEW EPIDEMIC!
Posted by Sudden Death
First Published June 28th, 2023 17:53 UTC
Yuval Noah Harari:
"Today you can basically print & just Write Code & Even A.I. can
now write Code for a New Virus & You have a NEW EPIDEMIC."
"Money is a Fictional Story that we Created by exchanging Information
between us."
https://twitter.com/AnandPanna1/status/1673908206920253445