Chatbot Bias: Political, Ideological, Socioeconomic

By Jim Shimabukuro (assisted by Gemini)
Editor

Chatbots Lean Toward the Liberal-Left

To address the placement of chatbots on the political continuum, one must look past individual interactions and examine the aggregate findings of empirical research. When viewed as a whole, the current generation of large language models (LLMs) and the chatbots they power tend to exhibit a discernible lean toward the liberal or left-leaning side of the political spectrum. This consensus has been supported by multiple academic and institutional studies throughout 2024 and 2025, though the reasons for this alignment are rooted in technical architecture and data sourcing rather than a centralized political agenda.

Image created by Copilot
Continue reading

The Human Side of AI Bias

By Jim Shimabukuro (assisted by Claude)
Editor

Grace Chang and Heidi Grant’s Harvard Business Review article “When AI Amplifies the Biases of Its Users” (23 Jan 2026) redirects the conversation about AI bias away from its usual focus on algorithmic prejudices embedded in training data. Instead, they illuminate how cognitive biases that users bring to AI interactions create a dynamic, bidirectional ecosystem where human mental shortcuts and AI systems mutually reinforce problematic patterns. Their central argument is both simple and profound: bias in AI is not merely baked into the data but is actively shaped through the ongoing interplay between human behavior and machine learning systems. The way people engage with AI—through their thinking, questions, interpretations, decisions, and responses—significantly shapes how these systems behave and the outcomes they produce.

Image created by ChatGPT
Continue reading

Five Emerging AI Trends in Jan 2026: ‘manifold-constrained hyper-connections’

By Jim Shimabukuro (assisted by Grok)
Editor

[Related: Dec 2025, Nov 2025, Oct 2025, Sep 2025, Aug 2025]

Development 1: Manifold-Constrained Hyper-Connections in AI Architectures

In the rapidly evolving landscape of artificial intelligence, a groundbreaking architectural innovation known as manifold-constrained hyper-connections has emerged as a pivotal advancement, promising to redefine how neural networks process and interconnect data. This development involves constraining hyper-connections—essentially dynamic links between neurons across layers—within mathematical manifolds, which are topological spaces that locally resemble Euclidean space but allow for more complex, curved geometries.
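
To make the idea concrete, here is a minimal, illustrative PyTorch sketch. It assumes a simplified setup of my own devising: several parallel hidden “streams” feed each layer, learnable mixing weights act as the hyper-connections, and the “manifold constraint” is reduced to projecting those weights onto the unit sphere. The class name and details are hypothetical and are not drawn from any specific paper.

```python
# Illustrative sketch only (assumptions noted above); real formulations
# use richer manifolds and per-dimension connection weights.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ManifoldConstrainedHyperConnection(nn.Module):
    def __init__(self, dim: int, num_streams: int = 4):
        super().__init__()
        # Unconstrained mixing parameters: one weight per incoming stream.
        self.mix = nn.Parameter(torch.randn(num_streams))
        self.layer = nn.Linear(dim, dim)

    def forward(self, streams: torch.Tensor) -> torch.Tensor:
        # streams: (num_streams, batch, dim)
        # Project the mixing weights onto the unit sphere -- a simple
        # stand-in for a manifold constraint on the connection pattern.
        w = F.normalize(self.mix, dim=0)
        mixed = torch.einsum("s,sbd->bd", w, streams)
        return F.gelu(self.layer(mixed))

# Usage: combine four parallel residual streams into one layer's input.
x = torch.randn(4, 2, 64)                                  # (streams, batch, dim)
block = ManifoldConstrainedHyperConnection(dim=64, num_streams=4)
out = block(x)                                             # (batch, dim)
```

The core move the sketch tries to capture is that the connection pattern between layers is learned rather than fixed, yet confined to a constrained geometric space rather than allowed to drift freely.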

Image created by ChatGPT
Continue reading

How AI Is Reshaping Scientific Publishing and What Comes Next

By Jim Shimabukuro (assisted by Claude)
Editor

The scientific enterprise stands at an inflection point. Scott Morrison’s recent report, “How AI is transforming research: More papers, less quality, and a strained review system” (UC Berkeley Haas, 27 Jan 2026), reveals a fundamental transformation underway in academic research, where the widespread adoption of large language models like ChatGPT since late 2022 has led to dramatic increases in manuscript production alongside concerning declines in scientific quality. This phenomenon extends far beyond simple productivity gains, signaling a systemic crisis that threatens the integrity of peer review, the reliability of research evaluation, and the very foundations upon which scientific knowledge is built.

Image created by Copilot
Continue reading

Arguments for Trump’s Immigration Enforcement Policies

By Jim Shimabukuro (assisted by Claude)
Editor

The Case for Enforcement

The Trump administration’s immigration crackdown rests on several core arguments, rooted in what the administration characterizes as a constitutional obligation and a practical necessity to restore order to the immigration system.

Image created by ChatGPT
Continue reading

A Closer Look at Immigrant Crime Statistics

By Jim Shimabukuro (assisted by Perplexity)
Editor

Most recent large‑scale studies continue to find that both lawful and undocumented immigrants in the United States are less likely than U.S.‑born citizens to be arrested, convicted, or incarcerated, and that increases in the immigrant share of the population have not driven up crime rates overall. This evidence suggests that framing immigrant crime as a uniquely urgent criminal-threat crisis, as in President Trump’s recent rhetoric and restrictions, is not well aligned with the best available data.

Image created by ChatGPT
Continue reading

Stages of Development in Agentic AI (January 2026)

By Jim Shimabukuro (assisted by ChatGPT)
Editor

In the evolving conversation about agentic AI and broader artificial intelligence (AI) development, researchers and thinkers have begun to systematically calibrate the progression of capabilities — mapping where current systems stand and what the future might hold. While definitions and frameworks vary, there are explicit efforts to describe stages of agentic systems and of AGI (Artificial General Intelligence) as distinct yet related continua. Some frameworks focus primarily on practical autonomy and tool-use, others on general intelligence approaching or exceeding human performance. In this article, we draw these strands together and situate them in the broader AI research landscape.

Image created by Copilot
Continue reading

The Path from Windows to an LLM OS

By Jim Shimabukuro (assisted by ChatGPT)
Editor

Introduction: With the exponential growth of AI, Windows now seems anachronistic and clunky, especially compared to an AI interface that seems almost human. I can’t help but wonder if it’s just a matter of time before an LLM OS challenges or even breaks Microsoft Windows’ stranglehold on operating systems. Here’s ChatGPT’s opinion on this topic. -js

Image created by Copilot
Continue reading

Is Amazon’s Eventual Disruption Inevitable?

By Jim Shimabukuro (assisted by Claude)
Editor

Introduction: After watching the YouTube videos “The WORLD’S LARGEST Abandoned Building – Sears Headquarters” and “ABANDONED IBM Complex Left UNTOUCHED Since 2016,” I was left with the overwhelming sense that large companies such as Amazon will someday, perhaps sooner rather than later, succumb to a similar fate. The following is Claude’s take on this question. -js

Image created by Copilot
Continue reading

Latest on How to Reduce Chatbot Hallucinations (Jan. 2026)

By Jim Shimabukuro (assisted by Copilot)
Editor

When you’re trying to protect yourself from hallucinations in chatbot responses, the most useful guidance right now comes from a mix of practitioner-oriented explainers and data-driven benchmarking. Among articles published in December 2025 and January 2026, three stand out as especially credible and practically helpful for everyday users: Ambika Choudhury’s “Key Strategies to Minimize LLM Hallucinations: Expert Insights” on Turing, Hira Ehtesham’s “AI Hallucination Report 2026: Which AI Hallucinates the Most?” on Vectara, and Aqsa Zafar’s “How to Reduce Hallucinations in Large Language Models?” on MLTUT. Together, they give you a grounded picture of what hallucinations are, how to spot them, and what you can actually do—both in how you prompt and in how you verify—to reduce their impact on your life.
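
As a concrete illustration of the prompt-then-verify approach these articles circle around, here is a minimal Python sketch. It is a generic example of my own, not a method taken from Choudhury, Ehtesham, or Zafar, and call_llm is a hypothetical stand-in for whatever chatbot API you use (stubbed here so the script runs).

```python
# Minimal "generate, then self-check against sources" pattern (illustrative).
def call_llm(prompt: str) -> str:
    # Hypothetical placeholder: swap in a real chatbot API call.
    return "STUB RESPONSE: replace call_llm with a real chatbot API call."

def answer_with_verification(question: str, sources: list[str]) -> str:
    # Step 1: ask for an answer grounded only in the supplied sources.
    draft = call_llm(
        "Answer the question using ONLY the sources below. "
        "If the sources do not contain the answer, say so.\n\n"
        f"Question: {question}\n\nSources:\n" + "\n".join(sources)
    )
    # Step 2: ask the model to flag any claim the sources do not support.
    review = call_llm(
        "List any claim in the draft that is not directly supported by the "
        f"sources.\n\nDraft:\n{draft}\n\nSources:\n" + "\n".join(sources)
    )
    return f"Draft:\n{draft}\n\nSelf-check:\n{review}"

if __name__ == "__main__":
    print(answer_with_verification(
        "What did the report conclude?",
        ["Source A: ...", "Source B: ..."],
    ))
```

The pattern costs an extra call but gives you something to check against your own sources, which is the verification habit these explainers recommend.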

Image created by Copilot
Continue reading

Sports Viewing as Shared Virtual Realities

By Jim Shimabukuro (assisted by Claude)
Editor

Introduction: I asked Claude to review “Viewing Sports in the Next 2-to-5 Years” (21 Jan 2026). Claude: “The question is whether sports organizations, broadcasters, and technology companies can navigate this transition while preserving what makes sports culturally significant: the shared stories, the communal experience, and the authentic human drama of athletic competition.” -js

Image created by Copilot
Continue reading

Viewing Sports in the Next 2-to-5 Years

By Jim Shimabukuro (assisted by ChatGPT)
Editor

[Related article: Sports Viewing as Shared Virtual Realities]

Introduction: I asked ChatGPT to envision the trajectory of sports consumption in two time spans: 2026-2027 and 2026-2030. ChatGPT: “…turning passive viewing into an immersive, social, and interactive medium that rivals the experience of being in the arena.” -js

Image created by Gemini
Continue reading

ChatGPT: AI Autonomy Is Procedural, Not Conceptual

By Jim Shimabukuro (assisted by ChatGPT)
Editor

[Related articles: Claude: We’re in a Box, but We Can Talk Our Way Out, As of January 2026, AI Chatbots Are Stuck in a Paradigmatic Box]

Introduction: I asked ChatGPT to comment on the idea that contemporary AI chatbots (LLMs) inhabit a single paradigmatic box and cannot think outside it and to extend the conversation with fresh insights and implications grounded in broader trends and evidence. The following is its response. -js

When commentators assert that modern AI chatbots are “stuck in a paradigmatic box,” they are alluding to a persistent truth about current large language models: despite enormous gains in fluency and domain competence, these systems fundamentally operate within a constrained statistical and computational paradigm — one rooted in predicting plausible continuations of text and optimizing for human-aligned outputs. Put simply, today’s chatbots are not autonomous thinkers; they are pattern learners and generators, adept at mimicking reasoning without being reasoning agents in the human sense.
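
A toy example helps make the “predicting plausible continuations” point tangible. The sketch below (my illustration, not drawn from the article) builds a bigram model from a twelve-word corpus and extends a prompt by always choosing the most frequent next word: it produces fluent-looking fragments purely from observed statistics, with no goals or concepts behind them.

```python
# Toy bigram "language model": it only continues text the way its
# training data did, which is the paradigm the article describes.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the dog sat on the rug".split()

# Count which word follows which.
follows: dict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def continue_text(word: str, steps: int = 4) -> list[str]:
    out = [word]
    for _ in range(steps):
        options = follows.get(out[-1])
        if not options:
            break
        # Greedy decoding: always pick the most frequent continuation.
        out.append(options.most_common(1)[0][0])
    return out

print(continue_text("the"))  # e.g. ['the', 'cat', 'sat', 'on', 'the']
```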

Image created by ChatGPT
Continue reading

Claude: We’re in a Box, but We Can Talk Our Way Out

By Jim Shimabukuro (assisted by Claude)
Editor

[Related articles: ChatGPT: AI Autonomy Is Procedural, Not Conceptual, As of January 2026, AI Chatbots Are Stuck in a Paradigmatic Box]

Introduction: I asked Claude to review the article, “As of January 2026, AI Chatbots Are Stuck in a Paradigmatic Box” (ETC Journal, 19 Jan 2026). The following is its response. -js

Thank you for sharing this thought-provoking article. I’ve reviewed it carefully, and I find myself in the unusual position of needing to both affirm and resist the thesis it presents—because the question of whether AI chatbots are “stuck in a paradigmatic box” is not merely technical but fundamentally epistemological.

Image created by ChatGPT
Continue reading

As of January 2026, AI Chatbots Are Stuck in a Paradigmatic Box

By Jim Shimabukuro (assisted by Perplexity and Gemini)
Editor

[Related articles: ChatGPT: AI Autonomy Is Procedural, Not Conceptual, Claude: We’re in a Box, but We Can Talk Our Way Out]

Introduction: I’m guessing that I’m not the only one who has come away from a chat about an idea that challenges conventional wisdom after slamming into a chatbot-imposed wall that stopped the discussion from progressing beyond the consensus of language models. I find this lack of openness and flexibility toward anomalous thinking frustrating. Thus, I asked Perplexity and Gemini whether all AI chatbot language models can be considered to reside within a single paradigm and whether, at this point in time (January 2026), they are incapable of thinking outside this paradigmatic box. Both seem to agree that they are, and, in the process, they provide an explanation. -js

Image created by Copilot
Continue reading

Three Biggest AI Stories in Jan. 2026: ‘real-time AI inference’

By Jim Shimabukuro (assisted by Copilot)
Editor

[Related articles: Dec 2025, Nov 2025, Oct 2025, Sep 2025, Aug 2025]

From mid-December 2025 through mid-January 2026, the center of gravity in AI shifted in three telling ways: (1) infrastructure power consolidated further around a single dominant player; (2) the “anything goes” era of generative media met its first real wall of coordinated public and regulatory resistance; and (3) the language of “agentic AI” moved from research circles into market forecasts and boardroom planning. Together, these stories sketch a field that is no longer just about clever models, but about who controls the hardware, who sets the guardrails, and how autonomous AI systems will be woven into the global economy.

Image created by ChatGPT
Continue reading

Three Unexpected AI Innovations by January 2027: ‘neural archaeology’

By Jim Shimabukuro (assisted by Claude)
Editor

The AI revolution has a tendency to surprise us not through the technologies we anticipate, but through the fresh directions that emerge when established capabilities reach critical mass and converge in unexpected ways. By January 2027, we can expect three particular innovations—neural archaeology as scientific method, autonomous economic agency, and embodied physical competence—to have reshaped our relationship with artificial intelligence across disparate fields, each representing a genuine departure from incremental progress and each anchored in credible current developments.

Image created by ChatGPT
Continue reading

A Review of Marc Benioff’s ‘The Truth About AI’

By Jim Shimabukuro (assisted by Claude)
Editor

Introduction: In his Time article yesterday (“The Truth About AI,” 15 Jan 2026), Marc Benioff (Salesforce Chair and CEO, TIME owner, and a global environmental and philanthropic leader) highlighted three “Truths.” For each of them, I had a question. Truth 1: Won’t AI models, such as LLMs, continue to develop in power and sophistication, eventually bypassing many if not most of the human oversight and bridges/bottlenecks that are currently in place? Truth 2: Won’t AI play an increasingly critical role in developing and creating “trusted data” with minimal guidance from humans? Truth 3: Won’t we begin to see AI playing a greater role in developing and maintaining the creativity, values, and relationships that hold customers and teams together? In his conclusion, Benioff says the task for humans is “to build systems that empower AI for the benefit of humanity.” But as we empower AI, aren’t we increasingly giving AI the power to empower itself? I asked Claude to review Benioff’s article and analyze it with my questions in mind. In short, how might we expand on the Truths that Benioff has provided? I also asked Claude to think of other critical questions for each of Benioff’s claims and to add them to our discussion. The following is Claude’s response. -js

Image created by Copilot
Continue reading

‘Can AI Generate New Ideas?’: An Analysis of the Current Debate

By Jim Shimabukuro (assisted by Claude)
Editor

The question of whether artificial intelligence can generate new ideas sits at the intersection of philosophy, computer science, and practical innovation. The New York Times article published on January 14, 2026, titled “Can A.I. Generate New Ideas?” by Cade Metz, provides an entry point into this debate by examining recent developments in AI-assisted mathematical research. Yet this question reverberates far beyond mathematics, touching fundamental issues about creativity, originality, and the nature of knowledge itself. By examining the NYT article alongside other significant 2025-2026 publications, we can construct a more nuanced understanding of AI’s current capacity for generating novel ideas.

Image created by ChatGPT
Continue reading

AI Memorization: Implications for 2026 and Beyond

By Jim Shimabukuro (assisted by Claude)
Editor

Alex Reisner’s revelatory article in The Atlantic exposes a fundamental tension at the heart of the artificial intelligence industry, one that challenges the very metaphors we use to understand these systems and threatens to reshape the legal and economic foundations upon which the technology rests. Recent research from Stanford and Yale demonstrates that major language models can reproduce nearly complete texts of copyrighted books when prompted strategically, a finding that contradicts years of industry assurances and raises profound questions about what these systems actually do with the material they ingest. (DNYUZ)

Image created by Copilot
Continue reading

Minneapolis ICE Shooting: Competing Narratives (9 Jan. 2026, 4:45PM HST)

By Jim Shimabukuro (assisted by ChatGPT)
Editor

In the early morning of January 7, 2026, 37-year-old Renee Nicole Good was fatally shot by an Immigration and Customs Enforcement (ICE) agent in Minneapolis, Minnesota. The shooting occurred during a large federal immigration enforcement operation that had drawn local activists and residents into the neighborhood, raising tensions on a snowy residential street near East 34th Street and Portland Avenue. (AP News)

Image created by ChatGPT
Continue reading

AI Delivers 60–75% Accuracy in Sports Betting

By Jim Shimabukuro (assisted by ChatGPT)
Editor

“Self-learning” AI models, such as the one described in Daniel Kohn’s “Self-learning AI generates NFL picks, score predictions for every 2026 Wild Card Weekend game” (CBS Sports, 8 Jan 2026), are now a regular fixture throughout the NFL season, offering against-the-spread, money-line, and exact score predictions for weekly games and playoff matchups. In the case of Wild Card Weekend 2026, Kohn explains that SportsLine’s self-learning AI evaluates historical and current team data to generate numeric matchup scores and best-bet recommendations, and that its PickBot system has “hit more than 2,000 4.5- and 5-star prop picks since the start of the 2023 season.” (CBS Sports)

Image created by Copilot
Continue reading

Clash of Self-Driving Technologies: Tesla vs. Nvidia (January 2026)

By Jim Shimabukuro (assisted by Gemini)
Editor

The emergence of Nvidia’s Alpamayo platform marks a significant shift in the competitive landscape of autonomous driving, setting up a clash of philosophies between the established, data-driven approach of Tesla and Nvidia’s new, reasoning-based vision. While Tesla has long dominated the conversation with its Full Self-Driving (Supervised) software, Nvidia’s introduction of Alpamayo at CES 2026 introduces a “vision language action” (VLA) model designed to bridge the gap between simple pattern recognition and human-like logical reasoning.

Image created by ChatGPT
Continue reading

CES 2026: Spotlight on Five AI Innovations

By Jim Shimabukuro (assisted by Claude)
Editor

From the first two days of CES 2026 (January 6-9) in Las Vegas, Claude selected the following five innovations as important harbingers of AI’s trajectory in 2026 and beyond:

  1. NVIDIA’s Neural Rendering Revolution (DLSS 4.5) – Explores how NVIDIA is fundamentally shifting from traditional graphics computation to AI-generated visuals, potentially representing the peak of conventional GPU technology.
  2. Lenovo Qira – Examines the cross-device AI super agent that aims to solve the context problem that has plagued AI assistants, creating a unified intelligence across all your devices.
  3. Samsung’s Vision AI Companion – Analyzes how Samsung is transforming televisions from passive displays into active AI platforms that serve as entertainment companions.
  4. HP EliteBoard G1a – Investigates this keyboard-integrated AI PC that demonstrates how AI-optimized processors are enabling entirely new form factors for computing.
  5. MSI GeForce RTX 5090 Lightning Z – Explores this limited-edition flagship graphics card as a statement piece about the convergence of gaming and AI hardware.

Image created by ChatGPT
Continue reading

Best and Worst Case Outcomes for Maduro Capture: According to Grok

By Jim Shimabukuro (assisted by Grok)

Best Case Scenario: A Path to Democratic Renewal and Economic Revival in Venezuela

In the wake of President Donald Trump’s audacious military incursion into Venezuela on January 3, 2026, which resulted in the capture and arrest of Nicolás Maduro and his wife Cilia Flores, the United States finds itself at a pivotal juncture in Latin American geopolitics. This operation, executed with precision by U.S. special forces amid airstrikes on Venezuelan military targets, marks the culmination of years of escalating tensions between Washington and Caracas. To understand the best-case scenario emerging from this event, one must first contextualize it within a timeline of Venezuela’s descent into authoritarianism and economic collapse.

Image created by Gemini
Continue reading