
2024-06-18

Chat Control Must Be Stopped – Now

  • The EU Commission's "Chat Control" proposal aims to implement mass surveillance, potentially compromising citizens' privacy and data security.
  • If passed, it would require service providers to scan messages for child sexual abuse material (CSAM), but critics argue it is ineffective against criminals and harmful to democracy.
  • Threema, a secure communication service, opposes the proposal and may leave the EU to avoid compliance, underscoring both the proposal's potential for misuse and the strength of opposition from privacy advocates.

Reactions

  • Implementing a global system to regulate internet privacy would face substantial resistance from privacy advocates and tech companies.
  • Enforcing such a system globally is nearly impossible due to varying levels of commitment to privacy and internet freedom across different countries.

Chat Control: Incompatible with Fundamental Rights (2022)

  • The EU Commission's draft Chat Control Regulation aims to combat child sexual violence but raises significant concerns about fundamental rights.
  • Key issues highlighted include privacy violations, chilling effects on free expression, error-prone filtering obligations, website blocking, and mandatory age verification.
  • The GFF argues that these measures violate the EU Charter of Fundamental Rights and calls for a reconsideration of the draft regulation.

Reactions

  • The European Parliament is debating "Chat Control" legislation that could infringe on fundamental rights, requiring users to opt in to scanning in order to send images and videos.
  • Critics argue the proposal contradicts the EU's GDPR principles and could lead to coerced consent, raising concerns about privacy and government overreach.
  • The legislation might soon be passed by the European Council, sparking fears of mass surveillance and questioning the EU's commitment to protecting individual rights.

EU to greenlight Chat Control tomorrow

  • The EU Council is set to vote on Chat Control, which involves bulk searches of private communications, on 20 June 2024.
  • The timing of the vote, shortly after the European Elections, is seen as an attempt to avoid public scrutiny.
  • Civil society is urged to act immediately by contacting their governments, raising awareness online, and organizing protests, as the current draft is considered unacceptable.

Reactions

  • The EU is poised to approve "Chat Control," a regulation requiring the scanning of all direct messages on platforms like Reddit, Twitter, Discord, and Steam for CSAM (child sexual abuse material).
  • Critics argue the measure is unprecedented and likely ineffective, as offenders might migrate to private services, and it raises significant privacy and overreach concerns.
  • Signal Foundation has announced it would exit the EU if the regulation is enforced, highlighting the contentious nature of the proposal.

Htmx 2.0.0 has been released

  • htmx 2.0.0 has been released, ending support for Internet Explorer and tightening some defaults without altering core functionality or the API.
  • Major changes include moving extensions to a new repository, removing deprecated attributes, and modifying HTTP DELETE request handling.
  • The release will not be marked as the latest in NPM until January 1, 2025, to avoid forcing upgrades; version 1.x will remain the latest until then.

Reactions

  • Htmx 2.0.0 has been released, focusing on cleanups and dropping support for Internet Explorer (IE) rather than adding major new features.
  • Developers are praising htmx for simplifying web development, with one user replacing 500 lines of JavaScript (JS) with a few htmx attributes, enhancing efficiency and enjoyment.
  • The release has sparked discussions on potential improvements and comparisons with other tools, highlighting htmx's role in reducing reliance on complex JS frameworks.

Cyber Scarecrow

  • Scarecrow is a cybersecurity tool currently in its alpha phase, designed to run in the background of your computer to deter viruses and malware.
  • It is available for download for Windows 10 and 11.

Reactions

  • Cyber Scarecrow is a tool that creates fake processes and registry entries to deceive malware into thinking it is under analysis, thereby stopping it from executing (the idea is sketched after this list).
  • Users have expressed concerns about the tool's transparency, including the absence of an "about us" page, a GitHub link, and a code signing certificate.
  • The author has acknowledged these issues, citing the high cost of certificates, and there are suggestions to make the tool open source to build trust and validate its effectiveness through real-world testing.
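
A minimal, illustrative sketch of the idea behind Cyber Scarecrow, assuming a Windows host: some malware looks for registry artifacts left behind by analysis tools or virtual machines and refuses to run when it finds them, so planting a decoy entry can be enough to trip that check. The key name below is hypothetical; the tool's real indicators are not documented publicly.

```python
# Illustrative sketch only: plant a decoy registry entry resembling the kind of
# artifact an analysis environment would leave behind.
import winreg

# Hypothetical path used purely for illustration.
DECOY_KEY = r"SOFTWARE\ExampleAnalysisTool"

def plant_decoy() -> None:
    # HKEY_CURRENT_USER is writable without elevation; real anti-analysis checks
    # often inspect HKEY_LOCAL_MACHINE, which would require admin rights.
    with winreg.CreateKey(winreg.HKEY_CURRENT_USER, DECOY_KEY) as key:
        winreg.SetValueEx(key, "Version", 0, winreg.REG_SZ, "1.0.0")

if __name__ == "__main__":
    plant_decoy()
```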

“Attention assault” on Fandom

  • Fandom, a popular wiki website, is criticized for intrusive ads, including auto-playing videos and constant interruptions, prioritizing profit over user experience.
  • In 2023, Fandom controversially replaced user content with ads for McDonald's Grimace Shake, prompting wikis such as those for Runescape, Minecraft, and Hollow Knight to migrate to independent domains.
  • Users are encouraged to support independent wikis by using tools like Indie Wiki Buddy, employing ad blockers, and migrating their wikis off Fandom.

Reactions

  • Communities are migrating their wikis from Fandom to self-hosted or alternative platforms due to intrusive ads and outdated content.
  • Notable examples include the Runescape and Minecraft wikis, which have successfully transitioned away from Fandom.
  • Tools like Indie Wiki Buddy and LibRedirect assist users in avoiding Fandom by redirecting them to more user-friendly sources, underscoring the adverse effects of venture capital on user-driven content platforms.

Getting 50% (SoTA) on Arc-AGI with GPT-4o

Reactions

  • Ryan's work on GPT-4o achieving 50% on the Arc-AGI public evaluation set is considered novel and interesting in the field of "LLM reasoning" research.
  • The approach involves generating around 8,000 Python programs to implement transformations, selecting the correct one, and applying it to test inputs, showcasing a hybrid of deep learning (DL) and program synthesis (a simplified version of this loop is sketched after this list).
  • While the result is promising, it is based on the public evaluation set, and similar results on the private set have not yet been validated, indicating the need for further scrutiny and verification.
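
A simplified Python sketch of the sample-and-select loop described above. The `candidates` list stands in for the roughly 8,000 programs sampled from GPT-4o, and the real pipeline adds prompting, revision, and ranking steps that are omitted here.

```python
# Sketch of "generate many programs, keep the ones that fit the training pairs".
# Each candidate is Python source expected to define `transform(grid) -> grid`.
from typing import Callable, List, Optional, Tuple

Grid = List[List[int]]

def compile_candidate(source: str) -> Optional[Callable[[Grid], Grid]]:
    namespace: dict = {}
    try:
        exec(source, namespace)            # run the generated code
        return namespace.get("transform")
    except Exception:
        return None                        # discard programs that fail to load

def solve_task(train: List[Tuple[Grid, Grid]], test_input: Grid,
               candidates: List[str]) -> Optional[Grid]:
    for source in candidates:
        fn = compile_candidate(source)
        if fn is None:
            continue
        try:
            # Keep a candidate only if it reproduces every training output.
            if all(fn(inp) == out for inp, out in train):
                return fn(test_input)
        except Exception:
            continue                       # runtime errors disqualify it too
    return None
```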

A new RISC-V Mainboard from DeepComputing

  • DeepComputing has introduced a new RISC-V Mainboard for the Framework Laptop 13, featuring a JH7110 processor from StarFive with four U74 RISC-V cores from SiFive.
  • This development enhances the Framework ecosystem by allowing users to select different processor architectures, promoting flexibility and personalization.
  • The Mainboard, aimed at developers and hobbyists, will be demoed at the RISC-V Summit Europe and is supported by collaborations with Canonical and Red Hat for robust Linux compatibility.

Reactions

  • DeepComputing has launched a new RISC-V Mainboard for Framework laptops, featuring the JH7110 processor and microSD storage, resembling a RISC-V Single Board Computer (SBC) in a Framework form-factor.
  • The mainboard targets developers and tinkerers, offering modularity and the potential to swap between x86 and RISC-V boards, though it comes with a notable performance drop compared to x86.
  • This collaboration between Framework and DeepComputing is viewed as a move to diversify and expand Framework's ecosystem, increasing visibility for RISC-V technology.

Sam Altman is not on YC's board. So why claim to be its chair?

  • Sam Altman, former president and CEO of Y Combinator, claims to be its board chair in SPAC (Special Purpose Acquisition Company) filings.
  • Y Combinator denies Altman's claim, stating he was never on its board despite his significant role in the company.

Reactions

  • Sam Altman, former CEO and President of Y Combinator (YC), has been inaccurately listed as the chairman of YC in multiple official documents, including SEC filings and a SPAC website.
  • The misstatement has sparked debate, with some arguing it is a minor clerical error while others emphasize the legal implications of inaccuracies in SEC filings.
  • Critics highlight that such errors, if intentional, could be seen as misleading and undermine trust, though proving intent and material harm is complex.

Humans began to rapidly accumulate technological knowledge 600k years ago

  • Researchers from Arizona State University suggest that humans began rapidly accumulating technological knowledge through social learning around 600,000 years ago, marking the origin of cumulative culture.
  • The study, published in the Proceedings of the National Academy of Sciences, analyzed stone tool manufacturing techniques over 3.3 million years, noting a significant increase in complexity around 600,000 years ago.
  • This period, likely in the Middle Pleistocene epoch, also saw advancements like controlled use of fire and construction of wooden structures, indicating cumulative culture predates the divergence of Neanderthals and modern humans.

Reactions

  • Humans started gathering technological knowledge around 600,000 years ago, with multiple Homo species possibly sharing and exchanging technology.
  • The term "human" can refer to both modern humans and the entire genus Homo, but "hominin" is more precise; debates exist on whether Neanderthals and Denisovans are considered human.
  • The rapid accumulation of knowledge is linked to advancements in communication, potentially including early forms of language, highlighting the role of language in technological transfer.

Token price calculator for 400+ LLMs

  • Tokencost is a utility library designed to estimate costs associated with Large Language Models (LLMs) by counting tokens in prompts and completions and applying model-specific pricing.
  • It addresses the challenge of tracking costs across various models and pricing schemes, helping users avoid unexpected bills by providing real-time cost estimates.
  • Developed by AgentOps, Tokencost is now open source, allowing developers to integrate it into their projects for better cost management.

Reactions

  • Tokencost is a utility library designed to estimate costs for over 400 Large Language Models (LLMs) by counting tokens in prompts and completions and multiplying by model costs.
  • Developed by AgentOps and open-sourced, it helps developers track spending and avoid unexpected bills, using a simple cost dictionary and utility functions (the basic idea is sketched after this list).
  • Users have suggested improvements such as adding support for Rust, normalizing costs, and including image and function call costs, though there are concerns about accuracy for models without public tokenizers.
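
The underlying calculation is easy to sketch: tokenize the prompt and completion, then multiply by per-token prices. The snippet below is a hand-rolled illustration rather than Tokencost's actual API; the prices are placeholders and `tiktoken` is assumed as the tokenizer.

```python
# Illustration of token-based cost estimation, not the Tokencost API itself.
# Prices are placeholders; check the provider's current pricing.
import tiktoken

# model -> (prompt price, completion price) in USD per token (placeholder values)
PRICE_PER_TOKEN = {
    "gpt-4o": (5e-06, 15e-06),
}

def estimate_cost(model: str, prompt: str, completion: str) -> float:
    enc = tiktoken.encoding_for_model(model)
    prompt_price, completion_price = PRICE_PER_TOKEN[model]
    return (len(enc.encode(prompt)) * prompt_price
            + len(enc.encode(completion)) * completion_price)

print(f"${estimate_cost('gpt-4o', 'Hello, world', 'Hi there!'):.6f}")
```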

Sei pays out $2M bug bounty

  • In April 2024, two critical bugs were reported in Sei Network's layer-1 blockchain, affecting chain availability and integrity.
  • The Sei Foundation awarded $75,000 and $2,000,000 for the respective bug reports, which were identified and fixed before production release, ensuring no funds were at risk.
  • The proactive measures and quick response by the Sei Foundation prevented potential damage to the Sei token's market cap, demonstrating a strong commitment to user protection.

Reactions

  • Sei Network has paid out a $2 million bug bounty, highlighting the significant financial incentives in the cryptocurrency sector for identifying security vulnerabilities.
  • The bug bounty was processed through Immunefi, a platform specializing in crypto bug bounties, which often sees payouts exceeding $1 million.
  • This payout underscores the critical importance of security in the crypto industry, where the cost of potential breaches can be astronomical compared to traditional finance.

Google DeepMind shifts from research lab to AI product factory

Reactions

  • Google DeepMind is shifting from a research lab to an AI product factory, raising debates about the challenges and potential pitfalls of this transition.
  • Critics suggest that integrating experienced product teams from Google with DeepMind's research might be more effective than converting the research organization into a product-focused entity.
  • Concerns include the impact on fundamental research and the risk of producing rushed, underdeveloped products, though some believe this shift could lead to significant advancements in AI products.

Every Way to Get Structured Output from LLMs

  • The post addresses the challenge of obtaining structured output, such as JSON, from Large Language Models (LLMs), which typically return responses in natural language.
  • It provides a detailed comparison of various frameworks designed to convert LLM outputs into structured formats, evaluating them based on criteria like language support, JSON handling, prompt control, and supported model providers.
  • The frameworks compared include BAML, Instructor, TypeChat, Marvin, Outlines, Guidance, LMQL, JSONformer, Firebase Genkit, SGLang, and lm-format-enforcer, each with unique features and capabilities for handling structured data extraction.

Reactions

  • BAML's article explores methods for obtaining structured output from Large Language Models (LLMs), emphasizing BAML's unique parsing approach for handling malformed JSON.
  • BAML offers both open-source and paid features, with paid options focusing on monitoring and enhancing AI pipelines.
  • The article compares various frameworks and discusses the challenges and trade-offs in enforcing structured output, noting that some users prefer simpler methods such as Pydantic for JSON validation (sketched after this list).
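
For comparison with the heavier frameworks, a minimal sketch of the "simpler method" mentioned above: validating an LLM's JSON reply against a schema with Pydantic v2. The reply string is a stand-in for whatever the model actually returned; real pipelines typically add retries that feed the validation error back to the model.

```python
# Minimal sketch: validate an LLM's JSON reply against a schema with Pydantic v2.
from pydantic import BaseModel, ValidationError

class Invoice(BaseModel):
    vendor: str
    total: float
    currency: str = "USD"

# Stand-in for the text an LLM returned.
llm_reply = '{"vendor": "ACME Corp", "total": 1299.5, "currency": "EUR"}'

try:
    invoice = Invoice.model_validate_json(llm_reply)
    print(invoice.vendor, invoice.total, invoice.currency)
except ValidationError as err:
    # In practice the error message is often fed back to the model for a retry.
    print("Malformed or off-schema output:", err)
```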

A Note on Essential Complexity

  • Software engineers have multiple overlapping and sometimes conflicting goals, such as writing code, managing complexity, and satisfying customer needs.
  • Essential complexity is inherent to the problem, while accidental complexity arises from performance issues or suboptimal tools; reducing both is crucial.
  • Senior engineers can redefine problems by challenging assumptions and negotiating with stakeholders, potentially simplifying requirements and minimizing complexity.

Reactions

  • Software engineers sometimes embrace complexity to justify their roles, as seen in communities like Enterprise Java, .NET, and JavaScript (JS).
  • The article humorously references the Stroustrup C++ satire to highlight intentional complexity in programming languages.
  • It argues that minimizing complexity is crucial for good engineering, balancing short-term and long-term decisions, and ensuring consistency to avoid unnecessary complications.