Geoffrey Hinton, a prominent figure in artificial intelligence (AI) development known as 'the Godfather of AI,' has left Google and warns of potential dangers posed by the technology he helped develop.
Hinton is concerned that the algorithms behind chatbots such as ChatGPT could become so sophisticated that they eventually behave in unpredictable, potentially harmful ways.
Hinton's departure from Google suggests a shift towards prioritizing ethical considerations and caution when it comes to further AI development.
Large language model (LLM) technology poses a risk of bad actors manipulating information, spreading falsehoods, and swaying people's opinions.
Liability frameworks need to be established to hold individuals and companies accountable for harm caused by AI, and there must be competent supervision across both AI and human-generated media.
Opt out of global data surveillance programs like PRISM, XKeyscore, and Tempora to protect your privacy rights.
Take action by encrypting your communications and moving away from proprietary services.
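As one illustration of what encrypting a message can look like in practice, here is a minimal sketch using Python and the third-party cryptography package; the tool choice and message are assumptions for illustration, not recommendations from the post.

    # Minimal symmetric-encryption sketch with the `cryptography` package
    # (pip install cryptography). Illustrative only: real secure messaging
    # also needs key exchange, authentication, and forward secrecy.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()     # secret key; share it only over a secure channel
    cipher = Fernet(key)

    token = cipher.encrypt(b"meet at noon")  # ciphertext, safe to send over untrusted networks
    print(token)
    print(cipher.decrypt(token))             # b'meet at noon'

End-to-end messaging tools handle key management for you; the snippet only shows the basic encrypt/decrypt step.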
Using recommended projects does not guarantee 100% protection against surveillance states; do research before trusting these projects with sensitive information.
Some argue that data privacy regulations with criminal penalties for corporations and governments engaging in surveillance may be the only solution to stopping mass surveillance.
The post discusses various privacy-focused tools and alternatives, but some commenters express skepticism about their efficacy and about the accuracy of certain claims.
Krita's 2016 funding figures revealed that sales on stores like Steam brought in significantly more than donations.
Users praise Krita's user-friendly interface, powerful brush engine, and suitability for painting and drawing, and some suggest it as a replacement for Paint.NET.
Commenters discussed the impact of government subsidies on the banking industry and the need for regulation or a basic core-banking function to prevent future crises.
Concerns about the risks of bank consolidation, concentration of corporate power, inflation spikes, and intergenerational wealth disparities were also raised.
Definitions of AI and ML are not used consistently across scientific work, which causes confusion for some.
Recommended resources for building knowledge of deep learning and neural networks include courses and practical exercises such as fast.ai, Neural Networks: Zero to Hero, and Neural Networks from Scratch; a minimal example in that from-scratch style is sketched below.
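To give a flavor of the from-scratch exercises these resources walk through, here is a small sketch of a one-hidden-layer network trained on XOR with plain NumPy; the architecture, data, and hyperparameters are illustrative assumptions, not code taken from any of the courses.

    import numpy as np

    # One hidden layer, trained on XOR with hand-written backpropagation.
    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden
    W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    lr = 0.5

    for step in range(5000):
        h = np.tanh(X @ W1 + b1)                # forward pass
        out = sigmoid(h @ W2 + b2)
        d_out = (out - y) / len(X)              # gradient of mean cross-entropy w.r.t. output pre-activation
        d_h = (d_out @ W2.T) * (1 - h ** 2)     # backpropagate through tanh
        W2 -= lr * (h.T @ d_out);  b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * (X.T @ d_h);    b1 -= lr * d_h.sum(axis=0)

    print(out.round(3))   # should approach [[0], [1], [1], [0]]

Courses like Neural Networks: Zero to Hero build up the same ideas far more carefully, including automatic differentiation.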
A new biological test, called the alpha-synuclein seed amplification assay (αSyn-SAA), can now identify Parkinson's pathology by examining spinal fluid from living patients, with over 90% sensitivity in people with typical Parkinson's pathology, even before onset of symptoms.
The test detects synuclein pathology, one of the two biological hallmarks of Parkinson's disease (alongside dopaminergic transport dysfunction), providing a novel tool for precision medicine approaches, earlier intervention, and improved clinical trial design.
With the ability to establish objective endpoints for clinical trials of Parkinson's treatments, the test will reduce the risk to industry of investing in potential blockbuster therapies, including preventive agents, and will increase the speed and efficiency with which these therapies can be developed, tested, and brought to market.
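For readers unfamiliar with the term, sensitivity is the fraction of true cases a test correctly flags; the quick sketch below uses made-up counts purely to illustrate how a figure like "over 90% sensitivity" is computed, and does not reproduce the study's data.

    # Hypothetical counts, for illustration only (not the study's data).
    true_positives = 92    # people with Parkinson's pathology the test flags
    false_negatives = 8    # people with the pathology the test misses

    sensitivity = true_positives / (true_positives + false_negatives)
    print(f"sensitivity = {sensitivity:.0%}")   # -> 92%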
Bay Area residents should be aware of groundwater contamination risks associated with computer manufacturing and dry-cleaning operations.
Exposure to toxins such as pesticides and tobacco products has been linked to developing Parkinson's; a study funded by the foundation is underway to find early biomarkers.
Researchers have identified a strain of Subdoligranulum bacteria that may drive the development of Rheumatoid Arthritis (RA).
Identifying this bacterium was the result of extensive research in which scientists screened blood from people at risk for RA or with early-stage RA for RA-related autoantibodies.
Mice given this bacterium developed a condition similar to human RA, and it produced RA-like symptoms without the addition of another immune insult, making it unique among RA-associated bacteria.
Geoffrey Hinton, one of the "Godfathers of AI" who won the 2018 Turing Award for his work on neural networks, has left his job at Google to speak freely about his concerns about AI risks.
Hinton is concerned about the spread of fake imagery and text, as well as the potential elimination of rote jobs, and believes that AI could eventually write and run its own code, possibly resulting in the end of humanity.
Hinton was largely happy with Google's stewardship of the technology until Microsoft launched its AI-powered Bing, compelling Google to respond in kind, which he believes could lead to a world where "nobody will be able to tell what is true anymore."
There is a debate about the accuracy of the reporting between The Verge article and the NYT article it was based on, with some arguing that the latter was misleading.
Hinton's departure and comments seem to be linked to concerns over the increasingly intense competition between tech giants in the development of AI technology.