The AI Dilemma
- 50% of AI researchers say there is at least a 10% chance of humans going extinct from our inability to control AI
- rubber band effect: the talk stretches your mind, but when you look away everything snaps back to old intuitions
- 3 rules of tech
- when you invent new tech, you uncover a new class of responsibilities
- if the tech confers power, it starts a race
- if you don't coordinate the race, it ends in tragedy
- social media was humanity's first contact with AI (a race for engagement; we lost)
- second contact with AI came in 2023 (a race for intimacy)
- maximize engagement
- did we ever fix this misalignment?
- GPT-3
- lots of AI positives
- negatives: weird biases and opinions
- in 2017 a new engine (the Transformer) was created
- AI used to be separate fields: speech recognition, robotics, computer vision, speech synthesis, etc.
- Gollem-class AIs suddenly have emergent capabilities
- the AI can see what you're thinking
- TikTok filters (catfishing)
- total decoding and synthesizing of reality
- what nukes are to the physical world, AI is to the virtual and symbolic world
- 2024 will be the last human election
- Gollem AIs have emergent capabilities their programmers didn't program
- programmers say they don't know why, how, or when these emergent capabilities appear
- new capabilities suddenly emerge
- AI develops theory of mind
- researchers only noticed it had been growing this ability a month ago
- RLHF (reinforcement learning from human feedback): advanced clicker training
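The clicker-training analogy can be sketched as a toy example (illustrative only, not from the talk; real RLHF trains a reward model from human preference comparisons and then fine-tunes the language model against it — the names and numbers below are made up for the sketch):

```python
import random

# Toy sketch of the RLHF "clicker training" idea: a model samples
# responses, a human rewards the ones they like, and the model's
# preferences shift toward rewarded behavior.

responses = ["helpful answer", "rude answer", "off-topic answer"]
weights = {r: 1.0 for r in responses}  # model's initial preferences

def pick(rng):
    # sample a response in proportion to current preference weights
    return rng.choices(responses, weights=[weights[r] for r in responses])[0]

def human_feedback(response):
    # hypothetical stand-in for a human rater clicking thumbs up/down
    return 1.0 if response == "helpful answer" else -0.5

rng = random.Random(0)
for _ in range(500):
    r = pick(rng)
    # reinforce rewarded responses, discourage the rest (floor at 0.01)
    weights[r] = max(0.01, weights[r] + 0.1 * human_feedback(r))

best = max(weights, key=weights.get)
print(best)  # → "helpful answer"
```

Because only the rewarded response ever gains weight, the model ends up strongly preferring it — the "clicker" shapes behavior without the model being told why.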
- Gollems silently taught themselves research-grade chemistry
- AI can make itself stronger
- how do you feed your Gollem if you run out of data?
- use AI to feed itself: transcribe YouTube, podcasts, radio
- prediction: AI would take four years to reach >50% accuracy on a benchmark
- reality: AI reached >50% in one year
- AI is beating tests as fast as they are made
- it's getting increasingly hard for even its creators to keep up
- democratization is dangerous
- Racing to deploy Gollem AIs into world infrastructure as fast as possible
- armies of Gollem AIs pointed at our brains, strip-mining us of everything that isn't protected by 19th-century law
- relatively few AI safety researchers (counted by venue)
- at least AI safety has substantial research
- the people creating AI feel it is not moving at a safe pace