Paper PDF: https://files.catbox.moe/f21jn8.pdf
His Notes:
2/ from gpt4 to AGI: counting the OOMs
- ai progress is rapid. gpt-2 to gpt-4 went from preschooler to smart high schooler in 4 years
- we can expect another jump like that by 2027. this could take us to agi
- progress comes from 3 things: more compute, better algorithms, and "unhobbling" (making models less constrained)
- compute is growing ~0.5 orders of magnitude (OOMs) per year. that's about 3x faster than moore's law
- algorithmic efficiency is also growing ~0.5 OOMs/year. this is often overlooked but just as important as compute
- "unhobbling" gains are harder to quantify but also huge. things like RLHF and chain-of-thought reasoning
- we're looking at 5+ OOMs of effective compute gains in 4 years. that's another gpt-2 to gpt-4 sized jump
- by 2027, we might have models that can do the work of ai researchers and engineers. that's agi (!!)
- we're running out of training data though. this could slow things down unless we find new ways to be more sample-efficient
- even if progress slows, it's likely we'll see agi this decade. the question is more "2027 or 2029?" not "2027 or 2050?"
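the OOM arithmetic in the bullets above can be sanity-checked with a quick back-of-the-envelope script. the compute and algorithm rates come from the notes; the unhobbling figure is a rough placeholder assumption, since the notes say those gains are hard to quantify:

```python
import math

# numbers from the notes above
YEARS = 4
COMPUTE_OOM_PER_YEAR = 0.5   # ~0.5 orders of magnitude per year
ALGO_OOM_PER_YEAR = 0.5      # algorithmic efficiency, same rate
UNHOBBLING_OOM = 1.0         # rough placeholder -- hard to quantify

# moore's law: density doubles roughly every 2 years
moores_oom_per_year = math.log10(2) / 2  # ~0.15 OOMs/year
compute_vs_moore = COMPUTE_OOM_PER_YEAR / moores_oom_per_year

total_ooms = (COMPUTE_OOM_PER_YEAR + ALGO_OOM_PER_YEAR) * YEARS + UNHOBBLING_OOM

print(f"compute grows ~{compute_vs_moore:.1f}x faster than moore's law")
print(f"effective compute gain over {YEARS} years: ~{total_ooms:.1f} OOMs "
      f"(~{10 ** total_ooms:,.0f}x)")
```

with these inputs the script gives ~3.3x faster than moore's law and 5 OOMs (~100,000x) of effective compute over 4 years, matching the "5+ OOMs" and "about 3x" claims in the notes.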
175 days ago
1 score