“That is the biggest danger I see in the future of AI: the capture of knowledge by a small number of companies through proprietary systems.”
For states, it is a national security concern. For investment managers and corporates, it is a dependency risk. If research and decision-support workflows are mediated by a narrow set of proprietary platforms, trust, resilience, data confidentiality, and bargaining power weaken over time.
LeCun identified “federated learning” as a partial mitigant. In such systems, centralized models avoid needing to see underlying data for training, relying instead on exchanged model parameters.
In principle, this allows a resulting model to perform “…as if it had been trained on the entire set of data…without the data ever leaving (your domain).”
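To make the mechanism concrete, here is a minimal sketch of federated averaging (in the spirit of FedAvg) on a toy linear model: each party trains on its own private data, and only the updated parameters are exchanged and averaged by a coordinator. The function names and the toy setup are illustrative assumptions, not part of any production framework.

```python
# Minimal federated-averaging sketch (FedAvg-style), assuming a toy
# linear regression model. All names here are illustrative.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Each party refines the shared weights on its own private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def federated_round(global_weights, parties):
    """One round: parties train locally and share only parameters.
    The coordinator never sees X or y -- only the updated weights."""
    updates = [local_update(global_weights, X, y) for X, y in parties]
    sizes = np.array([len(y) for _, y in parties], dtype=float)
    # Weighted average of parameter updates, weighted by dataset size
    return np.average(updates, axis=0, weights=sizes)

# Toy demo: two parties hold disjoint private datasets
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
parties = []
for n in (100, 150):
    X = rng.normal(size=(n, 2))
    parties.append((X, X @ true_w + 0.05 * rng.normal(size=n)))

w = np.zeros(2)
for _ in range(50):
    w = federated_round(w, parties)
print(w)  # approaches true_w without raw data leaving either party
```

Weighting each party’s update by its dataset size is the standard FedAvg choice; in practice, frameworks add secure aggregation and differential privacy on top of this basic exchange.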
This is not a lightweight solution, however. Federated learning requires a new kind of setup, with trusted orchestration between parties and central models, as well as secure cloud infrastructure at national or regional scale. It reduces data-sovereignty risk, but does not remove the need for sovereign cloud capacity, reliable power supply, or sustained capital investment.
