First off, from what I can tell, neither: it appears to be PC only.
Next, a video game is nothing like real combat. The closest case might be hyper-realistic flight simulators, and even there an AI doesn't particularly need human data. Even if game play did correlate with real combat, it would still be totally unnecessary, because training an AI that way would be seriously inefficient.
Let's look at an example of learning AI built in, for, and using games: OpenAI's Dota 2 bot (OpenAI was co-founded by Elon Musk).
(I don't play it myself, so apologies to those who do for a rough explanation.) Dota 2 is a multiplayer online battle arena: not exactly a first-person game, and while it resembles a real-time strategy game in many ways, it isn't quite that either. It is a highly competitive e-sport in which the professionals have dedicated many thousands of hours to training and improving.
In most competitive online games, there are recognized strategies ("builds"): patterns known to be effective in certain situations for certain purposes. Being good at the game often means knowing and memorizing these builds and executing them with high-precision timing and mechanical hand-eye skill, alongside the rapid strategic thinking to chain builds together for victory. In this way, these games are often reduced to what are effectively playbooks of viable moves, plans, and strategies.
Disclaimer: I read the OpenAI stories back when they happened, so my memory may not be perfect. Feel free to look into it yourself; it's pretty fascinating.
Enter OpenAI's bot, which spent extensive time training, primarily against itself, to win. It was put up against other players for live testing; simply feeding it data from someone else's player-versus-player (PvP) match is nearly useless, because this AI (like the overwhelming majority of modern game-playing AI) learns through trial and error in a way that resembles evolution: it pits variations of itself against opponents, keeps the variations that succeed, and discards the ones that fail. For that process to work, the AI has to be playing games itself, not just reading the records of other games.
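To make the self-play idea concrete, here's a toy sketch (the game and all names here are made up for illustration; OpenAI's actual system was vastly more sophisticated). The "game" is a guessing contest where whichever number is closer to two-thirds of the pair's average wins. A human-style opening guess sits around the midpoint, but repeated self-play, with mutations kept only when they beat the current champion, drives the policy toward the game's actual equilibrium at 0:

```python
import random

def play(a, b):
    """One round: the number closer to 2/3 of the pair's average wins."""
    target = (2 / 3) * ((a + b) / 2)
    return a if abs(a - target) <= abs(b - target) else b

def evolve(generations=200, seed=0):
    rng = random.Random(seed)
    champion = 50.0  # start from a "human-like" midpoint guess
    for _ in range(generations):
        # mutate the current champion, clamped to the legal range [0, 100]
        mutant = min(100.0, max(0.0, champion + rng.gauss(0, 5)))
        # self-play match: whichever policy wins becomes the new champion
        champion = play(champion, mutant)
    return champion

print(evolve())  # converges toward 0 with enough generations
```

No human match data is needed anywhere in this loop; the agent only ever plays itself, which is the point the post is making.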
The bot spent a few weeks playing matches against itself and improving before it was introduced more thoroughly into PvP situations. Eventually it reached a point of near invincibility; at that stage, the team brought in professionals to play against it. If I recall correctly, the first go-around saw the AI beat a few of them but lose to one of the world champions, or something like that.
Some time later, with more training, it came back and beat him, and it did so in a shocking way. Instead of winning by executing a build or using conventional strategies, it employed a totally unseen and unexpected line of play; it caught the player off guard because he didn't know how to counter such a maneuver.
The AI reigned supreme for a while and remained available for professionals to keep playing against (since doing so would also improve it!). Eventually a human beat it again; I'm not sure whether that's where the experiment ended or whether it's still running.
One reason it was likely able to execute an unheard-of strategy and win is that computer intellect has different strengths and weaknesses than our own. Computers are far better suited to memorization and rapid recall, as well as high-precision, high-speed inputs. A strategy may therefore work for a computer that wouldn't work for a person, if, for example, the person simply can't remember the necessary details or perform the required mechanical inputs.
What I'm getting at here is a few things:
1.) Basically no game data carries over outside the game. The Dota 2 bot wouldn't be any good at StarCraft or the like without undergoing fresh StarCraft training. In the same sense, shooters are totally useless for a real-combat AI.
2.) Even if that data did carry over, the AI would most likely be better off training against itself first. Not only is self-play faster and more efficient, it also lets the AI escape human paradigms.
3.) Recorded data is mostly useless in these cases, where direct "matches" would be astronomically more effective.
Regardless of the kind of evolutionary algorithm or training technique used, or the nature of the data collected, it is simply inefficient and impractical at every level to train a real-world combat AI from video game data. There are many reasons beyond the ones I covered, but those alone should be enough.