Building a Best-in-Class Game Streaming Experience on Project xCloud | Game Stack Live

– Hello, my name is Shawn Farkas, and I’m a developer in
the Project xCloud team. I work primarily on the developer hooks that we’re building so your Xbox games can be enhanced for game streaming. With Project xCloud and
Xbox console streaming, your Xbox games can now be played in ways that were never before possible. Today, I’d like to spend
some time talking about the tools and APIs we make available if you’d like to enhance your game when it’s being streamed from Xbox. First, we’ll talk about
getting your dev kit set up to build a Cloud Aware game. We’ll talk quite a bit about
the Touch Adaptation Kit which allows you to build
touch controls for your game without modifying its input stack. And we’ll spend a bunch of time discussing the Cloud Aware APIs, which enable your game to detect that it’s being streamed and enhance itself in any number of ways it would like. I like to think about enhancing
your game for streaming in three different phases. First is a console native game. And console native games
are just standard Xbox games that run in our data centers
or via console streaming. And this works because the data centers are effectively hosting Xboxes in racks. As far as the game knows, it’s
running like it would on any other Xbox. In the Xbox console streaming case, it actually is running on an
Xbox in someone’s living room. The Touch Adaptation
Kit allows you to build a set of touch controls without
modifying your game’s code. You describe declaratively the way you want those controls to look. The client application will render them and send XInput-style
inputs back to your game. Once your game becomes Cloud Aware, it can detect that it’s streaming and do all sorts of things. You might wanna adjust its
font size and UI layouts to match the device it’s
streaming to, for instance. And this will work both
with Project xCloud and with Xbox console streaming. Before you get started,
we’ll need to set up your development environment. Our goal in building this
development environment was to be as minimally
impactful as possible. We don’t want to impact
the tools and scripts that you already use
when you build your game. To that end, what we’ve simply done is enable a new mode on your dev kit, which enables it to act
as a streaming server. Of course, you’ll need
something to stream to. And that’s where the
content test application, or what I’ll call the
CTA throughout this talk, comes into play. The CTA is effectively just a version of the public game streaming app, but it has one special
mode called direct connect that allows you to connect
it straight to your dev kit. The goal here is that you don’t have to change your workflow. You have your existing tools and scripts, you have Visual Studio and PIX, and all of that’s gonna continue to work. The only thing that we wanted to change was to enable you to make
modifications to your game, to detect that it’s streaming, and just check how that
works right at your desk in your normal course of work. So let’s take a look at what you’d see when you get your dev kit set up. You can see here we’ve
got a new streaming tab on Dev Home on a dev kit on the left, and a direct connect page
on the CTA on the right. The first thing you need to do is check that the streaming protocol
version from the CTA matches the version that
the dev kit is set up for. And typically this is
already gonna be the case. However, when we roll out
new versions of the protocol what we’ll typically do is
leave the older versions available on the dev kit
so that you’re not required to upgrade both your Xbox and
your app in complete lockstep. The next step is to enter
your Xbox’s IP address in the direct connect
page of the CTA and connect. Now one implied requirement here is that the dev kit and the CTA need to be on the same network so they can communicate with one another. Depending upon your
studio’s network setup, you might need to work
with your IT department to get a network which can house both the Android device running the CTA and the dev kit hosting your game. Now with that out of the way, let’s get started making some
modifications to our game. At its core, the Touch Adaptation
Kit is just a technology that allows your users to
provide Xbox controller inputs to your game without using
a physical Xbox controller. Instead, we’ll render onscreen controls that the player interacts
with using touch. And those are converted to XInput-style controls for your game. That is, the Touch Adaptation Kit allows you to build customized
touch controls for your game without modifying your
game’s input or render stack. Now, of course, an Xbox controller has 18 distinct inputs on it,
including four analog inputs and that’s an awful lot to
place on the screen at once. Many games also require multiple buttons to be pressed at the same time, which may not be as easy to do using onscreen controls that are mapped to individual Xbox controller buttons. Now we could do it, and
this layout does represent every possible button that
a player can interact with, but it’s obviously not an
ideal way to play your game. And this helps to highlight why we’ve made touch controls opt-in. We don’t wanna allow
players to play your game in a way that’s not the most fun. So instead we’re gonna have game studios figure out the best way to
play their games with touch, and until they do that
we’re gonna have the game require a Bluetooth controller. Before we move on, however, we can see here that a fundamental part of the Touch Adaptation Kit’s design is player ergonomics
and customization. The places where a player’s
thumbs are most likely to be are highlighted by the gray bands here, and that’s where we
place the touch controls.

When you’re building a touch-adapted layout, what you’re doing is selecting a control, mapping it to some actions, and placing it into a socket, all described declaratively in JSON. Sockets can go in a variety of places: we’ve got primary wheels for the left and right thumbs, and we’ve got sockets at the top and bottom of the screen for lesser-used actions. And what this really does
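(As a rough sketch of what that JSON could look like. The field names, action names, and socket names here are made up for illustration, since the talk doesn’t show the kit’s actual schema:

```json
{
  "layouts": [
    {
      "name": "default",
      "controls": [
        { "type": "joystick", "output": "leftThumbstick", "socket": "leftWheel"  },
        { "type": "button",   "output": "gamepadA",       "socket": "rightWheel" },
        { "type": "button",   "output": "menu",           "socket": "top"        }
      ]
    }
  ]
}
```

Each entry picks a control type, maps it to the controller input it should emit, and names the socket where it lives.)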
is it allows the player to build up a muscle memory. They can figure out where
the control is going to be for each individual touch-enabled
game that they play. It also allows them to customize how they want the controls
to be on their devices so that they can play in a way that’s most comfortable for them. Now, when you’re picking where
your controls are gonna go, primary actions are really gonna be mapped to the two thumb wheels,
because that’s where the player’s thumbs are gonna be, that’s gonna be the most natural place for them to interact with your game. Lesser used actions go in
the top and the bottom. Typically putting the menu
and view button on the top makes a lot of sense. Putting secondary actions
like checking your inventory, something that you don’t
necessarily have to do in the heat of action on
the bottom also makes sense. You’ll also see that we reserved three sockets for system use. The Nexus button is always
in the center of the screen. The upper left of the screen has a button to interact with the gamesharing app. And we’ve reserved the upper
right button for future use. By placing controls into wheels, it allows the players to customize exactly how they want the
controls to be on their device, in a place that’s most
comfortable for them on each different way that they play. They can adjust the size, the rotation, even the relative
position of the controls. You can see here that this layout is being customized by the player, and there are two layouts in play here. And once they figure out exactly where they wanna put those controls, the app’s gonna remember
where those wheels are. And then when they switch layouts it’s gonna be exactly where
they expect them to be for all future games. And this is really one
of the main benefits of grouping controls into abstractions like the left and right wheels: it lets the player put the controls where they want them to be, without the app or the game having to worry about exactly where the most comfortable spot
for each player is. The Touch Adaptation Kit
supports a variety of controls, each of which can be
customized for your game. The most basic control is the button. And a button can be assigned
to one or more Xbox buttons. From the face buttons, A, B, X, Y, to even an analog button like a trigger. If you map a button to an analog, what will happen is it will report back that that trigger is fully
pulled down, for example. If you map it to a face button, we have a little special rendering where we’ll put a subscript that gives the name and color of that face button,
in addition to the icon. And that really helps us bridge a gap. For example, imagine your
game is displaying some onscreen help: “Press A to jump.” Well, the player doesn’t have
an A on their touch controls, but they do have the
subscript that lets them know this is the button that’s
being talked about. You can also map a button
to chords of buttons. So you could have a
single onscreen control, that’s the combination of, for
example, right button plus X, so players don’t have to use two thumbs to enter complicated
chords into your game. Next up is the arcade control. And this is laid out like
four to six buttons would be on an arcade cabinet. The advantage here is it lets you press two or three buttons at the
same time fairly easily. If you press between any two buttons, both of them get selected
at the same time. If you press the little
dot at the top or bottom, it selects three at the same time. And you can imagine this is really useful when you build the layout
for a fighting game. We support two different
types of joystick controls which are typically used
to map to the thumbsticks. At the top is just a standard joystick. And at the bottom is
what we call a touchpad which feels to a player a little bit more like a mouse trackpad might. And that makes sense,
because on a touch screen it’s a little more akin
to doing mouse-type input than it is to doing joystick-type input. Joysticks can also be
mapped to a single axis. So at the top here we
have a driving control which is mapped to
the horizontal axis only. Typically, if you’re using
a thumbstick to drive, you’re not wanting to go up and down. And by mapping this to a single axis, we’ve made it so that the player is more likely to give your game the input
that they actually want to give. At the bottom, we have a throttle control. And this typically maps really
well to an analog input. You can map it to, say, a trigger. And if you slide it up,
to 70% and let go, it’ll report to your
game that, for instance, the trigger is pulled
and held at that spot. We also support a D-pad
for games that really just wanna have a whole D-pad onscreen. Each of these buttons may have different icons associated with them. And this really helps bridge the gap between players that are used
to playing on mobile devices and may not be as familiar
with the A, B, X, Y of an Xbox controller. You can see here a subset of
the icons that are available. It lets you do things like have a jumping icon for the A button, have a steering wheel
icon for the thumbstick, that really helps players understand what the onscreen controls might do. Let’s take a look at some examples of what you could build with this next. A 2D Platformer typically
plays well with a D-pad. So we’ve mapped that to the left wheel. On the right wheel we’ve placed run, jump and attack
buttons mapped to the face buttons, as well as controls for trigger pulls. An arcade fighter is
also gonna use the D-pad, but here we wanna use the
arcade buttons control for the punches and kicks
that are typically used in a fighting game. For a driving layout, we
can use a steering joystick on the left, gas and
brake pedals on the right. In an adventure game, we might wanna put some auxiliary controls
around the primary wheels; using a touchpad to handle looking around while we have a joystick on
the left to control movement. In a first-person shooter, however, we can use those same
joystick and touchpads, but we might wanna move
all the auxiliary actions to the right hand since
we’re probably gonna be at our movement stick fairly
consistently during gameplay. Similarly, an action game might have a set of controls to use when
you’re locked on a target that puts movement on the left stick and a series of attack
face buttons on the right forgoing any sort of
analog input on that side. Let’s put this all together
and build a simple layout for the first-person shooter game from the Unreal Engine demo. On the left thumb wheel,
we’re gonna want a joystick to control moving around. And we’ll configure that to
map to the left thumbstick. If a player is moving their thumbs to the far outside of
the joystick control, we’ll wanna configure
it to additionally send the thumbstick click so
the player starts running. Shooter game’s controls, like a lot of first-person shooters’, transition the player to running when they see that thumbstick click. By enabling the click
to happen when a player simply moves their finger outside of the radius of the control, we’ve freed them from having to move their hand
off the joystick to press another button. Instead, it sort of naturally happens. As they move their finger a little bit further and further away from
the center of the control, the game gets a report that the thumbstick is
pushed further and further. That’s gonna speed up how the player moves, until the finger goes so far outside the control that we report to the game
that the player should be running. And that’s sort of a really natural touch paradigm for players. In the upper left corner,
we’re gonna add an icon mapped to the Y button to allow our player to switch their weapons. And with that in place, the
left wheel will look like this. On the right wheel, we’re
gonna want a touchpad to use for looking around. We’re gonna configure it with
a dead zone that the game uses so that movement on the touchpad translates directly into
movement in the game. We’re also going to increase
sensitivity slightly so we don’t have to move
our fingers very far to look around in the game. Around that touch wheel, we’ll add some basic buttons to fire, reload, jump, and aim down our sights. And since we’ll frequently want to aim down our sights
for a period of time, we’re gonna make that
button a toggle button so the player only has to tap
it in order to start aiming and tap it again to stop. They won’t have to keep pressing the touch point on their screen while they’re targeting their enemy. And with that, we’ve now got
both thumb wheels configured. All that’s left now is to put the menu and view buttons at the top of the screen, and we should have the start of a touch-adapted layout
to play shooter game with. This one’s really basic. And you’ll wanna do some play testing to make sure you’re building a fun set of touch controls for your game. But with just a few minutes of work and without modifying
any of the game’s code, we were able to build a
really simple touch layout to play a shooter game with. The Touch Adaptation Kit allowed us to let our game receive touch inputs without any modification. However, we can do many more
interesting things to our game once we crack open its code and make use of the Cloud Aware APIs. So let’s talk about those next. These APIs are designed so that you can enhance your gameplay when your game is being streamed, either from Project xCloud or
with Xbox console streaming. Some of the modifications you
might want to make include detecting that you’re
streaming to a phone, and adjusting your font
sizes and UI layout to work better on a smaller screen. When a player disconnects from your game, you might want to trigger
a save automatically. For example, if I’m commuting somewhere. When I get where I’m going,
if I’m not at a save point, I’m unlikely to be able
to continue to play and it’s frustrating if
I lose all my progress. If your game has multiple
touch-adapted layouts, maybe one for shooting
and one for driving, you can ask the streaming
client to toggle between them. You can even build some
native touch interfaces to respond to mobile
gestures more naturally. You might consider having
two sets of settings: one for when I’m playing on my TV and one for when I’m streaming. This could allow me to change things like my auto aim sensitivity or turn on steering assist
when I’m playing on a phone, but playing with my
preferred difficulty settings when I’m playing at home. You might also wanna consider saving different gamma settings; my TV at home is unlikely to have the same display properties as my phone. There are basically four main feature areas that are available in the API set today. The first one is touch controls, which allow you to both
build touch-adapted layouts and also add native touch to your game. Streaming client management
allows you to know when someone has
connected to or disconnected from your game. And the streaming client properties APIs allow you to find out more
information about that device. Now the latency measurement
APIs let you measure how far your client device
is away from your game. Since our Project xCloud
servers are hosted in the Azure data centers
which have a global footprint, we’re able to take advantage
of that to place your game as physically close to
your players as possible. However, the speed of
light, being what it is, means that the time
between when your player presses a button and when your game sees it, and the time between when you render a frame and when your player sees it, are gonna be slightly longer than on a living-room Xbox. Typically, games work really well. But if you wanna apply some algorithms that you know from your multiplayer work, the latency measurement APIs
will let you do just that. Typically, when a game has
different touch-adapted layouts there are gonna be a whole set of them that need to be applied
throughout the game. If we think of a game
like Halo, for example, the layouts that I’m using
when I’m running around as Master Chief are
likely gonna be different than the layouts that I use
when I’m in the Warthog. There might be a whole different layout when I’m in the back of the
Warthog using the cannon. Well, how do I make this work? The first thing I need to do is build all of the layouts that are
necessary for my game to work. I would build layouts
from first-person mode, for the driving mode and
any other auxiliary modes. I’ll bundle those up into
a touch adaptation bundle and upload them to the
Project xCloud servers. Next, there’s a set of APIs
that are used at runtime to select between the different layouts. This way, the client device
knows exactly what layout matches the state of your game
at any given point in time. Earlier, I gave an example about switching between the layouts if
you’re going between driving and action portions of your game. However, it’s also possible
you might wanna switch layouts for more subtle state changes. Let’s go back to shooter game. Now here we have a touch-adapted layout that has both a reload and a shoot button, because the player has extra ammo clips and their current clip is not full. But if the player runs out
of all their extra clips, they can no longer reload. Now we could keep the
reload button on the screen and allow the game logic to reject the request to reload the weapon. Or we can provide a
visual hint to the player by switching to a similar layout which simply removes the
reload button altogether. Similarly, when the player
runs out of ammunition we might just wanna remove the fire button to provide another visual hint that they need to go find more ammunition if they wanna continue playing the game. Now when they pick up another clip we’ll probably choose a layout which supports shooting again, putting the fire button
back on the screen. If we’ve built a
touch-enabled menu system, we might wanna dismiss the
onscreen controls entirely, allowing the player to interact with the menu more naturally.

To build this, it’s actually fairly straightforward. We’ve hooked a few key points in the game logic: things like “I’ve gotten more ammo,” “I’ve run out of ammo,” “I’ve reloaded my weapon,” or “I’ve picked up a new weapon.” And at those points we send
a message to the client app that lets it know the touch control layout which best reflects our current state. Now you wanna strike a balance here. While it’s okay to send
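(In code, those hook points can be as small as a switch that maps game state to a layout name. This is a self-contained sketch: the state names are hypothetical, and the send function is a stand-in counter rather than the actual client-messaging API:

```cpp
#include <string>

// Stand-in for sending a "show this layout" message to the streaming
// client; the name and signature are placeholders, not the real API.
static int g_layoutMessagesSent = 0;
static std::string g_lastLayoutSent;

void SendTouchLayoutToClient(const std::string& layoutName)
{
    ++g_layoutMessagesSent;
    g_lastLayoutSent = layoutName;
}

// Hypothetical weapon states hooked in the game logic.
enum class WeaponState { CanShootAndReload, OutOfSpareClips, OutOfAmmo };

// Map each state to the touch layout that best reflects it.
std::string LayoutForState(WeaponState state)
{
    switch (state)
    {
    case WeaponState::CanShootAndReload: return "shoot-and-reload";
    case WeaponState::OutOfSpareClips:   return "shoot-only";
    case WeaponState::OutOfAmmo:         return "no-fire";
    }
    return "default";
}

// Called from the hooked points ("I've reloaded", "I'm out of ammo", ...):
// one message per state change, never one per frame.
void OnWeaponStateChanged(WeaponState state)
{
    SendTouchLayoutToClient(LayoutForState(state));
}
```

Because the send only happens inside the state-change hooks, you get the per-change message cadence described below for free.)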
a message to the client that reflects the same touch controls they currently have,
which is effectively a no-op, we really don’t wanna
send an update every frame, because each update is a network message across to the client. Instead, the balance that
we decided to go with here is every time a state change occurs we’ll send a message to the client. And what we can observe here is that we’re simply
informing the client device as to the best set of
controls that matches our current state right now. The client device does the rest for us. So imagine the case that the player is using a Bluetooth controller instead of touch controls for the game. In that case we don’t wanna be popping touch controls on and off the screen, ’cause it’s gonna be very distracting. Instead, the client app notices the player is not using touch controls, or maybe they’re on a device that doesn’t even support touch. In that case they’re
not gonna display them in response to the message. Instead, they’ll say, “Okay, the player is not using touch controls right now. I’ll make a note of which controls make the most sense to put up if they decide to switch to them in the future, but for now I won’t display those.” This way your game
doesn’t have to keep track of the state of the input
on the client device. Instead, you just inform us:
if the player is using touch, this is the best set of touch controls for them to use right now. With those changes in
place, you can see how we can provide context-sensitive
controls to the player, providing them with
hints as to what actions are available to them in the game by adding and removing controls that match the current game state. Now we’ve been talking quite a bit about using the Touch Adaptation Kit to create a set of touch
controls for your game. And this works great for
the parts of your game
that really just want to see controller inputs. For example, a racing game could use the Touch Adaptation Kit to build an interface
that has a steering wheel and gas pedal sliders for the majority of its gameplay. However, our user research
tells us time and again there are some things
that happen during a game that players just really
wanna interact with using natural touch motions. When a menu pops up on the screen, players wanna tap it to interact. When a map comes on the screen, players wanna use natural
mobile phone gestures. They wanna pinch to zoom
and swipe to scroll around. Our SDK allows you to intercept
phone inputs and provide mobile-first UIs or
touch-first UIs for your game. So, for instance, you can intercept that the player has tapped the screen and use that to build a touchable menu. You can get multiple touchpoints and use that to build a map
that’s interactable with swipe. So let’s take a look
at how we might do this with the Unreal Engine shooter game demo. Now, on the ERA and the
XDK, the inputs for touch look exactly like the inputs for mouse. So we’ll start by hooking
the OnPointerPress, OnPointerMove and OnPointerRelease events. And inside of those
we’ll make a note to see, “Did I just get a touch?” And if so, we’ll keep track
of where that touch happened. Then, when we’re processing the input queue, we can notice we have a touch event. If we do, we’ll just move the cursor to where the player touched the screen and hand off to the existing mouse-handling code in the game. And in fact, with no
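(Here is the shape of that hook as a self-contained sketch. Every name is a stand-in for the game’s own input plumbing, not an engine or SDK API:

```cpp
#include <optional>

// Stand-in types for the game's input plumbing.
struct Point { float x = 0; float y = 0; };

static std::optional<Point> g_pendingTouch;  // noted in the pointer events
static Point g_cursor;                       // the game's mouse cursor
static int   g_mouseClicksHandled = 0;

// Hooked from the pointer-press event: just note that a touch happened,
// and where.
void OnPointerPress(float x, float y, bool isTouch)
{
    if (isTouch)
        g_pendingTouch = Point{ x, y };
}

// The game's existing mouse-handling path, unchanged.
void HandleMouseClickAtCursor() { ++g_mouseClicksHandled; }

// In the input queue: if a touch was noted, move the cursor to the touch
// point and hand off to the existing mouse code.
void ProcessInputQueue()
{
    if (g_pendingTouch)
    {
        g_cursor = *g_pendingTouch;
        g_pendingTouch.reset();
        HandleMouseClickAtCursor();
    }
}
```

The touch path converges on the mouse path at the last possible moment, which is why the rest of the menu code never has to change.)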
changes to the game code, we have a menu system that responds to touch events from the player. And because we have no
changes to the game code in order to make this work, we had a little bit of free time to add a few other enhancements. So, for instance, we were able to enhance the size of the menu to make it look bigger on a mobile device. We were able to add buttons to interact with the different menu
options to change Booleans and to increase and
decrease numeric values. We’ll see more about how to
do this in a few minutes. The most fundamental thing
you’re gonna wanna do when you’re working with the Cloud Aware SDK is detect that someone is
connected to your stream and they’re now streaming your game. And that’s where the Streaming
Client Management APIs come into play. The most fundamental of these is XGameStreamingIsStreaming which returns true if anyone is currently streaming your game. But let’s think about the
console streaming scenario. With Xbox console streaming, I might fire up a game
on my couch at home, play it for a little while and
realize I’m late to dinner. So now I’m gonna grab my phone, hop in the back of a ride share, and connect back to my Xbox to continue playing. Well, in that case my
game has transitioned from not streaming to streaming. Now, similarly, when
I get home from dinner I’m probably gonna wanna keep
playing in my living room. And in that case I might
disconnect my phone, and the game has transitioned from streaming to not streaming. So that is to say that
XGameStreamingIsStreaming is not a constant; it’s a
point-in-time statement. XGameStreamingRegisterConnectionStateChanged can be used to get real-time notifications when these connect and
disconnect events happen so that your game can
internally monitor its state and know someone has
just started streaming and somebody has just stopped streaming. So let’s talk through a typical flow of how a game might
monitor for these devices connecting and disconnecting. The first step is just to
register for notifications when a device has connected to and disconnected from your game. Here, we’ll shuttle those off to game implementation functions that handle the connect
and disconnect logic. When a device connects, we’ll want to keep track of it. In this case we allocate a game-specific data structure to track the streaming client and stash it in a map indexed by its ID. So in this game, that means XGameStreamingIsStreaming is equivalent to checking whether there’s any data in the map; in either case, we know that someone is streaming. The ID is an interesting
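(A minimal sketch of that bookkeeping. The types are stand-ins; in the real API set the connect and disconnect notifications come from XGameStreamingRegisterConnectionStateChanged, and the ID is the GDK’s streaming client identifier:

```cpp
#include <cstdint>
#include <unordered_map>

// Stand-in for the GDK's streaming client id.
using StreamingClientId = uint64_t;

// Game-specific data tracked per connected client.
struct StreamingClientInfo
{
    uint32_t widthMm  = 0;  // filled in later from the client properties
    uint32_t heightMm = 0;
};

static std::unordered_map<StreamingClientId, StreamingClientInfo> g_clients;

// Hooked up to the connect/disconnect notifications.
void OnClientConnected(StreamingClientId id)    { g_clients[id] = {}; }
void OnClientDisconnected(StreamingClientId id) { g_clients.erase(id); }

// With this bookkeeping, "is anyone streaming?" is just "is the map
// non-empty?", the same answer XGameStreamingIsStreaming would give.
bool IsAnyoneStreaming() { return !g_clients.empty(); }
```

Keying the map by the client ID is also what makes the per-device settings idea below fall out naturally.)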
thing to talk about, ’cause what happens is
we will make sure that ID is identical for the same
client playing your game. So if I’m playing on my phone
and I play your game once, it’s gonna have some ID; if I disconnect and connect again tomorrow, I’m
gonna have the same ID. But if I connect from a different phone, I will have a different ID. And this allows you to
save per-device settings. So if I have two different phones with two different gamma settings, you could potentially have
two different sets for that and know which one I’ve connected with. The next thing we’re gonna do
is register for notifications when the properties of
the client have changed. Here, we’ll find out
the size of the device that just connected to us. And finally, we’ll update the
device that just connected with what the current touch
control layout should be. Now if a race is about to
start and I connect in, I’d like for the game to tell me to put up the touch controls
for driving, for instance. Now when a client device disconnects what we’re gonna wanna
do is take some action to clean up our state that’s
associated with that device. We’re also gonna get a
controller disconnect associated with that client. And that works really great
for console native games that aren’t connected
to this console state or client state management system. However, if you wanna do something extra, like maybe trigger a save game, this is an excellent place to do that. On Project xCloud, the
game is gonna stay alive for a few minutes after
the disconnect happened so the player who hits a
temporary disconnect state, maybe they’ve driven through a tunnel, they’re gonna be able to
recover their session. So there’s plenty of
time for you to do things like trigger saves. You should also be aware
that you might receive more disconnects than you
might initially think. For instance, if I receive a
text or I answer a phone call, that’s gonna trigger a
disconnect from the game and then a reconnect when
I open the app back up. There isn’t anything special
you need to do about this. You should just be aware that
you might see more disconnects than you would initially think when you’re walking
through your game logic. After a streaming device
connects to the stream, you’re probably gonna wanna find out some more information about it. And that’s where the client properties APIs come into play. In the API set that’s currently shipping, there are two properties for you to query. You can get the physical
size of the screen that’s streaming your game. And that allows you to do things like change your UI layout to
better match the device size. You can also find out if the
device supports touch at all. And that’s useful if you’ve done, for instance, a native
touch-enabled UI on your side. If the device is not able to
ever send you a touch event, you might not wanna put up that UI at all. I do wanna talk for a minute about the physical dimensions that are reported by
XGameStreamingGetStreamPhysicalDimensions. These dimensions are
the width and the height of the video frame of
your game in millimeters. That is the same aspect ratio
as your game is rendering, not the aspect ratio of the
device you’re rendering on. So imagine a phone, for
instance, with screen dimensions that are 150 millimeters
by 65 millimeters. On that device, your game
might be rendered pillarboxed, maybe it’s 110 millimeters
by 62 millimeters. And those are the values
that are returned by the API. You get the 110 by 62
size of your game’s image, not the 150 by 65 size of the screen. Now we can make an observation
about DPI using this. Since your game is
rendering to a 1920 by 1080 back buffer on the Xbox, we
can calculate a virtual DPI by dividing that resolution by the physical size of our stream. In this case it’d be on the order of 440, rather than the DPI of the screen
the game is being displayed on. Let’s go back to that
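(The arithmetic is simple enough to sketch directly. The 1920-pixel and 110-millimeter figures in the usage are the talk’s pillarboxed-phone example values:

```cpp
// Virtual DPI of the stream: divide the back-buffer resolution by the
// physical stream size in millimeters, as reported by
// XGameStreamingGetStreamPhysicalDimensions.
double VirtualDpi(int pixels, double millimeters)
{
    const double inches = millimeters / 25.4;  // 25.4 mm per inch
    return pixels / inches;
}
```

VirtualDpi(1920, 110.0) comes out around 443, the “order of 440” mentioned above.)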
shooter game demo again and put in some simple
code that will detect that we’re running on a small screen and adjust the font sizes the game is using to compensate. Now here’s a basic function we call whenever we get a notification that the screen size has changed. Here, we just do something really simple: we say screens that are less
than six inches are small; between six and 12 are
medium, maybe that’s a tablet; and above 12 is a large screen, maybe we’re attached to a monitor. With that in place, you can
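(A sketch of that classification, working from the stream’s physical dimensions in millimeters. The thresholds are the ones just described; the function itself is illustrative, not the demo’s actual code:

```cpp
#include <cmath>
#include <string>

// Bucket the stream size: under six inches diagonal is small, six to
// twelve is medium, above twelve is large.
std::string ScreenSizeBucket(double widthMm, double heightMm)
{
    const double diagonalInches =
        std::sqrt(widthMm * widthMm + heightMm * heightMm) / 25.4;

    if (diagonalInches < 6.0)
        return "small";
    if (diagonalInches <= 12.0)
        return "medium";
    return "large";
}
```

Fed the 110 by 62 millimeter stream from the earlier phone example, this reports “small,” which is what triggers the font scaling.)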
see how we scaled the text to be more readable on a mobile screen when someone’s streaming our game. The original text sizes are on the left, the rescaled versions on the right. This additionally has a side effect of making the hitboxes for
touch a little bit easier to use when you’re in the menu as well. Now, of course, your designers might wanna do all sorts of things in response to knowing the size
of the screen they’re on. This is just a really basic example. Another thing you can do
with the Cloud Aware APIs is detect which Azure data
center you’re located in. And this will let you do things like co-locate your game servers inside the same data center, or matchmake between players
that are located together so that they get nearly
zero ping between them. One interesting thing to note is that we noticed some bugs when we were
bringing up Project xCloud. Some games that were
measuring the distance between themselves and their servers were seeing that they were
getting a zero millisecond ping and not reacting to that well, not thinking that zero
milliseconds was possible. And there’s different ways
you could handle this. You could just realize that zero milliseconds is a possible ping between your Xbox game and the servers that you're talking to. Maybe you wanna do the math in microseconds. Or maybe just the front-end code that's doing that ping knows that the rest of the code is not gonna handle zero very well and rounds it up to one when it detects it.
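That last workaround might look something like this minimal sketch. The function name is made up for illustration; the only idea it encodes is "never report a zero ping to downstream code that can't handle it":

```cpp
#include <cassert>
#include <cstdint>

// Clamp a measured ping so code that assumes a nonzero round trip keeps
// working. Zero milliseconds is entirely possible when the game and its
// servers share an Azure data center.
uint32_t SanitizePingMilliseconds(uint32_t measuredMs) {
    return measuredMs == 0 ? 1 : measuredMs;
}
```

Doing the math in microseconds instead, as the talk suggests, avoids the clamp entirely.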
Now, incidentally, this API is the one which is affected by the data center region name configuration setting we saw on the streaming tab of Dev Home earlier. By default,
XGameStreamingGetServerLocationName is gonna return that your
dev kit is not a server. It's gonna return a null string. But if you wanna test some code you've written that tries to co-locate servers, or that tries to do something interesting with the Azure data center, you could put a string here and we'll pass that value verbatim back to your game. And that's literally the only thing that this text box does. We're not gonna do other things like try to set up latency to match what you would see in that data center. It's just changing the location name.
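If you do write code against this, a small defensive check keeps dev kits and consoles, where the location name comes back null or empty, on the right path. This helper is a hypothetical wrapper around that idea, not a GDK API, and the region string below is just an illustrative value:

```cpp
#include <cassert>

// Interpret the server location name: a null or empty string means the
// machine is not a server (the default on a dev kit, unless overridden on
// Dev Home's streaming tab), so don't try to co-locate against it.
bool IsRunningInDataCenter(const char* locationName) {
    return locationName != nullptr && locationName[0] != '\0';
}
```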
Next up, let's talk about our latency mitigation and measurement APIs. One thing you might notice, based upon what we were
just talking about, is that the latency between two Xboxes or latency between an
Xbox and a backend server might be significantly lower
than what you're used to, especially if they're both in the same data center. What this means is that if
you already know algorithms that help mitigate latency
between Xboxes and their servers, you have a chance to
reuse those algorithms just in a different location in your game. For example, you might want to apply some of those mitigation
algorithms between your game and the player as opposed to between one person playing your game at one Xbox and a different person on the next Xbox. Let’s take a look at
some of those APIs next. Now, of course, to do that,
you’ll need to be able to measure the latency
your game is observing. And that’s where
XGameStreamingGetStreamAddedLatency comes into play. It’s gonna return to you the number of microseconds of
latency that are observed between when a controller input is pressed and when it’s provided to
the Xbox running the game. It also measures the
number of microseconds of latency observed between
when the game renders a frame and when the player sees that frame. The third value it provides
is a standard deviation. Now the input and output latencies are provided as an average
over the last several seconds, while the standard deviation
lets you know the jitter that’s inside of that average. So if you see a high standard deviation, maybe it’s eight milliseconds or more, that means that the network connection between your player and your
game is pretty variable. The data points that went
into calculating that average are kind of more widely spread apart. If you see a lower jitter between them, it might mean that those data points are more tightly packed together and the average that you get
is a little more actionable. So if you’ve called this API and you do see a high jitter value, you might wanna go ahead
and wait a few seconds and call it again to get a more reliable view of what the player is seeing.
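As a sketch of how you might consume those three values, here's a hypothetical struct and a couple of helpers. The names and layout are made up and are not the actual GDK signatures; only the units (microseconds) and the roughly eight-millisecond jitter threshold come from the discussion above.

```cpp
#include <cassert>
#include <cstdint>

// Illustrative stand-in for the values XGameStreamingGetStreamAddedLatency
// is described as returning: two rolling averages plus a standard deviation.
struct StreamAddedLatency {
    uint64_t averageInputUs;   // controller press -> available to the game
    uint64_t averageOutputUs;  // frame rendered -> shown on the client
    uint64_t standardDeviationUs;
};

// Total latency the stream adds on top of the game's own frame time.
uint64_t TotalAddedLatencyUs(const StreamAddedLatency& l) {
    return l.averageInputUs + l.averageOutputUs;
}

// A standard deviation of ~8 ms or more suggests a variable connection;
// wait a few seconds and sample again before acting on the averages.
bool ShouldResample(const StreamAddedLatency& l) {
    return l.standardDeviationUs >= 8000;
}
```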
Now let's talk about exactly what's being returned to you by this API. When a player plays the game, they're gonna press the button on their controller, which is gonna go over a Bluetooth connection to their phone. The phone is gonna pass it to the app, package it up and send
it over the internet to the console made
available to your game. The Bluetooth controller latency is not included in the input latency, simply because there's really just not a great way for us to measure that. But the time between when
the app receives the input and when it’s available on
the Xbox for your game to see, that’s what’s being
measured as input latency. Once your game receives the input, it’s gonna run its game logic, it’s eventually gonna render
a frame that responds to it. Maybe the player pressed jump and eventually you render a frame that has the first jump
animation frame in it. And that time between when
your game receives the input and when it renders the output of it is not included in either
input or output latency, because remember we’re
measuring stream added latency. The time it takes your game to run is gonna be the same whether
you’re streaming or not. Well, once you’ve rendered the frame it’s gonna go through our video encode, get packaged up to send
back over the internet, get decoded on the client device, and then played on the player's screen. And the time between encoding, sending it and decoding it is what's returned as the output latency. We don't measure the time that it takes to go onto the screen simply because, again, there's not a great way for us to measure that on every client device. Now that diagram I showed you was obviously a lot more fine-grained than just input and output latency. So in the future, we're adding a new API that allows you to get
more detailed statistics about the frame. And this really gives you a choice. If you wanna work on the higher level, more bundled input and
output latency level, you have the option to do that. If, however, you wanna dig in and figure out exactly what the encode duration was, exactly what the decode duration was, you may do that as well. And in fact you might wanna mix and match. Some of your game logic might wanna work on just the input latency average and output latency average numbers while you might have some diagnostic code that wants to see the real details. This API is not available right now, but it's coming in the future
and it’ll let you choose how you want to implement your game logic. Of course, if you’ve
written any code like this, you’re gonna need to test it. And to do that, we’ve
augmented the XbStress tool with game streaming profiles. We've got profiles in there that set up the latency we've seen at the minimum requirement to play on Project xCloud, at the 25th percentile we've observed inside our preview, and at the average we've
seen inside our preview. And we’ve kinda kept
things abstract this way so that we can keep the numbers up-to-date as our preview continues. And we can keep things so
that you'll really be able to test to see what the real-world Project xCloud environment looks like. Before we finish, I wanna
talk about another API that's coming in the future. This API lets you pass tokens from the server to the client and back to the server, to let you see and understand exactly what the client was seeing when they provided input. Let's imagine the situation where a player is playing your game with high latency. Maybe your game simulation is detecting that the player is aiming at an enemy, but that frame hasn't yet made
it to the player’s device. Well, when you render that frame you can render it and tag it to say, “Yes, on this frame the
player’s targeting the enemy,” that tag gets sent along with the frame to the client device. Now, because of the
latency of the connection, your game simulations move forward and the player’s no longer
aiming in the right spot. But as this is happening,
the player thinks they are at the right spot. So they pull the trigger. When they pull that trigger, we send the state that you
sent over to the device for the frame that’s
currently on the screen back to the game. And now you have a choice. You can process either
what the current state is in your game simulation, or you can process what the state was in the frame that the player sees. And in this way you can decide: “Should I treat this as a hit,” since the player thought they hit the enemy, or, “Do I need to do something different?”
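A deliberately simplified sketch of that choice might look like this. Everything here is hypothetical, since the token-passing API itself is still to come; the struct stands in for the tag you'd attach to a rendered frame and get echoed back with the player's input.

```cpp
#include <cassert>

// The tag your game attached to the frame the player was looking at when
// they pulled the trigger (echoed back by the client).
struct FrameTag {
    bool playerTargetingEnemy;
};

// Decide the hit by favoring the player's view: if they were on target on
// the frame they actually saw, count the hit even though the simulation
// may have moved on since that frame was rendered.
bool RegisterHit(bool simulationSaysOnTarget, const FrameTag& echoedTag) {
    return echoedTag.playerTargetingEnemy || simulationSaysOnTarget;
}
```

Whether to favor the client's view this way is a design decision your game would have to make per mechanic.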
Now obviously your game state's not gonna be something as simple as “I'm targeting the enemy” or “I'm not,” but as you can see in the example, this is the type of thing you might be able to do with this token-passing API, which is coming in a future SDK update. And that wraps up our quick tour of the APIs available today
to make your game Cloud Aware. Hopefully that gave you some inspiration for experiences you
could build in your game, to enhance it when it’s
being streamed from Xbox. Now we know our API set's not done, and if you did get an
idea of an experience you’d love to build into your game but you didn’t see the tools or APIs there that you would need to do it, we would love to hear about it. We’re definitely open to adding new sets of capabilities for you to build the experiences
you want for your game. If you're interested in learning more about how the Project xCloud Public Preview has been going over the last year, you might wanna check
out another presentation given by my colleague, Ray Cohen. Ray goes through some of the
things that we’ve learned during the first year of running the Project xCloud Public Preview. Otherwise, thank you for your time.
