The critical driver for WebRTC-based customer interactions in the enterprise will be the ability to streamline and contextualize those interactions. Let me explain by way of an example. You may remember seeing TV ads for the Kindle Fire HDX around the holidays last year. The touted feature of Amazon's newest tablet is something called the Mayday button. When an end user hits the Mayday button, the device directly establishes a video call to a tech advisor in an Amazon contact center. That tech advisor can hear the customer and can even draw on the screen of the device.
The obvious improvement over a traditional approach is the fact that several steps have been removed for a customer trying to reach Amazon. On a non-Mayday-enabled Kindle, the user would have to open a browser, go to Amazon’s site, look for a Contact Us page, hope to find a phone number, then find a phone and call that number. Hitting the Mayday button simplifies that whole process down to a single step, which is a huge win. Furthermore, the customer sees video of the contact center agent (the agent only gets audio back from the customer) and can draw directly on the screen of the device, which improves the quality of the interaction over an audio-only PSTN-based interaction.
However, there is another, more subtle—and potentially more interesting—improvement going on as well. When the end user initiates a Mayday call, a lot of context about that user is available to Amazon. They know who the user is (via their Amazon login) and can use that to look up account details. They know what device the user is calling from and which apps and screens the user was looking at when they hit the button. All of this context can be used to automatically route the call to the correct agent based on skills and availability. Compare this to calling into an interactive voice response (IVR) system, navigating menus to reach the right department, typing in your account number or other details on the keypad, or relying on agent screen pops keyed off the caller ID of the call. Those approaches are clumsy in comparison to the Mayday example.
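As a rough illustration of what context-driven routing can look like, here is a minimal sketch in TypeScript. All of the field names, types, and the routing rule are hypothetical—they are not Amazon's actual Mayday API—but they show the core idea: the context captured at call initiation replaces the IVR menus and keypad entry entirely.

```typescript
// Hypothetical sketch of context-based call routing. Field names are
// illustrative assumptions, not any vendor's real API.
interface CallContext {
  accountId: string;    // known from the user's login -- no keypad entry needed
  deviceModel: string;  // known from the device placing the call
  activeScreen: string; // the app/screen the user was on when they hit the button
}

interface Agent {
  id: string;
  skills: string[];     // e.g. which product areas the agent supports
  available: boolean;
}

// Route to the first available agent whose skills cover the screen the
// customer was looking at -- skills-and-availability routing driven by
// context rather than by IVR navigation.
function routeCall(ctx: CallContext, agents: Agent[]): Agent | undefined {
  return agents.find(a => a.available && a.skills.includes(ctx.activeScreen));
}
```

In a real contact center the routing logic would of course be far richer (queueing, load balancing, CRM lookups against the account ID), but the key design point survives even in this toy version: the call arrives already carrying everything the routing engine needs.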
The Mayday button is an example of embedded real-time communications, and we are in the very early stages of seeing this type of approach in the market. As WebRTC browser and mobile compatibility issues are worked out, I anticipate businesses will increasingly move to embed WebRTC-based communications directly into their web and mobile applications. The driver for this will be the improved customer interactions it makes possible, not the cost savings associated with bypassing the PSTN.
At Thinking Phone Networks, we are working on an embeddable WebRTC-based UC client that customers will be able to put directly into their applications. This embeddable UC client will integrate with our cloud-based contact center and other cloud-based UC endpoints to streamline and contextualize customer interactions for our business users, allowing them to greatly enhance the overall customer experience.