These are the news items I've curated in my monitoring of the API space that have some relevance to the API definition conversation, and that I wanted to include in my research. I'm using all of these links to better understand how the space is testing their APIs, going beyond just monitoring, and understanding the details of each request and response.

12 Oct 2017
Around 2010, the world of APIs began picking up speed with the introduction of the iPhone, and then Android mobile platforms. Web APIs had been used for delivering data and content to websites for almost a decade at that point, but their potential for delivering resources to mobile phones is what pushed APIs into the spotlight. The API management providers pushed the notion of being multi-channel, and being able to deliver to web and mobile clients, using a common stack of APIs. Seven years later, web and mobile are still the dominant clients for API resources, but we are seeing a next generation of clients begin to get more traction, which includes voice, bot, and other conversational interfaces.
If you deliver data and content to your customers via your website and mobile applications, the chances are increasing that you will also be delivering it to conversational interfaces, and to the bots and assistants emerging via Alexa and Google Home, as well as on Slack, Facebook, Twitter, and other messaging platforms. I'm not selling the idea that everything will be done with virtual assistants and voice commands in the near future, but as a client we will continue to see mainstream user adoption, with voice being used in automobiles and other Internet-connected devices emerging in our world. I am not a big fan of talking to devices, but I know many people who are.
I don't think Siri, Alexa, and Google Home will live up to the hype, but there are enough resources being invested into these platforms, and the devices that they are enabling, that some of it will stick. In the cracks, interesting things will happen, and some conversational interfaces will evolve and become useful. In other cases, as a consumer, you won't be able to avoid the conversational interfaces, and will be required to engage with bots and use voice-enabled devices. This will push the need to have conversationally literate APIs that can deliver data to people in bite-size chunks. Sensors, cameras, drones, and other Internet-connected devices will increasingly be using APIs to do what they do, but voice, and other types of conversational interfaces, will continue to evolve to become a common API client.
I am hoping at this point we begin to stop counting the different channels we deliver API data and content to. Despite many of the Alexa skills and Slack bots I encounter being pretty yawn-worthy, I'm still keeping an eye on how APIs are being used by these platforms. Even if I don't agree with all the uses of APIs, I still find the technical, business, and political currents behind them worth tuning into. I tend not to push my clients to work on voice or bot applications if they aren't very far along in their API journey, but I do make sure they understand that one of the reasons they are doing APIs is to support a wide and evolving range of clients, and that at some point they'll have to begin studying how voice, bots, and other conversational approaches will be a client they have to consider a little more in their overall strategy.
I hear a lot of noise about voice as an interface. I don't doubt that voice enablement will have its place, and be used in a variety of situations, I'm just not convinced that it will end up being everything everybody is thinking it will be. My feelings on the subject are mostly because of how I see the world, but come to think of it, all my feelings are this way. Hmmmm? While the API aspects of voice enablement like Alexa are interesting, I seriously doubt that it will become the primary interface for how folks engage with the web, or move too far beyond a novelty, because of the existing deal we've established between our brain and the keyboard.
There is a connection that exists between my brain, fingers, and the keyboard. This exists on my laptop, as well as my iPhone and iPad. I'm just not a talker. I just don't talk on the phone. I keep most conversations straightforward and to the point, and I enjoy talking with people, but not much else. I can't even take audio notes. As I said, I recognize that this is completely from my perspective, and there are other folks who will adopt a voice-enabled way of doing things, and be just fine talking to get things done. I just don't think it will be as many people as we think, and I don't think it will be practical for much of what we need to get done. We need more connection, privacy, and isolation with our thoughts to accomplish what we need on the Internet each day.
Having a conversation, or verbally giving commands to my computer and devices, just doesn't seem as elegant as typing, with a combination of mouse or finger gestures via a trackpad. I've become pretty skilled at generating a pretty significant amount of content via a MacBook keyboard and trackpad. There are plenty of ways to optimize my output in this environment, and I just don't see how going voice will bring me any benefits or efficiencies, or even be attainable in the environment(s) where I regularly work. I know many folks are looking to push technology forward, but there are some things I think just work, and will continue to work for some time. I'll keep experimenting with new technology that comes out, but I don't see anything on the horizon that will disrupt the connection that exists between me and the keyboard, doing what I do online each day.
I get why people are interested in voice-enabled solutions like Alexa and Siri. I'm personally not a fan of speaking to get what I want, but I get the attraction for others. Similarly, I get why people are interested in bot enabled solutions like Facebook and Slack are bringing to the table, but I'm personally not a fan of the human-led noise in both of these channels, let alone automating this mayhem with bots.
In short, I'm not 100% on board that voice and bots will be as revolutionary as promised. I do think they will have a significant impact and are worthy of paying attention to, but when it comes to API driven conversational interfaces, I'm putting my money on push driven approaches to making API magic happen. Approaches like Push by Zapier, and Webtask.io, where you can initiate a single API-driven event, or a chain of them, from the click of a button in the browser, on a web page, on my mobile phone, or, hell, using the Amazon Dash button approach.
These web tasks operate in an asynchronous way, making them more conversational-esque, and giving those of us who are anti-social, have adequately air-gapped our social and messaging channels, and haven't fully subscribed to the surveillance economy, alternate solutions. These mediums could even facilitate a back and forth, passing machine-readable values, until the desired result has been achieved. Some conversations could be predefined or saved, allowing me to trigger them using a button at any point (i.e. reorder that product from Amazon, retweet that article from last week). I'm not saying I don't want to have an API-enabled conversation, I'm just not sure I want a speaker or bot always present to get what I need done in my day.
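The "saved conversation" idea above can be sketched as a predefined chain of API-driven steps that fire when a button-style trigger runs. This is only a rough illustration, not the Push by Zapier or Webtask.io implementation; the step functions here are hypothetical stand-ins for real API calls (for example, an HTTP POST to a webhook URL), and all names are invented.

```python
def run_saved_conversation(steps, context=None):
    """Execute each step in order, passing machine-readable values along,
    the way a saved, button-triggered conversation might."""
    context = context or {}
    for step in steps:
        context = step(context)  # each step returns an updated context
    return context

# Two illustrative steps: look something up, then act on the result.
def lookup_order(ctx):
    ctx["order_id"] = "A123"  # would be an API lookup in practice
    return ctx

def reorder(ctx):
    ctx["status"] = "reordered:" + ctx["order_id"]  # would POST an order
    return ctx

result = run_saved_conversation([lookup_order, reorder])
print(result["status"])
```

The conversation is just data plus ordered steps, which is what makes it saveable and re-triggerable at any point.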
I understand that I am not the norm. There are plenty of folks who have no problem with devices listening around their home or business, and who are super excited when it comes to engaging with half-assed artificial intelligence which accomplishes basic tasks in our world(s). But I can't be that far out in left field. I'm thinking the landscape for conversational interfaces will be diverse, with some voice, some chat, and hopefully also some asynchronous approaches to having conversations that can be embedded anywhere across our virtual and physical worlds.
I am spending a lot of time thinking about conversational interfaces, and how APIs are driving the voice and bot layers of the space. While I am probably not as excited about Siri, Alexa and the waves of Slack bots being developed as everyone else, I am interested in the potential when it comes to some of the technology and business approaches behind them.
When it comes to these "conversational interfaces", I think voice can be interesting, but not always practical for actually interacting with everyday systems--I just can't be talking to devices to get what I need done each day, but maybe that is just me. I'm also not very excited about the busy, chatty bots in my Slack channels, as I'm having trouble even dealing with all the humans in there, but then again maybe this is just me.
I am interested in the interaction between these conversational interfaces and the growing number of API resources I track on, and in how the voice and bot applications that are done thoughtfully might be able to do some interesting things and enable some healthy interactions. I am also interested in how webhooks, iPaaS, and push approaches like we are seeing out of Zapier can influence the conversation around conversational interfaces.
Conceptually I can be optimistic about voice enablement, but I work in the living room across from my girlfriend, and I'm just not going to be talking a lot to Siri, Alexa, or anyone else...sorry. Even if I move back to our home office, I'm really not going to be having a conversation with Siri or Alexa to get my work done, but then again maybe it's just me. I'm also really aware of the damaging effects of having too many messaging, chat, and push notification channels open, so the bot thing just doesn't really work for me, but then again maybe it's me.
I am more of a fan of asynchronous conversations than I am of the synchronous variety, which I guess could be more about saved conversations, crafted phrases or statements that run as events triggered by different signals, or even by me when I need them, via my browser--like Push by Zapier does. I see these as conversations that enable a single API-enabled event, or a series of them, to occur. This feels more like orchestration, or scripted theater, which accomplishes more of what I'm looking to do than synchronous conversations would accomplish for me.
Anyways, just some exercising of my brain when it comes to conversational interfaces. I know that I'm not the model user that voice and bot enablement will be targeting with their services, but I can't be all that far out in left field (maybe I am). Do we really want to have conversations with our devices, or the imaginary elves that live on the Internet in our Slacks and Facebook chats? Maybe for some things? What I'd really like to see is a number of different theaters where I can script and orchestrate one-time and recurring conversations with the systems and services I depend on daily, with the occasional synchronous engagement with myself or other humans, when it is required.
I enjoy being able to switch gears between all the different areas of my API research. It helps me find the interesting areas of overlap, and potential synchronicity, in how APIs are being put to work. After thinking about the API abstraction layer present in Meya's bot platform, I was reading about Clearbit's iPaaS integration layer with Zapier. Zaps are just like the components employed by Meya, and Clearbit walks us through delivering intended workflows with the valuable APIs they provide, executed via Zapier's iPaaS service.
Whether it's skills for voice, intents for bots, or triggers for iPaaS, an API is delivering the data, content, or algorithmic response required for these interactions. I've been pushing for API providers to be iPaaS ready, working with providers like Zapier, for some time. I predict you'll find me showcasing examples of API providers sharing their voice and bot integration solutions, just as Clearbit has with their iPaaS solutions, in the future.
I would say that even before API providers think about the Internet of Things, they should be thinking more deeply about iPaaS, voice, and bots. Not all of these areas will be relevant or valuable to your API operations, but they should be considered. If you have the resources, they might provide you with some interesting ways to make your API more accessible to non-developers--as Clearbit notes in the opening of their blog post.
When it comes to skills, intents, and iPaaS workflows, I am thinking we are going to have to be more willing to share our definitions (broken record), like we see Meya doing with their Bot Flow Markup Language (BFML) in YAML. I will have to do some more digging to see how Amazon is working to make Alexa Skills more shareable and reusable, as well as take another look at the Zapier API to understand what is possible--I took a look at it back in the spring, but will need a refresher.
While the world of voice and bot API integration seems to be moving pretty fast, I predict it will play out much like the iPaaS world has, and take years to evolve and stabilize. I'm still skeptical about the actual adoption of voice and bots, and it all living up to the hype, but when it comes to iPaaS I'm super hopeful about the benefits to actual humans--maybe if we consider all of these channels together, we can leverage them all equally as common tools in our API integration toolbox.
I'm going through Amazon's approach to their Alexa voice services, and it is making me think about how bot platforms out there should be following their lead when it comes to crafting their own playbook. I see voice and bots in the same way that I see web and mobile--they are just one possible integration channel for APIs. They each have their own nuances of course, but as I'm going through Amazon's approach, there are quite a few lessons here on how to do it correctly--lessons that apply to bots.
Amazon's approach to investment in developers on the Alexa platform and their approach to skills development should be replicated across the bot space. I know Slack has an investment fund, but I don't see the skills development portion present in their ecosystem. Maybe it's in there, but it's not as prominent as Amazon's approach. Someday, I envision galleries of specific voice and bot skills like we have application galleries today--the usefulness and modularity of these skills will be central to each provider's success (or failure).
I had profiled Slack's approach before I left for the summer, something I will need to update as it stands today. I will keep working on profiling Amazon's approach to Alexa, and put both together as potential playbook(s). I would like to eventually be able to lay them side by side and craft a common definition that could be applied in both the voice API sector, as well as the bot API sector. I need to spend more time looking at the bot landscape, but currently I'm feeling like any bot platform that can emulate Amazon's approach is going to win at this game--like Amazon is doing with voice.
As I'm working through my morning routine of monitoring the API space, I'm processing stories about the availability of valuable resources, like the House Rules Committee data being released in XML formats, and ExoMol, the molecular line lists database used in simulating atmospheric models of exoplanets, brown dwarfs, and cool stars.
I feel fortunate to live in a time where the world is opening up such valuable resources, making them available online--available for anyone to use, remix, improve, and make better. My faith in APIs doesn't come from any single API, it comes from the possibilities that will exist when individuals, companies, organizations, institutions, and government agencies all publish valuable resources using APIs.
While there is still a lot of work ahead, I'm seeing the early signs of this reality emerging across my API monitoring in 2016. I'm coming across so many extremely valuable, openly licensed, machine-readable resources that can be used in some very interesting ways. The trick now is how we expose the most meaningful parts of these resources, and make sure they get found by the people who will actually put them to use. As the number of APIs increases, this is something that is going to get harder and harder, and the need to surface value even more critical.
Another dimension to this discussion is the growing number of channels we need to make our API resources available in. Web and mobile are still king when it comes to consuming APIs, but quickly devices, messaging, voice, bots, and other channels are growing in use. The next wave of API evangelism is going to require that the right people (domain experts) are available to help expose the most meaningful skills that our APIs posses, via these growing number of quick moving channels.
An example of this in action, using one of the valuable resources above, could involve making the Congressional activity that is most relevant and important to me available in my Slack channel (or messaging app of choice), or even available via my Amazon Echo, using Alexa Voice Skills. How do we start carving out meaningful skills from government, and other open data, using simple APIs? How do we use these to educate individuals, whether as average citizens, or in professional or commercial scenarios?
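To make the Congressional example above a little more concrete, here is a minimal sketch of the shape such a skill might take: reduce a verbose open-data record to a bite-size, conversational message, then hand it to a delivery channel like a Slack incoming webhook. The record fields and webhook usage here are assumptions for illustration, not a real government data schema.

```python
import json
import urllib.request

def summarize_meeting(record):
    """Reduce a verbose open-data record to one conversational sentence,
    the kind a Slack bot or Alexa skill could deliver as-is."""
    return "The {committee} meets {date} on {topic}.".format(**record)

def post_to_slack(webhook_url, text):
    """POST the summary to a Slack incoming webhook (sketch only, not
    called here -- the URL would come from your Slack app config)."""
    body = json.dumps({"text": text}).encode("utf-8")
    req = urllib.request.Request(
        webhook_url, data=body,
        headers={"Content-Type": "application/json"})
    return urllib.request.urlopen(req)

# A hypothetical record, standing in for a parsed XML feed entry.
record = {"committee": "House Rules Committee",
          "date": "Tuesday", "topic": "H.R. 1234"}
print(summarize_meeting(record))
```

The interesting work is in the summarization step: that is where a domain expert decides which parts of the raw data are actually meaningful as a "skill".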
We have many, many years ahead of us, helping individuals, companies, institutions, and government understand why they need to be exposing valuable data, content, and other digital resources via simple web APIs. However, alongside these efforts, we are going to need armies of other individuals who have the ability to identify valuable resources, and help craft simple, usable, and meaningful endpoints, that can be added as skills within the web, mobile, device, messaging, bot, and voice apps of the future.
I'm evaluating the Alexa Voice Service ecosystem alongside leading API messaging platforms like Telegram and Slack, which are changing the way users engage and communicate, but are also evolving how we put our API-driven resources to work. As I do this research, I keep finding myself coming back to Amazon's concept of an Alexa Skill, and thinking about how it applies to average everyday APIs like mine.
Do my APIs have the skills they need to compete in this new voice and bot enabled world? It is bad enough that I don't always have the skills necessary to compete as a programmer, but now my APIs have to have the right skills? WTF ;-) Seriously though, I feel Amazon's concept of the "skill" reflects a wider experiential shift in the API space, where APIs need to deliver information and other digital resources in the context of how they will be experienced by users, and not just how they are stored and maintained by developers and IT operations.
Since there is such a diverse range of APIs out there, what exactly constitutes a "skill" could vary widely. If you are a person or business directory, the skill might be returning the website address or phone number for an individual or business. If you are an email or SMS service, it might be simply sending a message to an individual. The concept of a skill comes further into focus when you think in the context of the Alexa Voice Service, or as Amazon puts it:
Alexa, the voice service that powers Amazon Echo, provides capabilities, or skills, that enable customers to interact with devices in a more intuitive way using voice. Examples of skills include the ability to play music, answer general questions, set an alarm or timer, and more.
How does this same way of thinking apply when we are communicating in Slack? Does my API have the skills to identify that someone just asked a question, or possibly executed a keyboard shortcut, and can it respond intelligently, in real-time, with the behavior the user is anticipating? In addition to having the right skills, Slack is also asking if our APIs can enable Bot Users to be "delightful, interesting, and fun"--significantly raising the bar for what is expected.
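The skill idea from the paragraphs above can be sketched as a simple routing table: the conversational platform hands over an intent name plus slots, and we route it to the API-backed handler that can answer in one bite-size chunk. The intent names, slots, and directory lookup here are hypothetical illustrations, not the actual Alexa Skills Kit or Slack API interfaces.

```python
# Stand-in for a directory API the skill would call in practice.
DIRECTORY = {"Acme Diner": "555-0147"}

def get_phone_number(slots):
    """A 'skill': answer one narrow question in a speakable sentence."""
    name = slots.get("business")
    number = DIRECTORY.get(name)
    if number is None:
        return "I couldn't find that business."
    return "The number for {0} is {1}.".format(name, number)

# The routing table: intent name -> API-backed handler.
SKILLS = {"GetPhoneNumber": get_phone_number}

def handle_intent(intent, slots):
    """Route a platform-supplied intent to the matching skill."""
    handler = SKILLS.get(intent)
    return handler(slots) if handler else "I can't do that yet."

print(handle_intent("GetPhoneNumber", {"business": "Acme Diner"}))
```

The same routing shape works whether the intent arrives from a voice platform or a chat keyword, which is part of why the "skill" framing travels so well between channels.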
As with the evolution of our own personal and professional skills, it will take some practice to develop the new skills that our APIs will need to be successful in this evolving landscape. Something that cannot even begin unless we have already embarked on our own API journey, exposing valuable data, content, and other resources as APIs, while also having in place an efficient way to add and evolve our API resources. Only then can we start really polishing and honing our API skills, to operate via voice-enabled platforms like Alexa, and the next generation of messaging platforms like Telegram and Slack.
All of a sudden I feel like my APIs are just a teenager who is typing up their first resume, headed out to find their first job interview, so they can afford a car, and go out on their first date.
As I listen to my hangout with Wade Foster of Zapier, I'm considering the overlap between my API reciprocity, bots, virtualization, containerization, webhooks, and even voice research. At the same time I'm thinking about how APIs are being used to inject valuable data, content, and other API-driven resources into the stream of existing applications we use every day.
Some of this thinking is derived from my bot research, where bots impersonate Twitter users, or respond to specific keywords, phrases, or keyboard shortcuts in Slack. Some of this thinking comes from learning more about the ability to inject "code steps" into multi-step workflows with Zapier. Then, as I continue doing my curation of news, I read about Uber allowing developers to create trip experiences, opening up another window for potential API driven injection into the Uber "experience".
It got me thinking, where else is this happening? I would say Twitter Cards is a form of this, but it is an example that is more media focused, rather than bots (although it could be bot driven behavior). Then I started looking across the 50 areas of the API life cycle I'm monitoring, and voice stood out as another area of potential overlap. Amazon is allowing developers to inject API-driven content, data, and other resources into the conversational stream occurring between Alexa and its users. I don't see this as being much different than bots injecting responses into the messaging streams of Slack and Twitter users.
I'm just getting going with these thoughts. I'm thinking containers and serverless approaches to API deployment are going to impact this line of thought as well. I'm considering pulling together a research project around this overlap, something I will call API injection. Essentially, how are people injecting API-driven data, content, and other resources into the streams of other applications, using APIs? I could see a whole new breed of API provider emerge just to satisfy the appetite of bots and other API injection tooling, engaging with users via the streams they already exist in daily, whether messaging, voice, or any other online experience.
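A minimal sketch of the API injection idea described above: watch a message stream for trigger phrases and inject an API-driven response inline. The trigger table and canned responses are hypothetical stand-ins; in practice each lambda would be a real API call, and the matching would come from the platform's own events (Slack keywords, Alexa intents, and so on).

```python
import re

# Trigger phrase -> fetcher; each lambda stands in for an API call.
TRIGGERS = {
    r"\bweather\b": lambda: "It's 62F and clear.",
    r"\bstocks?\b": lambda: "AMZN is up 1.2%.",
}

def inject(message):
    """Return an API-driven reply to inject into the stream, or None
    if nothing in the message matched a trigger."""
    for pattern, fetch in TRIGGERS.items():
        if re.search(pattern, message, re.IGNORECASE):
            return fetch()
    return None

print(inject("what's the weather like?"))
```

The same pattern covers both directions of the overlap: a bot injecting into a Slack channel, or a skill injecting into an Alexa conversation, is just a different stream feeding the same trigger table.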
I do not think this is limited to consumer-level injections. I could see B2B approaches to API injection opening up some pretty interesting API monetization opportunities. Hock'n your bot warez within other people's business streams. ;-) We'll see where this line of thought goes...regardless, it is fun to think about.
I just finished looking through the documentation for the Zapier API, and for the Alexa Voice Service, trying to understand the approach these platforms are taking to incorporate API-driven resources into their services. How do you translate a single API call into a Zapier trigger or action? How do you build a rich index of resources for Alexa to search via voice commands? Learning how APIs are being consumed, or can be consumed, is an infinitely fascinating journey for me, and something I enjoy researching.
All of my research into reciprocity, bots, and voice enablement via APIs makes me think more about experience-based API design trumping much of the resource-based dogma that has dominated the conversation. How will my APIs enable meaningful interoperability via Zapier, voice searches via Alexa, and stimulate interesting bot interactions? I am not just focusing on how my resources are defined, I am now also forced to think about how they will be searched, consumed, and put to use in these new client experiences.
Just like mobile significantly shaped how we craft our APIs, automation, voice, bots, and increasingly Internet-connected devices will continue to define our API strategies. Users and developers will increasingly demand small, meaningful resources that they can use to orchestrate their personal and business lives, and that will respond to simple voice commands at home, in the car, and via our mobile devices. When we are designing our APIs, are we thinking about these bite-size resources that will be needed in this emerging bot and voice driven evolution of the API space?
Shortly after the Zypr voice API came on to the scene in 2011, I launched my research into voice APIs. Like many other areas of the API universe, voice has come in and out of focus for me, something I think will take much longer to unfold than any of us could have ever imagined. Zypr quickly ran out of steam, and other similar solutions have come and gone over the last couple years as well, leaving my research pretty scattered across many different concepts of how voice and APIs are colliding--lacking any real coherency.
I took a moment last week to take a fresh look at my voice API research, because of a comment by Steven Willmott (@njyx), the CEO of 3scale. It's not an exact quote, but Steve spoke about how voice is the future of API consumption, after he had attended AWS re:Invent in Las Vegas. I agree with him. Voice APIs are a topic that has been significantly stimulated by the introduction of the Amazon Echo platform, but I feel it also coincides with a critical mass of available API-driven resources that will deliver some of the value these platforms are promising users.
Voice recognition has always been something that leaves a lot to be desired--think Siri. Even with these challenges there are many dimensions to the voice API discussion, and with the amount of resources now available via simple APIs in 2015, I feel we are reaching a more fertile, and friendly time for voice solutions to return the value end-users desire. We now have a rich playing field of weather, news, stocks, image, video, podcast, and other data, content, rich media, and programmatic resources, which can be linked to specific voice commands--something we didn't have before.
While there is still so much work to be done, I agree with Steve's vision that voice will play an increasingly significant role as an API client. I would add that, like mobile, or the recent wave of wearables, voice will have special constraints when it comes to API design, further requiring that API providers keep their APIs simple, and reflect how users will experience them, not just being a SELECT * FROM table WHERE q = 'search', with a URL bound to it.
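The design constraint above can be shown with a small sketch: the same data exposed two ways. The raw, table-shaped response forces the client to do the work, while the experience-shaped response is ready to be spoken aloud by a voice client. The field names and payload here are invented for illustration.

```python
# A raw, table-shaped payload -- the SELECT * style of response.
RAW = {"rows": [{"ts": "2015-12-01T09:00", "temp_f": 41.3, "cond": "rain"}]}

def voice_friendly(raw):
    """Collapse a table-shaped payload into one speakable sentence,
    the kind of bite-size response a voice client actually needs."""
    row = raw["rows"][0]
    return "It is {0} degrees and {1} right now.".format(
        round(row["temp_f"]), row["cond"])

print(voice_friendly(RAW))
```

An experience-based API would serve the second shape directly from an endpoint, so every voice client doesn't have to reimplement the collapsing logic.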
I think the API providers who are further along in their journey, will get a boost as voice evolves as an API client, and voice enabled app developers are able to easily integrate valuable API driven resources into their solutions. Even with my new found optimism about voice APIs, I still think we are years away from voice solutions actually living up to, even a small portion of the hype they seem to get over the years. Regardless, I'll be working to keep a closer eye on things, and will be sharing via my voice API research.
I've been tracking on the potential for voice APIs since Siri was first announced, a topic that often meant telephony APIs like those from Twilio, or audio transcription from Popup Archive and Clarify. When I close my eyes and think about the future of APIs and voice enablement, it is more akin to the Siri example, where the digital world is (supposedly) just a voice command away.
Imagine making your entire employee directory, company calendar, or product catalog available via voice commands. How do you do this? You do it with APIs, and a voice enablement platform in between the application developers and the available API resources. Much like all other voice enablement, I think we have a huge amount of work to do to get anywhere close to the pictures we all have in our heads when it comes to voice enablement.
It is my mission to find these signs across the API landscape, and keep an eye on what they are doing. One platform that is now open for beta is Amazon Echo. Amazon says, "Echo is designed around your voice. It's always on—just ask for information, music, news, weather, and more.” APIs are how we will deliver on the “more” part of this equation. The difference between Apple Siri, and Amazon Echo at this point, is Amazon will let you (API providers) help deliver on the “more” part of Amazon Echo discovery equation.
I'm signing up for the beta, and if I get access, I will share more stories. I encourage you to sign up as well, and if you have any work you're doing with Amazon Echo, or know of other API-driven voice-enablement platforms I should be paying attention to, let me know.
There is a laundry list of problems with the current state of terms of service, affectionately called the TOS--those legal documents we all agree to as part of our usage of online services, and which define relationships between API providers and their consumers. API Voice is dedicated to exploring this, and the other building blocks that make up the politics of APIs, an area you will see increased coverage of in 2014.
I strongly believe that to fully realize the API economy as many of us technologists see it, the terms of service have to be machine readable, allowing for seamless integration into the other political building blocks like privacy policies, service level agreements, partner access tiers, and pricing. If you think about it, current API terms of service reflect the command and control governance style of SOA, not the flexible, agile and innovative approach that APIs are often known for.
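As a brainstorm-level sketch of what a machine-readable terms of service fragment might look like, consider the structure below. The field names are entirely invented for illustration; no such standard exists in this form, but it shows how a client could evaluate terms programmatically, including which clauses are open to negotiation.

```python
# A hypothetical machine-readable TOS fragment (all fields invented).
terms = {
    "attribution_required": True,
    "commercial_use": "allowed-with-paid-tier",
    "rate_limit": {"requests_per_hour": 1000},
    "data_retention_days": 30,
    # Clauses the provider declares open to negotiation.
    "negotiable": ["rate_limit", "data_retention_days"],
}

def is_negotiable(terms, clause):
    """A consumer's tooling could check which clauses can be negotiated,
    instead of the closed-door, who-you-know approach."""
    return clause in terms.get("negotiable", [])

print(is_negotiable(terms, "rate_limit"))
```

Once terms are data, the same structure could be linked to the other political building blocks mentioned above, like privacy policies, SLAs, and pricing tiers.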
Why aren't API terms of service negotiable? Well, they are, it just isn't built into existing API platforms. Many API ecosystems allow for circumventing and negotiating at the terms of service level behind closed doors, with partners, and with API consumers who share the same investors, it just isn't a conversation that occurs out in the open. This approach reflects legacy ways of doing business, where if you are in the know, and have the right connections, you can negotiate--not the new API-driven approach that will allow the API economy to scale as required.
The ability for API consumers to negotiate the terms of service isn't something that we can just roll out overnight, it is something we have to evolve towards over time. I’m hoping to help facilitate this evolution, through brainstorming, stories and conversations around the potential of machine readable terms of service, here on API Voice over the next couple of years.
I am preparing a job description for an API evangelist position at the Cashtie API, something I do for companies from time to time. When working on a new one, I go out and look at current API evangelist job positions, to see what is new and noteworthy since last time I did it.
While doing this today, I came across a posting for API Evangelist at Akamai, and while reading, two lines really stood out:
As an API Evangelist, you are a thoughtful voice that represents a new developer mode of interaction, both inside and outside the company.
You will spread awareness of and encourage participation in our emerging API program to mold the future of our entire developer experience.
Both of these lines reflect what any company should be looking for in their API evangelist. I really like the use of “thoughtful voice”, “spread awareness” and “encourage participation” and talking about how important the role is to the future of a company's “entire developer experience”.
When crafting your own role description for an API evangelist, make sure you spend some time looking through other companies job postings and see what details are most important to you.
A free, open-source, API driven conference solution called Voice Chat API popped up on my API monitoring radar today, as I was going through my feeds. The Voice Chat API is a very cool, dead-simple conferencing solution. As a tool it provides clear value, and I really like the approach from Plivo to roll out an open, API driven resource like this—a model that could be applied to other valuable resources.
What really stands out is the Voice Chat API does one thing and does it well—audio conferences. It is easy to tell what it does. We aren't having to convince users of a problem, then sell them on a solution. The problem is clear, the solution is simple.
The Voice Chat API is open source and available on Github, built using "Plivo WebSDK and APIs”. I haven’t investigated the separation between what code is open source and where the dependencies on Plivo are, but regardless the approach is interesting.
Providing users with more ways to deploy a conference, the Voice Chat API offers a set of add-ons, including a Hubot plugin that works in Campfire or Hipchat, a Chrome extension, and a bookmarklet for deploying a conference from any browser.
The Voice Chat API centers around its API, which allows developers to create a unique audio conference and call mobile & landline phone numbers (PSTN) into the bridge. The API deploys to your Heroku account, allowing you to manage your ad-hoc audio conference deployments in the cloud.
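Based on that description, a client for this kind of conferencing API might look something like the sketch below. The endpoint paths and field names (`/conference`, `conference_name`, and so on) are my own assumptions for illustration, not the documented Voice Chat API interface, and the functions only build the requests rather than sending them.

```python
# Sketch of a client for a Voice Chat API-style conferencing service,
# deployed to your own Heroku account. Endpoints and fields are assumed.

def create_conference_request(base_url, name):
    """Build the request for creating a uniquely named audio conference."""
    return {
        "method": "POST",
        "url": f"{base_url}/conference",
        "body": {"conference_name": name},
    }

def dial_in_request(base_url, name, phone_number):
    """Build the request for calling a PSTN number into the bridge."""
    return {
        "method": "POST",
        "url": f"{base_url}/conference/{name}/call",
        "body": {"to": phone_number},
    }

if __name__ == "__main__":
    base = "https://my-voice-chat.herokuapp.com"  # hypothetical deployment
    print(create_conference_request(base, "standup")["url"])
    print(dial_in_request(base, "standup", "+14155550100")["url"])
```

In practice you would hand these request dictionaries to an HTTP library, with whatever authentication the deployment requires.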
I’m still playing around with the Voice Chat API, understanding how the application and the API work, as well as considering how this new open source approach dovetails with Plivo's business model, but for now I’m intrigued by the approach, and how they crafted this API driven resource.
Saturday afternoons are great for closing out tabs I’ve had open all week, and the theme this Saturday is APIs and the Internet of Things. This time it is about controlling your Internet of Things using voice, via the Thingspeak Talkback API and the Arduino Yún, which seems to be the darling of API-to-Internet-of-Things projects.
The Thingspeak Talkback API allows for the adding, updating, deleting and executing of voice commands. It acts as middleware for the Arduino Yún, allowing IoT devices to check for commands that need executing—providing an API driven queue of voice commands for all of the Internet connected devices in your life.
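The pattern behind this is simple enough to sketch: a server-side queue of commands that a device polls and drains in order. The class and method names below are my own illustration of the idea, not the actual Thingspeak API.

```python
# In-memory sketch of the Talkback pattern: commands are queued for a
# device, and the device pops and executes them each time it polls.

from collections import deque

class TalkbackQueue:
    """A queue of pending commands for one Internet-connected device."""

    def __init__(self):
        self._commands = deque()

    def add(self, command):
        """Queue a command string (e.g. "lights_on") for the device."""
        self._commands.append(command)

    def execute_next(self):
        """Pop the oldest pending command; a device calls this on each
        poll. Returns None when there is nothing to do."""
        return self._commands.popleft() if self._commands else None
```

A device like the Arduino Yún would hit the equivalent HTTP endpoint on a timer and act on whatever command comes back, which is what makes the queue work as a bridge between voice input and physical devices.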
As connecting everyday objects to the Internet gets easier and cheaper, it is fascinating to see the different approaches that providers take to connect these objects to the web. Thingspeak is choosing to go the voice route, where Temboo is betting on the fact that people want to connect IoT devices with their existing cloud platforms and services.
I'm totally thankful for the experiences I've had over the last 90 days in Washington D.C. as a Presidential Innovation Fellow, and even more thankful I'm able to keep doing much of the work I was doing during my fellowship. In reality, I'm actually doing more work now, than I was in DC.
While there were several challenges during my time as a PIF, the one that I regret the most, and is taking the longest to recover from, is losing my storytelling voice. This is my ability to capture everyday thoughts in real-time via my Evernote, sit down and form these thoughts into stories, and then share these stories publicly as the API Evangelist.
During my time in DC, I was steadily losing my voice. It wasn't some sort of government conspiracy. It is something that seems to happen to me in many institutional or corporate settings, amidst the busy schedule, back to back meetings and through a more hectic project schedule--eventually my voice begins to fade.
In July I wrote 61 blog posts, in August 41, and in September 21. A very scary trend for me. My blog is more than just stories for my audience and page views generated. My blog(s) are about me working through ideas and preparing them for public consumption.
Without storytelling via my blog(s) I don't fully process ideas, think them through, flesh them out, and think about the API space with a critical eye. Without this lifecycle I don't evolve in my career, and maintain my perspective on the space.
In October I've written 28 posts and so far in November I've already written 27 posts, so I'm on the mend. In the future, I'm using my voice as a canary in the coal mine. If a project I'm working on is beginning to diminish my voice, I need to stop and take a look at things, and make sure I'm not heading in a negative direction.
I’m working through lists of APIs and API service providers who I’ve rated pretty highly because of their work in the past, but red flags have gone up because I haven’t seen a blog post, tweet or commit from them lately. One of the service providers I’m reviewing is Zypr, which provides voice-enabled architecture built on popular APIs.
Zypr fits in with my vision of where APIs are going, because Zypr is doing the same thing as aggregation, automation and other trending service providers, but the end goal in this use case is voice. In Zypr's own words:
Zypr aggregates proprietary 3rd party APIs, categorizes their functions, and then presents those functions through a single, normalized API. By aggregating and normalizing 3rd party APIs, Zypr creates a stable access point for devices and apps to access those services without concern for service and API changes.
That sounds exactly like what Singly is doing, but with voice as the vehicle for making valuable API resources available in apps running on mobile phones, tablets, and in our cars and homes. I really like the Zypr graphic. I will have to create a similar version to articulate some of the API trends I’m seeing.
I hope Zypr is just heads down, working hard on their platform. According to my API stack rank, there aren’t very many positive signals coming out of the platform:
- Last Blog Post - 09/28/2012
- Last Tweet - 01/30/2013
- Last Commit - 10 months ago
What Zypr is doing is important--I hope they have enough runway to make it happen. If they need to cut corners, they shouldn't replicate the service or client adapter portion of their platform; they could just partner with an existing API aggregator like Singly for these layers, as well as reciprocity providers like Elastic.io or Foxweave to migrate and transform data. In my opinion Zypr should focus on how their engine works, and partner and use open source to build the rest.
Personally, I would like to see more competition in the space of API driven voice enabled architecture. So far I haven’t found any other players doing it at the same level as Zypr. Let me know if you know of something. Siri is going to need some healthy competition.
I’m looking at new and innovative ways companies are building analytics and visualizations on top of APIs, and one of the new tools I’ve come across is ImpactStory. ImpactStory aggregates altmetrics: diverse impacts from your articles, datasets, blog posts, and more. But this post isn’t about ImpactStory; I’ll crunch what they do and write about it in another post.
This post is about their usage of feedback, helpdesk, and knowledge base management tool UserVoice, which is a service I always recommend to API owners looking to support different aspects of their API community.
ImpactStory simply has a “BETA - Send us Your Feedback” image in the top right corner of their site. When you click on the logo, you are presented with a simple UserVoice form for submitting your ideas of where you think ImpactStory should take their platform.
I think this is a pretty dead simple way of soliciting feedback from your community. Involving your users in your roadmap planning can go a long way in building goodwill with them, and encouraging participation and innovation in other ways.
After I spend more time playing with ImpactStory, I’ll do another post on what else they are up to, but at first glance it's a pretty interesting approach to developing tracking analytics, visualization and other embeddable goodness using APIs.
After looking back at 2012, I wanted a January 1, 2013 blog post for my blogs. My first blog post of 2012 was my tour schedule for January, 2012. While it was a pretty busy time for evangelizing and hackathons, I wanted something a little deeper. I’m not sure what, but I will play with the format year to year, until I find what I’m looking for.
Every year I rewrite my bio, based upon where I am. I’ve been doing this since 2009. This year I will write inaugural blog posts along with my bio rework, and post to each of my active blogs. We’ll see if it resonates again in 12 months and I do it again in 2014.
I started API Voice because I believe the politics of APIs is one of the most important areas that will make or break the API space. APIs are not just technical. There are a wide range of political issues facing companies when it comes to APIs.
The politics of APIs can range from terms of service issues to potential government regulation. We are seeing the politics of APIs play out in the Twitter ecosystem and in the U.S. legal system with Oracle vs. Google.
When I started API Evangelist, I wanted to provide a platform for research into the business of APIs. Over two years I’ve begun to solidify some perspective on the space, and form clear opinions around potential best practices. With API Voice, I want to do the same for the politics of APIs.
At this point, all I can do is write about things as they unfold--as APIs are acquired and as they make the terms of service changes that impact developers. API Voice is in its infancy, and I will be working to cover as much of what unfolds in 2013 as I can.
I can’t help but feel that 2013 is going to be a big year when it comes to the politics of APIs.
Pioneer Corporation just announced the availability of Zypr™, a new voice-powered API that provides conversational, voice-control commands, allowing developers to integrate Siri-like functionality into their own applications.
The Zypr API provides a single RESTful API for developers to access voice UI, maps and routing, local search, social networking, music and radio, contacts, calendars and weather from multiple service providers, including: Facebook, Twitter, Google, Yelp, AccuWeather, INRIX real-time and predictive traffic information, Slacker Radio, Tuner2 Radio, Wcities, xAd and VoiceBox.
I’ve covered unified and aggregate APIs in the past, where within maturing industries like social and cloud computing, service providers are stepping up with a single API to work with multiple providers. Zypr is similar in that it offers a single API for multiple providers, but uses voice as the mechanism for search. Zypr’s goal is to provide a normalized, stable, voice-enhanced method for accessing a wide array of constantly changing APIs, so Zypr can reduce the impact of service-specific API changes.
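The aggregation pattern Zypr describes can be sketched in a few lines: several provider APIs with different response shapes, hidden behind one normalized interface. The provider adapters, field names, and canned data below are invented for illustration; they are not Zypr's actual API or the real Yelp or Google response formats.

```python
# Sketch of an API aggregation layer: each provider adapter fetches
# results in its own shape, and a normalizer maps them onto one stable
# record format, insulating clients from provider-specific API changes.

def yelp_adapter(query):
    # Stand-in for a call to Yelp's API; returns canned example data.
    return [{"biz_name": "Blue Bottle", "rating_val": 4.5}]

def google_adapter(query):
    # Stand-in for a call to Google's local search API.
    return [{"title": "Blue Bottle Coffee", "stars": 4.4}]

# Pair each provider with a normalizer that maps its fields onto the
# single record shape that clients actually see.
ADAPTERS = {
    "yelp": (yelp_adapter,
             lambda r: {"name": r["biz_name"], "rating": r["rating_val"]}),
    "google": (google_adapter,
               lambda r: {"name": r["title"], "rating": r["stars"]}),
}

def local_search(query, provider="yelp"):
    """One normalized entry point in front of many provider APIs."""
    fetch, normalize = ADAPTERS[provider]
    return [normalize(record) for record in fetch(query)]
```

If a provider renames a field, only its adapter changes; every client of the normalized `local_search` interface keeps working, which is the stability argument Zypr is making.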
“Pioneer created Zypr so that any type of developer or device maker on any platform can easily offer compelling new services to enhance their own technology and brand equity,” said Susumu Kotani, the president of Pioneer Corporation. “We have also changed the rules by allowing developers and device makers alike the opportunity to share in revenue when they deploy Zypr.”
In addition to providing revenue sharing opportunities, the Zypr API is available at no charge to developers. You can access documentation, code, sample applications and other developer tools at www.zypr.net.
With a simple but powerful RESTful API and the buzz around Siri, Apple's new voice tasking system, the Zypr API is sure to be a hit. If Pioneer Corporation can successfully evangelize the Zypr API and get it into the hands of hackers and developers globally, it could easily become the Twilio of voice tasking.
Twilio just made their first move into Europe by offering Twilio Voice service in the United Kingdom.
The Twilio Voice API is now available in the UK, allowing developers to launch local phone numbers in the country. Along with the release Twilio has announced the opening of their first international office based in East London.
The UK launch is the first to go live of five countries currently in Beta including France, Poland, Austria, Denmark and Portugal with several more countries on the way.
Twilio's APIs have proven a favorite of developers, and are fast becoming an essential cloud computing service for applications.
The move into Europe will only grow their developer base and the number of applications using the cloud telephony API. Something that traditional telcos should take notice of.
If you think there is a link I should have listed here feel free to tweet it at me, or submit as a Github issue. Even though I do this full time, I'm still a one person show, and I miss quite a bit, and depend on my network to help me know what is going on.