Watson Discovery at the Size You Want

I just worked with a customer this week on an issue that they had – and the solution didn’t seem obvious, so I figured that I would share it with a larger audience.

The Issue

My customer has a couple of Watson Discovery instances in the IBM Cloud environment. These instances support a cognitive application that acts in an expert assistant role, providing their associates with quick access to guidance and information. One Discovery instance supports the production environment, and the other supports their development and test environment. Both instances are Small sized.

They realized that they could save some money by using a smaller instance for their development and test environment, where they believe an X-Small instance would be sufficient. They asked me for some guidance on how to do this.

The Background

This is not as simple a request as it might seem at first. The issue is that once you move into the Advanced sized instances (instead of the free Lite instances), your Discovery instances begin to cost you money. They can be upgraded from one size to a larger size, but they cannot be shrunk. Why? We can always expand and allocate additional resources to an instance, but we cannot guarantee that there will be no loss of data when shrinking instances. So we don’t shrink them.

It’s probably best to start by looking at the various sizes and plans. The Discovery page on the Cloud dashboard gives you some idea of the costs and charges, but it is not easy to read. Instead, I find that the help pages on upgrading Discovery, and Discovery pricing, are much more helpful. The tables on each of these pages are informative, and when they are combined they give you the basics of what you need to know (this is accurate at the time of publishing, November 2019).

Size           Label   Docs    Price
X-Small        XS      50K     $500/mo
Small          S       1M      $1,500/mo
Medium-Small   MS      2M      $3,000/mo
Medium         M       4M      $5,000/mo
Medium-Large   ML      8M      $10,000/mo
Large          L       16M     $15,000/mo
X-Large        XL      32M     $20,000/mo
XX-Large       XXL     64M     $35,000/mo
XXX-Large      XXXL    100M    $45,000/mo

One other IMPORTANT difference between the plans is this: each plan gives you a single environment that supports up to 100 collections and free NLU enrichments. The only exception is the X-Small plan, which will only support 4 collections. Also note that you may pay extra for news queries and custom models.

What Needs To Be Done

In order to “downsize” one of their Discovery instances from Small to X-Small, the customer will need to migrate the data themselves. What makes this difficult is that they will only have 4 collections available to them in the X-Small instance, instead of the 100 that were available in their Small instance. So they need to take these steps:

  • Create a new Discovery instance, with a size of X-Small.
  • Select the 4 (or fewer) collections that will be used in the new X-Small instance.
  • Re-ingest documents into the 4 new collections.
  • Delete the old development and test Discovery instance.
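The re-ingestion step can be sketched with the Watson Discovery Python SDK (the ibm-watson package). This is just a minimal sketch, not the customer’s actual code; the API key, service URL, environment ID, collection names, and file paths are all placeholders you would substitute with your own values:

```python
X_SMALL_COLLECTION_LIMIT = 4  # an X-Small environment allows at most 4 collections

def pick_collections(names, limit=X_SMALL_COLLECTION_LIMIT):
    """Sanity-check the set of collections to carry over to the X-Small instance."""
    if len(names) > limit:
        raise ValueError(f"X-Small supports at most {limit} collections, got {len(names)}")
    return list(names)

def migrate(api_key, service_url, new_env_id, collection_names, docs_by_collection):
    """Create the new collections and re-ingest the source documents into them."""
    # Requires the ibm-watson package (pip install ibm-watson).
    from ibm_watson import DiscoveryV1
    from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

    discovery = DiscoveryV1(version="2019-04-30",
                            authenticator=IAMAuthenticator(api_key))
    discovery.set_service_url(service_url)
    for name in pick_collections(collection_names):
        coll = discovery.create_collection(environment_id=new_env_id,
                                           name=name).get_result()
        # Re-ingest the original source files for this collection.
        for path in docs_by_collection[name]:
            with open(path, "rb") as f:
                discovery.add_document(environment_id=new_env_id,
                                       collection_id=coll["collection_id"],
                                       file=f, filename=path)
```

Note that Discovery cannot export the enriched documents from the old instance, which is why you re-ingest from your original source files.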

Creating a Discovery Instance of a Certain Size

The issue that my customer ran into was this: how do I create a Discovery instance of a certain size? When I look at the Discovery page on the Cloud dashboard, all I see is that I can select the Advanced plan – but no option for what size to use. So how do you do it?

It’s simple, and it’s outlined in the help docs in the section on upgrading Discovery. You first need to go in and create a new instance of the Discovery service with the Advanced plan. After you do this, the service will take some time to provision. You’ll need to wait patiently while this is done – it’s usually less than 2 minutes.

Now open your Discovery instance by clicking on the link, and then choosing the “Launch Watson Discovery” button on the Manage page. Once the Discovery instance comes up, click on the small icon in the upper right corner of the browser to bring up a dialog that will allow you to “Create Environment”.

Hit the small icon in the upper right, and then the “Create Environment” button…

Then you will be able to select the SIZE of the Discovery instance that you want. You will see a dialog that looks similar to what is shown below:

For our X-Small option, we’ll need to click on “Development”…

Note that you can choose from three different menus: Development (which shows the X-Small option), Small/Medium (which shows the Small through Medium-Large options), and Large (which shows Large through XXX-Large). Choose the size that you want, and then hit the “Set Up” button. This will create your Discovery environment in the size that you want.
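If you prefer to script this rather than click through the UI, the Discovery v1 API also lets you specify the size when creating the environment. A minimal sketch with the Python SDK; the credentials, URL, and environment name are placeholders, and the size codes are the labels from the table earlier in this post:

```python
# Size codes from the pricing table, keyed by the human-readable size name.
SIZE_CODES = {
    "X-Small": "XS", "Small": "S", "Medium-Small": "MS", "Medium": "M",
    "Medium-Large": "ML", "Large": "L", "X-Large": "XL",
    "XX-Large": "XXL", "XXX-Large": "XXXL",
}

def create_sized_environment(api_key, service_url, name, size_label):
    """Create a Discovery environment of the requested size, e.g. 'X-Small'."""
    # Requires the ibm-watson package (pip install ibm-watson).
    from ibm_watson import DiscoveryV1
    from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

    discovery = DiscoveryV1(version="2019-04-30",
                            authenticator=IAMAuthenticator(api_key))
    discovery.set_service_url(service_url)
    return discovery.create_environment(name=name,
                                        size=SIZE_CODES[size_label]).get_result()
```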

What If I Want To Increase the Size of my Discovery Instance?

In the above case, we had to do some specific actions to get a new instance created in a size that we wanted. We also learned that if we wanted to SHRINK in size, we needed to create a new instance and migrate data to the new instance.

What if I have been using Discovery for a while now, and I want to INCREASE in size? How do I do that? It’s actually pretty simple, and it’s also documented in the online help, in the section on upgrading Discovery. That section just provides a link to the API, without a lot of additional explanation, so I’ll give you a bit more detail to make things clearer.

If you look at the Discovery API reference, you’ll see a section on Update an Environment. This is the API call that you can use to upgrade your environment (and thus, the size of your Discovery instance). The API call is spelled out in this doc, and you can get examples in whatever language you want by selecting a type for the example in the black window on the right. In the example below, I chose to look at this in Python.

I wanted to see the API call example in Python, but most people will just use Curl…

Just make sure to use the “size” parameter in your API call, with the right code for the size that you want (the label from the table earlier in this post). That’s all there is to it.
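Here is what that looks like sketched in Python; the curl version is just a PUT to the same Update an Environment endpoint with this JSON body. The API key, URL, and environment ID are placeholders:

```python
import json

# The size codes accepted by the "size" parameter (labels from the table above).
VALID_SIZES = {"XS", "S", "MS", "M", "ML", "L", "XL", "XXL", "XXXL"}

def resize_body(size):
    """JSON body for PUT {url}/v1/environments/{environment_id}?version=2019-04-30."""
    if size not in VALID_SIZES:
        raise ValueError(f"unknown size code: {size}")
    return json.dumps({"size": size})

def upgrade_environment(api_key, service_url, environment_id, size):
    """Upgrade an existing environment; remember, sizes can grow but not shrink."""
    # Requires the ibm-watson package (pip install ibm-watson).
    from ibm_watson import DiscoveryV1
    from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

    discovery = DiscoveryV1(version="2019-04-30",
                            authenticator=IAMAuthenticator(api_key))
    discovery.set_service_url(service_url)
    return discovery.update_environment(environment_id=environment_id,
                                        size=size).get_result()
```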

Getting Swagger API Pages for Watson APIs

In the past couple of weeks, I have seen a few comments from my customers complaining about the lack of “sufficient” API documentation for the various Watson APIs. I used to point my customers to the Swagger API documentation, but I can’t seem to find it anymore. So I asked some of my fellow IBM folks if they knew where these pages were. They didn’t know; they just had some vague notion that the pages were no longer supported.

I miss those Swagger API pages – so I found out how to get them. The IBM development teams no longer host these pages, but you can generate them for yourself, whenever you want, by just following this short little guide.

Go Ahead – Get That Swagger

Step 1 – Figure out which API you want to generate a Swagger page for. Go to the IBM Cloud catalog, and select the service that you want to see. For the purposes of this example, I’ll go and look at the Watson Assistant service.

Step 2 – Get to the API Documentation page by clicking on the link titled View API Docs – as shown below.

Step 2a – You can skip all of this hassle by just going to the IBM Cloud API Docs page, and then selecting the specific API documentation page that you are looking for (in our case, Watson Assistant v1). This is much quicker, and easier to bookmark and remember.

Step 3 – You are now on the Watson Assistant API (V1) page. Look for the ellipsis in the white text portion of the UI, as shown below, and click on it. Save a version of the API by selecting Download OpenAPI Definition. This will download a JSON file to your local machine.

Step 4 – Open a new browser window, and go to the web-based Swagger editor.

Step 5 – In the Swagger editor window, select File -> Import File. Then select your recently downloaded Watson API JSON file (from Step 3).

You can now look at the Swagger version of the Watson API documentation. This allows you to see all of the API calls for the service, along with the various parameters and responses. It also allows you to try out the interface interactively. Pretty nice!!
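Once you have the JSON definition downloaded (from Step 3), you can also inspect it programmatically instead of, or in addition to, loading it into the Swagger editor. A tiny sketch that lists the operations defined in a spec; the sample spec below is a stand-in, not the real Watson Assistant definition:

```python
def summarize_openapi(spec):
    """List the operations (HTTP method + path) defined in a Swagger/OpenAPI spec dict."""
    ops = []
    for path, methods in spec.get("paths", {}).items():
        for method in methods:
            if method.lower() in {"get", "post", "put", "delete", "patch"}:
                ops.append(f"{method.upper()} {path}")
    return sorted(ops)

# In practice you'd load the downloaded file, e.g.:
#   import json; spec = json.load(open("assistant-v1.json"))
sample = {"paths": {"/v1/workspaces": {"get": {}, "post": {}}}}
print(summarize_openapi(sample))  # prints ['GET /v1/workspaces', 'POST /v1/workspaces']
```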

Happy Holidays for 2017

With the end of the year quickly approaching, it is a great time to look back on the past year, and to look forward in anticipation for what is coming in 2018.

2017 was an interesting year.  I saw an explosion in the development of chatbots of various different types.  Some were very simple, others used both Watson Conversation and the Watson Discovery service to provide a deeper user experience – with an ability to answer both short tail and long tail questions.  I saw a huge uptick in interest in basic Cloud approaches, and a lot of interest in containers and Kubernetes.  I expect that both of these trends will continue into 2018.

In 2018 I expect to see the IBM Cloud mature and expand in its ability to quickly provide development, test and production computing environments for our customers.  I also expect that more people will become interested in hybrid cloud approaches, and will want to understand some best practices for managing these complex environments.  I am also excited about some of the excellent cognitive projects that I have seen, which could soon be in production for our customers.  I also expect that some of our more advanced customers will be looking at how cognitive technologies can be incorporated into their current DevOps processes, and how these processes can be expanded into the cloud.

I hope that your 2017 was a good one, and I hope that you have a happy and safe holiday season.

Full Cycle Cognitive Development – Part 1 – Business Concepts

Introduction

I’ve been working with IBM Watson and cognitive computing for a while now, and I have seen a lot of really amazing and innovative applications of cognitive technology by our customers.  Customers take combinations of the cognitive capabilities of Watson, and combine them to provide capabilities, insight, and intelligence that is unique and game changing.

Often these efforts begin as small prototypes, showing the potential of the application being developed.  They are bare-bones, lack an attractive UI, and just show the promise of what could potentially be a new and powerful capability.  They have no security, and don’t address any of the other non-functional requirements of an enterprise application.

As these prototypes evolve, they are used to generate interest in further funding their development – and this is where things begin to get tricky.  Up to this point, the prototype has displayed great promise and raw capabilities with relatively little effort.  The development team is either very small, or limited to people doing this in their “spare time”.  Getting the thing 50% or 75% completed has taken a relatively short amount of time, cost very little, and has been done through the efforts of a small group of people.

Once the enterprise concerns need to be addressed, the costs and effort begin to balloon.  Adding fully functional security to a project requires effort, as does the need to build a scalable and robust architecture.  Management begins to think that the team has gotten dumb, as the “last 25%” of a project takes twice as long and costs twice as much as anticipated.  It is at this point that many organizations abandon their cognitive efforts.

Some organizations do continue their efforts, and when they do, they are rewarded with a new cognitive capability that provides them with benefits that they cannot get with their existing tools and systems.  So what makes these organizations successful, while others are not?  I’ll explore this a bit in this series of blog posts, and discuss what you can do to improve your chances for success.

Getting Some Terminology Straight

This post is focusing on the organizational and business concerns that you will encounter in the development of a cognitive application.  One of the largest sources of confusion between the business people and the technical staff is in the terminology used to describe the maturity of a cognitive application.  Terms like “prototype”, “MVP”, “pilot”, “proof of concept” (or POC), and “production” get thrown around, and there are different understandings of what they mean.

I asked a bunch of Watson architects for their definition of these terms, and we all had some differences of opinion.  So we discussed it, argued a little bit, and generally hashed it out.  We ended up with this progression of application maturity:

Demonstration (Mockup) – this is exploring technology for understanding, developing, and managing requirements.  This is something that is done so people can say, “Now I see what you mean…”

Proof Of Concept (POC) – this is a collection of services and code that shows what cognitive services can do, with a functional but not fully polished UI, that shows the basics of a technology.  It helps determine the feasibility of implementing some cognitive capability, and focuses on the highest risk area.  It is meant to focus on figuring out what you don’t know and can only figure out by putting code on a machine (and in the case of Watson content, by pushing data through our APIs).  The POC is not really polished, and could be completely thrown out (depending on what you learn) before moving to the next phase.

Prototype – This has a more fleshed out architecture than a POC, consider it a big brother to a POC.  This is a simple collection of services and code that show the basics of a technology, and the beginning of an architecture for a product.  It should handle most of the primary use cases for a product. Still not fully polished, it could be considered a bit “rough around the edges”.

Minimum Viable Product (MVP) – this is the next stage: a collection of services and code that provide a fully functional use case, with a “happy path” and some alternative paths fully realized. It can stand on its own. It has a fully functional UI, but potentially a limited set of functionality and capabilities, and may only serve a subset of the intended user population. The core of the architecture has been completed, and there may be extension points for anticipated future enhancements.

Pilot – an initial release of the MVP application to a small subset of the anticipated user community. Used to help solicit end user feedback and to allow for the “hardening” of non-functional pieces of the architecture (performance, scalability, etc.).  It’s where you learn to operationalize the application (even though you should have been doing this all along).

Production – A production system is the full system suitable for use by the complete target user group with the level of robustness and functional and non-functional capabilities that meets stakeholder expectations.

You might agree with this progression, or you may think we’re wrong.  It doesn’t really matter; what does matter is that your technical and development teams and your business stakeholders have a common understanding of the terminology used and the progression that it implies.  Otherwise you run the risk of a huge misalignment of expectations.  Technical people will know that a large amount of work is needed to mature and “harden” a prototype into an MVP, while your business stakeholders may see the prototype and think to themselves, “we’re only a week or two away from being able to make this available to everyone!”.

Have a real solution to a real problem

One of the key things to being successful is having a good idea for a cognitive application.  Many people come up with “cool” ideas for applications, and even create new and inventive capabilities using the IBM Watson cognitive services.  This is great – but it can lead to you burning time on projects that will never be funded or fully supported by your organization.  These types of things should be done as “open source” projects.  Develop your idea, learn more about the capabilities, and share your ideas with the world – while demonstrating your understanding of the cognitive space.

For those of you working for an organization that you want to fund and support your idea, you need to make sure that your idea provides a real solution to a real business problem.  It should provide a competitive advantage, save your organization money, or both.  You should be able to roughly quantify the benefits of your cognitive application.  If you want your organization to spend money to develop your application or solution, they have to have a reasonable expectation that your efforts will be worth the time and money spent.  It also needs to have a clear benefit to the end user.  If nobody wants to use this, then how much benefit can it really have?

The best way to do this is to build a two-slide deck.  I know, I know…. Powerpoint and slide decks are evil – but in this case your effort will save you time and provide a vision for your efforts in the future.  Slide one should be a quick description of the problem that your cognitive application or solution will solve.  Describe the problem, and highlight the negative impact that it has.  Get quotes from leaders within your industry or organization that highlight the negative impact – and use them.  Slide two should be a high level description of your cognitive application or solution.  Don’t get too detailed – just describe how you will solve the problem, and then highlight the benefits of your solution.  Don’t neglect to highlight the consequences of failing to solve the problem; it helps put the effort into context.  Now make sure that EVERYONE on your team has access to this slide deck, since EVERYONE (developers, business analysts, project managers, and the receptionist) on the team should be aware of what your primary goals are, and should be able to communicate that.

I know that this is “Business 101” kind of guidance, but you would be amazed at how many teams are unable to explain WHAT they are building, and WHY it is important.  Even worse is when the technical leadership thinks that they are building one thing, the business leadership thinks they are building something else, and the development team is actually building something that is neither of the two.  It is also important that you keep this deck updated.  Keep in mind that situations change, and that the benefits of your solution may change based on your competitors, the business environment, and other outside factors.  The key here is to keep it simple, brief, and easy for EVERYONE involved to understand.  You need to be able to “sell” this idea to anyone who questions your use of people, time and money – from the most senior Executive, to the most junior developer.

Conclusion

So now we have an idea, a mission for the team, and some succinct benefits that your cognitive application or solution can provide.  That should help us articulate our vision and our mission to our technical and business stakeholders.  We should have ALL of the people involved sharing a similar understanding of how things will progress, and what it is that we are actually building.  In Part 2 of this series of blog posts, I will explore the use of Agile development methods, and how to dig into the real work of developing a cognitive application.

Thanks to my fellow Watson architects for their input and observations on cognitive application maturity.  The version that I shared in this blog post bears little resemblance to what I originally started with….

Watson Application Starter Kits

In an earlier blog post on learning about Watson (see Learning About Watson – Getting Started) I mentioned using the Watson Application Starter Kits (ASKs).  As I talk to and work with more developers who are looking to become familiar with the Watson cognitive services, I find that many of them seem to really like these.  So what is an ASK?

An ASK is a shared set of code and other assets that developers can use as a starting point for doing development of cognitive applications that use the Watson services.  It also provides a sample application so you can see what an end user will see.  What developers REALLY like about these ASKs is that they include real working code – and provide a starting point for some of the more common use cases being implemented with Watson (like the Answer Retrieval ASK).  You can simply fork (or download) the GitHub project containing the ASK code, and see all of the code needed for a working cognitive application. I know that a lot of bootcamp teams will use ASKs as a quick way to put together the basic infrastructure for their cognitive use cases – it gives them a starting point that executes and works.  They can then modify the UI and the business logic to fit their own needs.

This week our development team (led by Lakshmi Krishnamurthy) is releasing a new ASK.  This ASK is implemented in Node.js, and it uses the Watson Conversation service to create a Text Message Chatbot.  It’s pretty simple, and you can see a quick demo of the text message chatbot online.  This can serve as a great starting point for your own work in creating a chatbot that serves a need that you have.