
IBM Watson Json Easy Guide


WHY JSON?


You probably know that you can build a Watson Chatbot solely by using the Watson GUI to create intents, entities and dialogs, and glue them all together into a Watson skill. For simple and small skills, with fewer than ten or twenty of the mentioned elements, that works fine. But for larger skills that need to grow and be maintained, the GUI gets too troublesome and time-consuming, especially with many dialog boxes referring to many intents!

Therefore IBM has made it possible to represent a skill in the descriptive language Json, which, like XML or HTML, is plain text and can be computer generated! This blog post explains the basic elements of the Watson Json structure, and is mainly aimed at developers and Watson architects.

BEST PRACTICE!

Best practice for building effective Watson Assistant Chatbots with good and comprehensive Dialog Skills is to (programmatically) create the Json code that defines the Skill, and then import it into your Watson Assistant in the IBM Cloud to create the Skill!

The Json code itself is very structured and basically consists of only four different blocks, which I will explain to you here.

It is a good idea to use a Json parser while constructing the code, because wrong Json syntax is one of the biggest problems when importing Skills via Json! There are many standalone and online tools for the job, and I often use JsonFormatter:

https://jsonformatter.org/json-parser
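You can also validate the code locally. The sketch below is my own little helper (not part of any Watson tooling); it uses Python's built-in json module to report where the first syntax error sits:

```python
import json

def validate_skill_json(text: str) -> bool:
    """Return True if the text parses as valid Json; otherwise
    print where the first syntax error sits and return False."""
    try:
        json.loads(text)
        return True
    except json.JSONDecodeError as err:
        # The parser reports line and column, which quickly exposes a
        # missing comma or a smart quote pasted from a word processor.
        print(f"Invalid Json at line {err.lineno}, column {err.colno}: {err.msg}")
        return False

# A trailing comma is one of the most common mistakes:
validate_skill_json('{"intents": [],}')   # invalid
validate_skill_json('{"intents": []}')    # valid
```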

My example is a simple chatbot where you can ask for the address of a company, using different synonyms for asking, and you will get back the answer "Our business is located in 'Watson Square 5, NY 10040' and 'Discovery Avenue, AZ 85010'" or the usual "I didn't get your meaning." if the bot couldn't recognize anything triggering an intent.

Block 1: INTENTS

"intents": [
{
"intent": "business_address",
"examples":
[
{
"text": "What is the address of your @business ?"
},
{
"text": "What is your @business location ?"
},
{
"text": "Where is your @business ?"
}
],
"description": "Addresses for the business"
}
],

As you can see above, the INTENTS section of the bot starts with the keyword "intents", followed by a "[" and the descriptor "intent": "descriptor_name", which in our case is "business_address".

After that, you list all the examples of that Intent, introduced by the descriptor "examples", followed by each example.

A Watson Assistant Chatbot will normally contain many different intents, and they are then listed as sections beneath the first intent. The "intents" section ends with the "]", which tells the importer/parser that all intents have been defined for import.

Remember to follow the Json rules by getting all the "{", "}", "[", "]" and commas right!
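Because the structure is this regular, the intents block is easy to generate programmatically, and a serializer gets all the brackets and commas right for you. A minimal Python sketch (the helper name make_intent is my own invention, not a Watson API):

```python
import json

def make_intent(name, examples, description=""):
    """Build one entry for the "intents" block of a Watson skill."""
    return {
        "intent": name,
        "examples": [{"text": t} for t in examples],
        "description": description,
    }

intents = [
    make_intent(
        "business_address",
        [
            "What is the address of your @business ?",
            "What is your @business location ?",
            "Where is your @business ?",
        ],
        "Addresses for the business",
    )
]

# json.dumps guarantees correct quotes, commas and brackets.
print(json.dumps({"intents": intents}, indent=2))
```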

Block 2: ENTITIES

"entities": [
{"entity": "business",
"values": [
{
"type": "synonyms",
"value": "business",
"synonyms": [
"agency",
"bureau",
"firm",
"office",
"shop"
]
}
],
"fuzzy_match": true
}
],

Like Intents, Entities have their own section "entities", and a descriptor "entity": "descriptor_name" that defines the entity, which in our case is "business".

After that, you list all the entity synonyms as "values". The setting "fuzzy_match": true means that the user doesn't have to write the exact value to get a match – a partial match (like "bureu" for "bureau") will do, which is not the best choice in all cases!

A Watson Assistant Chatbot will normally contain many different entities, and they are then listed as sections beneath the first entity. The "entities" section ends with the "]", which tells the importer/parser that all entities have been defined for import.
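The entities block can be generated the same way as the intents. Again, make_entity is just an illustrative helper of mine, not part of Watson:

```python
import json

def make_entity(name, value, synonyms, fuzzy=True):
    """Build one entry for the "entities" block of a Watson skill."""
    return {
        "entity": name,
        "values": [
            {
                "type": "synonyms",
                "value": value,
                "synonyms": synonyms,
            }
        ],
        "fuzzy_match": fuzzy,
    }

entities = [
    make_entity("business", "business",
                ["agency", "bureau", "firm", "office", "shop"])
]
print(json.dumps({"entities": entities}, indent=2))
```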

Block 3: DIALOG_NODES

"dialog_nodes": [

{
"type": "standard",
"title": "Welcome",
"output": {
"generic": [
{
"values": [
{
"text": "Hello. How can I help you?"
}
],
"response_type": "text",
"selection_policy": "sequential"
}
]
},
"conditions": "welcome",
"dialog_node": "Welcome"
},

{
"type": "standard",
"title": "Business address",
"output": {
"generic": [
{
"values": [
{
"text": "Our business is located in 'Watson Square 5, NY 10040' and 'Discovery Avenue, AZ 85010'"
}
],
"response_type": "text",
"selection_policy": "sequential"
}
]
},
"conditions": "#business_address",
"dialog_node": "Business_Address",
"previous_sibling": "Welcome"
},

{
"type": "standard",
"title": "Anything else",
"output": {
"generic": [
{
"values": [
{
"text": "I didn't understand. You can try rephrasing."
},
{
"text": "Can you reword your statement? I'm not understanding."
},
{
"text": "I didn't get your meaning."
}
],
"response_type": "text",
"selection_policy": "sequential"
}
]
},
"conditions": "anything_else",
"dialog_node": "Anything_Else",
"previous_sibling": "Business_Address",
"disambiguation_opt_out": true
}
],

The DIALOG_NODES section normally consists of a minimum of three subsections, where the first is a "Welcome" and the last is the "Anything_else" (error handling) section. All the other subsections contain the real dialogues with the user and can be seen as handlers for the previously mentioned intents.

Each subsection has a name, defined by the descriptor “dialog_node” : “node name”. In our example, the node names are: “Welcome“, “Business_Address” and “Anything_Else“.

To be able to define the right order of the dialogues when importing, each dialogue has the descriptor “previous_sibling“, which is, of course, not true for the “Welcome” dialogue that initiates the flow – it has no predecessor!

Some of the descriptors are system specific to Watson, such as "type" and "output", while others are part of the flow. The one that links a dialogue to a certain intent is the descriptor "conditions".

For the dialog_node "Business_Address", "conditions" is given the value "#business_address", and as you might know from the Watson GUI, the "#" defines an intent! So the "Business_Address" dialogue is triggered by the questions in the #business_address intent.

The answer (the text returned to the user) in the dialog_node is defined in "values", and in our case it returns the two addresses of the company that we run.
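Getting every "previous_sibling" right by hand is error-prone, so when generating the nodes it is easier to chain them in code. A sketch, with a hypothetical helper chain_dialog_nodes of my own:

```python
def chain_dialog_nodes(nodes):
    """Give every node except the first a "previous_sibling"
    pointing at the node before it, preserving the list order."""
    for prev, node in zip(nodes, nodes[1:]):
        node["previous_sibling"] = prev["dialog_node"]
    return nodes

dialog_nodes = chain_dialog_nodes([
    {"type": "standard", "title": "Welcome",
     "conditions": "welcome", "dialog_node": "Welcome"},
    {"type": "standard", "title": "Business address",
     "conditions": "#business_address", "dialog_node": "Business_Address"},
    {"type": "standard", "title": "Anything else",
     "conditions": "anything_else", "dialog_node": "Anything_Else"},
])
# The "Welcome" node gets no "previous_sibling" – it has no predecessor.
```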

Block 4: MAIN

"counterexamples": [],
"system_settings": {
"off_topic": {
"enabled": true
},
"disambiguation": {
"prompt": "Did you mean:",
"enabled": true,
"randomize": true,
"max_suggestions": 5,
"suggestion_text_policy": "title",
"none_of_the_above_prompt": "None of the above"
},
"system_entities": {
"enabled": true
},
"human_agent_assist": {
"prompt": "Did you mean:"
},
"intent_classification": {
"training_backend_version": "v2"
},
"spelling_auto_correct": true
},
"learning_opt_out": false,
"name": "Business Address Skill",
"language": "en",
"description": "Simple chatbot for a business address"
}

The last block of the skill is the MAIN block, which mostly contains system-specific descriptors for the IBM Watson Json. The most important one to the user is the descriptor "name", which simply defines the name of the whole skill! In our case "Business Address Skill".

There can be many other descriptors defining a skill, making it considerably more complex and sophisticated, but the ones shown here make up a simple skill!
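Putting it all together, the four blocks can be combined and written to a file that is ready for upload. The function write_skill below is my own sketch, not a Watson API; the descriptor names match the blocks shown above:

```python
import json

def write_skill(path, intents, entities, dialog_nodes,
                name="Business Address Skill", language="en",
                description="Simple chatbot for a business address"):
    """Combine the four blocks into one skill document and
    write it as Json, ready for import into Watson Assistant."""
    skill = {
        "intents": intents,
        "entities": entities,
        "dialog_nodes": dialog_nodes,
        "counterexamples": [],
        "learning_opt_out": False,
        "name": name,
        "language": language,
        "description": description,
    }
    with open(path, "w", encoding="utf-8") as f:
        json.dump(skill, f, indent=2)
    return skill

# Write an (empty) skeleton skill to disk:
skill = write_skill("business_address_skill.json", [], [], [])
```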

IBM Watson Skill upload

[Screenshot: the Skill upload menu in the IBM Cloud]

In this menu you can see how you upload the Json code to the IBM Cloud in order to create the Watson Skill.

As you can see, there’s also a Download menu item, and that’s where I first learned about Watson Json, because here you can also export the Json code from an existing Skill for investigation and further development.
