Erika Dsouza edited this page Sep 12, 2018 · 16 revisions

Version 2.0.1

2018-07-12

GENERAL CHANGES FOR ALL SERVICES:

  • DetailedResponse is now the default response type; it contains the result, the headers, and the HTTP status code. Previously, the response contained only the direct response from the service. Use get_result() to obtain the method's result.
    from watson_developer_cloud import AssistantV1

    assistant = AssistantV1(
        username='xxx',
        password='yyy',
        version='2017-04-21')

    response = assistant.list_workspaces(headers={'Custom-Header': 'custom_value'})
    print(response.get_result())
    print(response.get_headers())
    print(response.get_status_code())
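Conceptually, the new wrapper behaves like this minimal stand-in (illustrative only, not the SDK's actual implementation):

```python
class DetailedResponse:
    """Minimal stand-in for the SDK's DetailedResponse wrapper."""

    def __init__(self, result, headers, status_code):
        self.result = result            # the service's direct response
        self.headers = headers          # HTTP response headers
        self.status_code = status_code  # HTTP status code

    def get_result(self):
        return self.result

    def get_headers(self):
        return self.headers

    def get_status_code(self):
        return self.status_code

# Example values, mirroring what list_workspaces() might return
resp = DetailedResponse({'workspaces': []}, {'X-Request-Id': 'abc'}, 200)
```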
  • iam_api_key renamed to iam_apikey. The constructor for each service now looks like:
    def __init__(
            self,
            url=default_url,
            username=None,
            password=None,
            iam_apikey=None,
            iam_access_token=None,
            iam_url=None,
    ):
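Existing call sites that pass the old keyword can be updated mechanically; a minimal sketch (the helper name migrate_credentials is hypothetical, not part of the SDK, and 'zzz' is a placeholder credential):

```python
def migrate_credentials(kwargs):
    """Rename the pre-2.0 iam_api_key keyword to iam_apikey."""
    if 'iam_api_key' in kwargs:
        kwargs['iam_apikey'] = kwargs.pop('iam_api_key')
    return kwargs

# Old-style keyword arguments get renamed in place
creds = migrate_credentials({'iam_api_key': 'zzz'})
```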

PERSONALITY INSIGHTS:

  • profile method parameter reordering:
    def profile(self,
                content,
                content_type,
                accept=None,
                content_language=None,
                accept_language=None,
                raw_scores=None,
                csv_headers=None,
                consumption_preferences=None,
                **kwargs):

VISUAL RECOGNITION:

  • classify no longer supports the parameters keyword; the new interface is:
    def classify(self,
                 images_file=None,
                 accept_language=None,
                 url=None,
                 threshold=None,
                 owners=None,
                 classifier_ids=None,
                 images_file_content_type=None,
                 images_filename=None,
                 **kwargs):
  • detect_faces no longer supports the parameters keyword; the new interface is:
    def detect_faces(self,
                     images_file=None,
                     url=None,
                     images_file_content_type=None,
                     images_filename=None,
                     **kwargs):
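Callers migrating off the removed parameters keyword can expand the old JSON blob into the new explicit keyword arguments; a self-contained sketch (the helper name expand_classify_parameters is hypothetical, not part of the SDK):

```python
import json

def expand_classify_parameters(parameters_json):
    """Turn the deprecated `parameters` JSON blob into the explicit
    keyword arguments accepted by the 2.x classify() signature."""
    allowed = {'url', 'threshold', 'owners', 'classifier_ids'}
    params = json.loads(parameters_json)
    return {k: v for k, v in params.items() if k in allowed}

kwargs = expand_classify_parameters(
    '{"threshold": 0.1, "classifier_ids": ["default"]}')
# then: visual_recognition.classify(images_file=f, **kwargs)
```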

SPEECH TO TEXT:

  • recognize: parameters reordered and the version parameter renamed to base_model_version
    def recognize(self,
                  audio,
                  content_type,
                  model=None,
                  customization_id=None,
                  acoustic_customization_id=None,
                  base_model_version=None,
                  customization_weight=None,
                  inactivity_timeout=None,
                  keywords=None,
                  keywords_threshold=None,
                  max_alternatives=None,
                  word_alternatives_threshold=None,
                  word_confidence=None,
                  timestamps=None,
                  profanity_filter=None,
                  smart_formatting=None,
                  speaker_labels=None,
                  **kwargs):
  • create_job: parameters reordered and the version parameter renamed to base_model_version
    def create_job(self,
                   audio,
                   content_type,
                   model=None,
                   callback_url=None,
                   events=None,
                   user_token=None,
                   results_ttl=None,
                   customization_id=None,
                   acoustic_customization_id=None,
                   base_model_version=None,
                   customization_weight=None,
                   inactivity_timeout=None,
                   keywords=None,
                   keywords_threshold=None,
                   max_alternatives=None,
                   word_alternatives_threshold=None,
                   word_confidence=None,
                   timestamps=None,
                   profanity_filter=None,
                   smart_formatting=None,
                   speaker_labels=None,
                   **kwargs):
  • add_corpus no longer supports corpus_file_content_type and corpus_filename. The corpus_file must be a plain text file.
    def add_corpus(self,
                   customization_id,
                   corpus_name,
                   corpus_file,
                   allow_overwrite=None,
                   **kwargs):
  • add_word
    def add_word(self,
                 customization_id,
                 word_name,
                 word=None,
                 sounds_like=None,
                 display_as=None,
                 **kwargs):
  • recognize_using_websocket
    • A new underlying websocket client is now used
    • audio is of type AudioSource
    • recognize_callback’s on_transcription() and on_hypothesis() results are swapped with each other
    def recognize_using_websocket(self,
                                  audio,
                                  content_type,
                                  recognize_callback,
                                  model=None,
                                  customization_id=None,
                                  acoustic_customization_id=None,
                                  customization_weight=None,
                                  base_model_version=None,
                                  inactivity_timeout=None,
                                  interim_results=None,
                                  keywords=None,
                                  keywords_threshold=None,
                                  max_alternatives=None,
                                  word_alternatives_threshold=None,
                                  word_confidence=None,
                                  timestamps=None,
                                  profanity_filter=None,
                                  smart_formatting=None,
                                  speaker_labels=None,
                                  http_proxy_host=None,
                                  http_proxy_port=None,
                                  **kwargs):
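The on_transcription()/on_hypothesis() swap means interim hypotheses now arrive via on_hypothesis and final results via on_transcription. A minimal stand-in illustrating the new contract (no SDK required; only the two method names match the SDK's RecognizeCallback, the rest is illustrative):

```python
class RecognizeCallback:
    """Stand-in sketch of the SDK's RecognizeCallback interface."""

    def __init__(self):
        self.hypotheses = []
        self.transcripts = []

    def on_hypothesis(self, hypothesis):
        # Interim hypothesis text (pre-2.0, this data went to on_transcription)
        self.hypotheses.append(hypothesis)

    def on_transcription(self, transcript):
        # Final transcription results (pre-2.0, this data went to on_hypothesis)
        self.transcripts.append(transcript)

cb = RecognizeCallback()
cb.on_hypothesis('hello wor')
cb.on_transcription([{'transcript': 'hello world'}])
```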

Version 1.1.0

2018-03-09

Conversation

  • Change: update_workspace() has a new parameter append
  • Change: message() has a new parameter nodes_visited_details
  • Change: get_workspace(), list_workspaces(), get_intent(), list_intents(), get_example(), list_examples(), get_entity(), list_entities(), get_value(), list_values(), get_synonym(), list_synonyms(), get_dialog_node(), list_dialog_nodes(), get_counterexample(), list_counterexamples() have a new parameter include_audit

Discovery

  • New: create_expansions(), delete_expansions() and list_expansions() have been added
  • Change: federated_query(), federated_query_notices(), query(), query_notices() have new parameters similar, similar_document_ids and similar_fields

Speech to Text

  • New: recognize_with_websocket() method has been added. For more information, see the speech to text examples; a microphone example has also been added.
    def recognize_with_websocket(self,
                               audio=None,
                               content_type='audio/l16; rate=44100',
                               model='en-US_BroadbandModel',
                               recognize_callback=None,
                               customization_id=None,
                               acoustic_customization_id=None,
                               customization_weight=None,
                               version=None,
                               inactivity_timeout=None,
                               interim_results=True,
                               keywords=None,
                               keywords_threshold=None,
                               max_alternatives=1,
                               word_alternatives_threshold=None,
                               word_confidence=False,
                               timestamps=False,
                               profanity_filter=None,
                               smart_formatting=False,
                               speaker_labels=None):
  • Change: SpeechToTextV1.__init__() does not take in keyword arguments. Its method signature is now:
      def __init__(self, url=default_url, username=None, password=None)
  • Change: recognize() deprecates the parameters continuous and interim_results. The parameter order has also changed; apps that call recognize() with positional arguments may need to adopt the new ordering. The method signature is now:
          def recognize(self,
                  model=None,
                  customization_id=None,
                  acoustic_customization_id=None,
                  customization_weight=None,
                  version=None,
                  audio=None,
                  content_type='audio/basic',
                  inactivity_timeout=None,
                  keywords=None,
                  keywords_threshold=None,
                  max_alternatives=None,
                  word_alternatives_threshold=None,
                  word_confidence=None,
                  timestamps=None,
                  profanity_filter=None,
                  smart_formatting=None,
                  speaker_labels=None):
  • Change: add_corpus() parameter file_data renamed to corpus_file; an additional parameter corpus_filename has been added
  • Deprecated: models; use list_models
  • Deprecated: create_custom_model; use create_language_model
  • Deprecated: delete_custom_model; use delete_language_model
  • Deprecated: get_custom_model; use get_language_model
  • Deprecated: list_custom_models; use list_language_models
  • Deprecated: train_custom_model; use train_language_model
  • Deprecated: add_custom_word; use add_word
  • Deprecated: add_custom_words; use add_words
  • Deprecated: delete_custom_word; use delete_word
  • Deprecated: get_custom_word; use get_word
  • Deprecated: list_custom_words; use list_words
  • New: methods to handle asynchronous recognitions have been added:
    • check_job, check_jobs, create_job and delete_job
  • New: methods to register and unregister callback URLs:
    • register_callback and unregister_callback
  • New: methods around language models:
    • reset_language_model and upgrade_language_model
  • New: methods supporting custom acoustic models:
    • create_acoustic_model, delete_acoustic_model, get_acoustic_model, list_acoustic_models, reset_acoustic_model and train_acoustic_model
  • New: methods supporting custom audio resources:
    • add_audio, delete_audio, get_audio, list_audio

Text to Speech

  • Change: TextToSpeechV1.__init__() does not take in keyword arguments. Its method signature is now:
      def __init__(self, url=default_url, username=None, password=None)
  • Change: synthesize returns a response object
  • Deprecated: voices; use list_voices
  • Deprecated: pronunciation; use get_pronunciation
  • Deprecated: create_customization; use create_voice_model
  • Deprecated: delete_customization; use delete_voice_model
  • Deprecated: get_customization; use get_voice_model
  • Deprecated: customizations; use list_voice_models
  • Deprecated: update_customization; use update_voice_model
  • Deprecated: set_customization_word; use add_word
  • Deprecated: add_customization_words; use add_words
  • Deprecated: delete_customization_word; use delete_word
  • Deprecated: get_customization_word; use get_word
  • Deprecated: get_customization_words; use list_words
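During the deprecation window the old names typically remain as warning aliases. One common pattern for such an alias (illustrative only, not the SDK's actual implementation; list_voices here is a stub returning sample data):

```python
import warnings

def deprecated(replacement):
    """Decorator: route a deprecated method name to its replacement,
    emitting a DeprecationWarning on each call."""
    def wrap(fn):
        def inner(*args, **kwargs):
            warnings.warn(
                '%s is deprecated, use %s' % (fn.__name__, replacement.__name__),
                DeprecationWarning)
            return replacement(*args, **kwargs)
        return inner
    return wrap

def list_voices():
    """Stub replacement method returning sample data."""
    return ['en-US_AllisonVoice']

@deprecated(list_voices)
def voices():
    pass

result = voices()  # warns, then delegates to list_voices()
```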

Visual Recognition

  • Change: In classify(), parameters is deprecated. Pass the values directly via url, threshold, owners and classifier_ids:
     def classify(self,
                  images_file=None,
                  accept_language=None,
                  images_file_content_type=None,
                  images_filename=None,
                  url=None,
                  threshold=None,
                  owners=None,
                  classifier_ids=None):
  • Change: In detect_faces(), parameters is deprecated. Pass the value directly via url:
       def detect_faces(self,
                        images_file=None,
                        images_file_content_type=None,
                        images_filename=None,
                        url=None):

Tone Analyzer

  • Change: In tone(), the parameter reordering is a breaking change. The new order is:
    def tone(self,
             tone_input,
             content_type,
             sentences=None,
             tones=None,
             content_language=None,
             accept_language=None)

Version 1.0 (2017-11-13)

This version of the SDK accepts either models or dicts as input parameters and produces dicts as method responses. Models for response classes are still generated and not pruned, so users can create a model from the returned dict.

Conversation

  • message() parameter message_input renamed to input

Discovery

  • create_configuration() parameter config_data (a dict such as {"name": ""}) replaced by the name parameter
  • update_configuration() parameter config_data (a dict such as {"name": ""}) replaced by the name parameter
  • add_document() parameter file_data is removed. File contents are now passed with the file/filename parameters.
  • update_document() parameter mime_type renamed to file_content_type; file_info and file_data replaced by file; filename is the file name given to the file
  • Some methods have been renamed:
    • get_environments -> list_environments
    • test_document -> test_configuration_in_environment
    • get_document -> get_document_status
    • delete_training_data -> delete_all_training_data
    • add_training_data_query -> add_training_data
    • delete_training_data_query -> delete_training_data
    • get_training_data_query -> get_training_data
    • add_training_data_query_example -> create_training_example
    • delete_training_data_query_example -> delete_training_example
    • get_training_data_query_example -> get_training_example
    • update_training_data_query_example -> update_training_example
  • list_training_data_query_examples() is removed
  • query() parameter query_options changed to filter

Language Translator

  • Some methods have been renamed:
    • get_models -> list_models
    • get_identifiable_languages -> list_identifiable_languages

Natural Language Classifier

  • Some methods have been renamed:
    • list -> list_classifiers
    • status -> get_classifier
    • create -> create_classifier
      • create_classifier() parameter metadata has been added
    • remove -> delete_classifier

Natural Language Understanding

  • analyze() parameter limit_text_characters has been added

  • Dropped hand-written Features module in favor of generated Features model. For example:

    natural_language_understanding.analyze(
        text='Messi is the best',
        features=[Features.Entities(), Features.Keywords()])

    is now:

    natural_language_understanding.analyze(
        text='Messi is the best',
        features=Features(entities=EntitiesOptions(), keywords=KeywordsOptions()))

Tone Analyzer

  • tone() parameters have been reordered:

    tone(self, tone_input, content_type='application/json', sentences=None,
         tones=None, content_language=None, accept_language=None)
  • tone() parameter text replaced by tone_input

  • tone() parameter content_type default value changed from text/plain to application/json

  • tone() parameters content_language and accept_language have been added

    tone(self, text, tones=None, sentences=None, content_type='text/plain')

    is now:

    tone(tone_input, content_type='application/json', sentences=None, content_language=None, accept_language=None):

Personality Insights

  • profile() parameter text changed to content
  • profile() parameter content_type default value changed from text/plain to application/json
  • profile() parameter accept is removed

Visual Recognition

  • classify parameters images_url, classifier_ids, owners, and threshold replaced with a single parameters argument.

    classify(images_file=images_file, threshold=0.1, classifier_ids=['CarsvsTrucks_1479118188', 'default'])

    is now:

    parameters = json.dumps({'threshold': 0.1, 'classifier_ids': ['CarsvsTrucks_1479118188', 'default']})
    visual_recognition.classify(images_file=images_file, parameters=parameters)