Natural Language Processing with Transformers

  • Lewis Tunstall
  • Leandro von Werra
  • Thomas Wolf
Publisher: O'Reilly Media
ISBN-13: 9781098103248
ISBN-10: 1098103246

Paperback & Hardcover deals ―

  • Amazon India
  • Flipkart
  • Snapdeal
  • SapnaOnline
  • Jain Book Agency
  • Books Wagon: ₹3,805
  • Book Chor
  • Crossword
  • DC Books

e-book & Audiobook deals ―

  • Amazon India
  • Google Play Books
  • Audible

* Prices may vary over time. Where no price is shown, it could not be fetched automatically; please check the store's website directly.

About the book ―

Natural Language Processing with Transformers is written by Lewis Tunstall, Leandro von Werra, and Thomas Wolf, and published by O'Reilly Media. Its ISBN identifiers are 1098103246 (ISBN-10) and 9781098103248 (ISBN-13).

Since their introduction in 2017, Transformers have quickly become the dominant architecture for achieving state-of-the-art results on a variety of natural language processing tasks. If you're a data scientist or machine learning engineer, this practical book shows you how to train and scale these large models using Hugging Face Transformers, a Python-based deep learning library.

Transformers have been used to write realistic news stories, improve Google Search queries, and even create chatbots that tell corny jokes. In this guide, authors Lewis Tunstall, Leandro von Werra, and Thomas Wolf use a hands-on approach to teach you how Transformers work and how to integrate them into your applications. You'll quickly learn the variety of tasks they can help you solve:

  • Build, debug, and optimize Transformer models for core NLP tasks, such as text classification, named entity recognition, and question answering
  • Learn how Transformers can be used for cross-lingual transfer learning
  • Apply Transformers in real-world scenarios where labeled data is scarce
  • Make Transformer models efficient for deployment using techniques such as distillation, pruning, and quantization
  • Train Transformers from scratch and learn how to scale to multiple GPUs and distributed environments
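
For a quick taste of the library the book is built around, here is a minimal sketch (not taken from the book itself) of the Hugging Face Transformers pipeline API applied to text classification, one of the core tasks listed above. It assumes transformers and a backend such as PyTorch are installed; the checkpoint named below is the pipeline's stock sentiment-analysis model and is purely illustrative.

    # Minimal sketch: text classification with the Hugging Face
    # Transformers pipeline API. Assumes `pip install transformers torch`.
    from transformers import pipeline

    # Load a ready-made text-classification pipeline; the model weights
    # are downloaded from the Hugging Face Hub on first use.
    classifier = pipeline(
        "text-classification",
        model="distilbert-base-uncased-finetuned-sst-2-english",
    )

    # Classify a couple of example sentences and print label + score.
    for result in classifier([
        "Transformers are remarkably versatile models.",
        "Debugging distributed training can be painful.",
    ]):
        print(result["label"], round(result["score"], 3))

The same pipeline function also exposes several of the other tasks listed above (for example "ner" for named entity recognition and "question-answering"), making it a natural starting point before the full training and optimization workflows the book covers.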