How to Search Most Viewed Videos on YouTube API

· 14 min read · Updated May 2022 · Application Programming Interfaces

YouTube is no doubt the biggest video-sharing website on the Internet. It is one of the main sources of education, entertainment, advertising, and more. Since it's a data-rich website, accessing its API will enable you to obtain almost all of the YouTube data.

In this tutorial, we'll cover how to get YouTube video details and statistics, search by keyword, get YouTube channel information, and extract comments from both videos and channels, using YouTube API with Python.

Here is the table of contents:

  • Enabling YouTube API
  • Getting Video Details
  • Searching by Keyword
  • Getting YouTube Channel Details
  • Extracting YouTube Comments

Enabling YouTube API

To enable the YouTube Data API, follow these steps:

  1. Go to Google's API Console and create a project, or use an existing one.
  2. In the library panel, search for YouTube Data API v3, click on it, and click Enable.
  3. In the credentials panel, click on Create Credentials and choose OAuth client ID.
  4. Select Desktop App as the application type and continue.
  5. You'll see a window confirming that the OAuth client was created.
  6. Click OK, download the credentials file, and rename it to credentials.json.

Note: If this is the first time you use Google APIs, you may need to create an OAuth consent screen and add your email as a test user.

Now that you have set up the YouTube API, put your credentials.json in the current directory of your notebook/Python file, and let's get started.

First, install the required libraries:

          $ pip3 install --upgrade google-api-python-client google-auth-httplib2 google-auth-oauthlib        

Now let's import the necessary modules we'll need:

from googleapiclient.discovery import build
from google_auth_oauthlib.flow import InstalledAppFlow
from google.auth.transport.requests import Request

import urllib.parse as p
import re
import os
import pickle

SCOPES = ["https://www.googleapis.com/auth/youtube.force-ssl"]

SCOPES is a list of scopes for using the YouTube API; we're using this one so we can view all YouTube data without any problems.

Now let's make the function that authenticates with the YouTube API:

def youtube_authenticate():
    os.environ["OAUTHLIB_INSECURE_TRANSPORT"] = "1"
    api_service_name = "youtube"
    api_version = "v3"
    client_secrets_file = "credentials.json"
    creds = None
    # the file token.pickle stores the user's access and refresh tokens, and is
    # created automatically when the authorization flow completes for the first time
    if os.path.exists("token.pickle"):
        with open("token.pickle", "rb") as token:
            creds = pickle.load(token)
    # if there are no (valid) credentials available, let the user log in.
    if not creds or not creds.valid:
        if creds and creds.expired and creds.refresh_token:
            creds.refresh(Request())
        else:
            flow = InstalledAppFlow.from_client_secrets_file(client_secrets_file, SCOPES)
            creds = flow.run_local_server(port=0)
        # save the credentials for the next run
        with open("token.pickle", "wb") as token:
            pickle.dump(creds, token)
    return build(api_service_name, api_version, credentials=creds)

# authenticate to YouTube API
youtube = youtube_authenticate()

youtube_authenticate() looks for the credentials.json file that we downloaded earlier and tries to authenticate using that file. This will open your default browser the first time you run it, so you can accept the permissions. After that, it saves a new file, token.pickle, that contains the authorized credentials.

It should look familiar if you've used a Google API before, such as the Gmail API, Google Drive API, or others. The prompt in your default browser is to accept the permissions required by the app. If you see a window indicating that the app isn't verified, you may just want to head to Advanced and click on your app name.

Getting Video Details

Now that you have everything set, let's begin by extracting YouTube video details, such as title, description, upload time, and even statistics such as view count, like count, and dislike count.

The following function will help us extract the video ID (which we'll need for the API) from a video URL:

def get_video_id_by_url(url):
    """
    Return the Video ID from the video `url`
    """
    # split URL parts
    parsed_url = p.urlparse(url)
    # get the video ID by parsing the query of the URL
    video_id = p.parse_qs(parsed_url.query).get("v")
    if video_id:
        return video_id[0]
    else:
        raise Exception(f"Wasn't able to parse video URL: {url}")

We simply used the urllib.parse module to get the video ID from a URL.
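As a quick standalone check, here is the same urllib.parse approach on its own (a sketch, independent of the helper above):

```python
import urllib.parse as p

url = "https://www.youtube.com/watch?v=jNQXAC9IVRw&ab_channel=jawed"
# urlparse splits the URL into scheme, netloc, path, query, etc.
parsed = p.urlparse(url)
# parse_qs turns the query string into a dict of lists
query = p.parse_qs(parsed.query)
print(query["v"][0])  # jNQXAC9IVRw
```

Note that parse_qs maps each parameter to a list of values, which is why we index with [0].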

The function below takes a YouTube service object (returned from the youtube_authenticate() function), as well as any keyword argument accepted by the API, and returns the API response for a specific video:

def get_video_details(youtube, **kwargs):
    return youtube.videos().list(
        part="snippet,contentDetails,statistics",
        **kwargs
    ).execute()

Notice we specified a part of snippet, contentDetails and statistics, as these are the most important parts of the response in the API.

We also pass kwargs to the API directly. Next, let's define a function that takes a response returned from the above get_video_details() function and prints the most useful information from a video:

def print_video_infos(video_response):
    items = video_response.get("items")[0]
    # get the snippet, statistics & content details from the video response
    snippet         = items["snippet"]
    statistics      = items["statistics"]
    content_details = items["contentDetails"]
    # get infos from the snippet
    channel_title = snippet["channelTitle"]
    title         = snippet["title"]
    description   = snippet["description"]
    publish_time  = snippet["publishedAt"]
    # get stats infos
    comment_count = statistics["commentCount"]
    like_count    = statistics["likeCount"]
    dislike_count = statistics["dislikeCount"]
    view_count    = statistics["viewCount"]
    # get duration from content details
    duration = content_details["duration"]
    # duration in the form of something like 'PT5H50M15S'
    # parsing it to be something like '5:50:15'
    parsed_duration = re.search(r"PT(\d+H)?(\d+M)?(\d+S)", duration).groups()
    duration_str = ""
    for d in parsed_duration:
        if d:
            duration_str += f"{d[:-1]}:"
    duration_str = duration_str.strip(":")
    print(f"""\
    Title: {title}
    Description: {description}
    Channel Title: {channel_title}
    Publish time: {publish_time}
    Duration: {duration_str}
    Number of comments: {comment_count}
    Number of likes: {like_count}
    Number of dislikes: {dislike_count}
    Number of views: {view_count}
    """)
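The ISO 8601 duration handling above can be pulled out into a tiny standalone helper to see exactly what it does (a sketch for illustration, not part of the original code):

```python
import re

def parse_iso8601_duration(duration):
    # turn e.g. 'PT5H50M15S' into '5:50:15', dropping absent units
    groups = re.search(r"PT(\d+H)?(\d+M)?(\d+S)", duration).groups()
    return ":".join(g[:-1] for g in groups if g)

print(parse_iso8601_duration("PT5H50M15S"))  # 5:50:15
print(parse_iso8601_duration("PT4M26S"))     # 4:26
```

A duration with seconds only, such as "PT19S", comes out as just "19", which matches the demo output below.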

Finally, let's use these functions to extract information from a demo video:

video_url = "https://www.youtube.com/watch?v=jNQXAC9IVRw&ab_channel=jawed"
# parse video ID from URL
video_id = get_video_id_by_url(video_url)
# make API call to get video info
response = get_video_details(youtube, id=video_id)
# print extracted video infos
print_video_infos(response)

We first get the video ID from the URL, then we get the response from the API call, and finally print the data. Here is the output:

    Title: Me at the zoo
    Description: The first video on YouTube. Maybe it's time to go back to the zoo?
    Channel Title: jawed
    Publish time: 2005-04-24T03:31:52Z
    Duration: 19
    Number of comments: 11018071
    Number of likes: 5962957
    Number of dislikes: 153444
    Number of views: 138108884

You see, we used the id parameter to get the details of a specific video. You can also use the same get_video_details() function to get your liked/disliked videos by passing myRating="like" or myRating="dislike" instead of id=video_id.

You can also set multiple video IDs separated by commas, so you make a single API call to get details about multiple videos; check the documentation for more detailed information.
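Batching the IDs can look like the sketch below. The video IDs here are just examples, and the API call itself needs the authenticated youtube object, so it's shown commented out:

```python
# join several video IDs into the comma-separated form the API expects
video_ids = ["jNQXAC9IVRw", "dQw4w9WgXcQ"]  # example IDs for illustration
ids_param = ",".join(video_ids)
print(ids_param)  # jNQXAC9IVRw,dQw4w9WgXcQ

# one request returns details for all of them:
# response = get_video_details(youtube, id=ids_param)
# for item in response["items"]:
#     print(item["snippet"]["title"])
```

Batching saves API quota, since each videos().list call costs the same regardless of how many IDs it carries.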

Searching By Keyword

Searching using the YouTube API is straightforward; we simply pass the q parameter for the query, the same query we use in the YouTube search bar:

def search(youtube, **kwargs):
    return youtube.search().list(
        part="snippet",
        **kwargs
    ).execute()

This time we only care about the snippet, and we use search() instead of videos() as in the previously defined get_video_details() function.

Allow'south, for case, search for "python" and limit the results to only two:

# search for the query 'python' and retrieve 2 items only
response = search(youtube, q="python", maxResults=2)
items = response.get("items")
for item in items:
    # get the video ID
    video_id = item["id"]["videoId"]
    # get the video details
    video_response = get_video_details(youtube, id=video_id)
    # print the video details
    print_video_infos(video_response)
    print("="*50)

We set maxResults to 2 so we retrieve the first two items. Here is a part of the output:

    Title: Learn Python - Full Course for Beginners [Tutorial]
    Description: This course will give you a full introduction into all of the core concepts in python...<SNIPPED>
    Channel Title: freeCodeCamp.org
    Publish time: 2018-07-11T18:00:42Z
    Duration: 4:26:52
    Number of comments: 30307
    Number of likes: 520260
    Number of dislikes: 5676
    Number of views: 21032973
==================================================
    Title: Python Tutorial - Python for Beginners [Full Course]
    Description: Python tutorial - Python for beginners   Learn Python programming for a career in machine learning, data science & web development...<SNIPPED>
    Channel Title: Programming with Mosh
    Publish time: 2019-02-18T15:00:08Z
    Duration: 6:14:7
    Number of comments: 38019
    Number of likes: 479749
    Number of dislikes: 3756
    Number of views: 15575418

You can also specify the order parameter in the search() function to order search results, which can be 'date', 'rating', 'viewCount', 'relevance' (default), 'title', and 'videoCount'.

Another useful parameter is type, which can be 'channel', 'playlist' or 'video'; the default is all of them.
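Combining these two parameters is exactly how you'd search for the most viewed videos on a topic. Here is a sketch assuming the search() helper above; the call itself needs the authenticated youtube object, so it's commented out:

```python
# most-viewed "python" videos: order by view count, restrict to videos only
params = {
    "q": "python",
    "maxResults": 5,
    "order": "viewCount",  # most viewed first, instead of the default 'relevance'
    "type": "video",       # skip channels and playlists in the results
}
print(sorted(params))  # ['maxResults', 'order', 'q', 'type']

# response = search(youtube, **params)
# for item in response["items"]:
#     print(item["snippet"]["title"])
```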

Please check this page for more information about the search().list() method.

Getting YouTube Channel Details

This section will take a channel URL and extract channel information using the YouTube API.

First, we need helper functions to parse the channel URL. The functions below will help us do that:

def parse_channel_url(url):
    """
    This function takes a channel `url` to check whether it includes a
    channel ID, user ID or channel name
    """
    path = p.urlparse(url).path
    id = path.split("/")[-1]
    if "/c/" in path:
        return "c", id
    elif "/channel/" in path:
        return "channel", id
    elif "/user/" in path:
        return "user", id

def get_channel_id_by_url(youtube, url):
    """
    Returns channel ID of a given `id` and `method`
    - `method` (str): can be 'c', 'channel', 'user'
    - `id` (str): if method is 'c', then `id` is display name
        if method is 'channel', then it's channel id
        if method is 'user', then it's username
    """
    # parse the channel URL
    method, id = parse_channel_url(url)
    if method == "channel":
        # if it's a channel ID, then just return it
        return id
    elif method == "user":
        # if it's a user ID, make a request to get the channel ID
        response = get_channel_details(youtube, forUsername=id)
        items = response.get("items")
        if items:
            channel_id = items[0].get("id")
            return channel_id
    elif method == "c":
        # if it's a channel name, search for the channel using the name
        # may be inaccurate
        response = search(youtube, q=id, maxResults=1)
        items = response.get("items")
        if items:
            channel_id = items[0]["snippet"]["channelId"]
            return channel_id
    raise Exception(f"Cannot find ID:{id} with {method} method")
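As a quick sanity check, the URL parser behaves like this on the three URL forms (a condensed copy of the helper above, for illustration; the channel/user names are examples):

```python
import urllib.parse as p

def parse_channel_url(url):
    # condensed copy of the helper above
    path = p.urlparse(url).path
    id = path.split("/")[-1]
    if "/c/" in path:
        return "c", id
    elif "/channel/" in path:
        return "channel", id
    elif "/user/" in path:
        return "user", id

print(parse_channel_url("https://www.youtube.com/channel/UC8butISFwT-Wl7EV0hUK0BQ"))
# ('channel', 'UC8butISFwT-Wl7EV0hUK0BQ')
print(parse_channel_url("https://www.youtube.com/user/freecodecamp"))
# ('user', 'freecodecamp')
print(parse_channel_url("https://www.youtube.com/c/freecodecamp"))
# ('c', 'freecodecamp')
```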

Now we can parse the channel URL. Let's define our functions to call the YouTube API:

def get_channel_videos(youtube, **kwargs):
    return youtube.search().list(
        **kwargs
    ).execute()

def get_channel_details(youtube, **kwargs):
    return youtube.channels().list(
        part="statistics,snippet,contentDetails",
        **kwargs
    ).execute()

We'll be using get_channel_videos() to get the videos of a specific channel, and get_channel_details() will allow us to extract information about a specific YouTube channel.

Now that we have everything, let's make a concrete example:

channel_url = "https://www.youtube.com/channel/UC8butISFwT-Wl7EV0hUK0BQ"
# get the channel ID from the URL
channel_id = get_channel_id_by_url(youtube, channel_url)
# get the channel details
response = get_channel_details(youtube, id=channel_id)
# extract channel infos
snippet = response["items"][0]["snippet"]
statistics = response["items"][0]["statistics"]
channel_country = snippet["country"]
channel_description = snippet["description"]
channel_creation_date = snippet["publishedAt"]
channel_title = snippet["title"]
channel_subscriber_count = statistics["subscriberCount"]
channel_video_count = statistics["videoCount"]
channel_view_count  = statistics["viewCount"]
print(f"""
Title: {channel_title}
Published At: {channel_creation_date}
Description: {channel_description}
Country: {channel_country}
Number of videos: {channel_video_count}
Number of subscribers: {channel_subscriber_count}
Total views: {channel_view_count}
""")
# the following is grabbing channel videos
# number of pages you want to get
n_pages = 2
# counting number of videos grabbed
n_videos = 0
next_page_token = None
for i in range(n_pages):
    params = {
        'part': 'snippet',
        'q': '',
        'channelId': channel_id,
        'type': 'video',
    }
    if next_page_token:
        params['pageToken'] = next_page_token
    res = get_channel_videos(youtube, **params)
    channel_videos = res.get("items")
    for video in channel_videos:
        n_videos += 1
        video_id = video["id"]["videoId"]
        # easily construct video URL by its ID
        video_url = f"https://www.youtube.com/watch?v={video_id}"
        video_response = get_video_details(youtube, id=video_id)
        print(f"================Video #{n_videos}================")
        # print the video details
        print_video_infos(video_response)
        print(f"Video URL: {video_url}")
        print("="*40)
    print("*"*100)
    # if there is a next page, then add it to our parameters
    # to proceed to the next page
    if "nextPageToken" in res:
        next_page_token = res["nextPageToken"]

We first get the channel ID from the URL, then we make an API call to get the channel details and print them.

After that, we specify the number of pages of videos we want to extract. The default is five videos per page, and we can change that by passing the maxResults parameter.
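The paging parameters can be sketched like this; the channel ID is the freeCodeCamp.org one used above, and the page token value is hypothetical (in practice it comes from res["nextPageToken"]):

```python
# page of up to 50 channel videos instead of the default 5
params = {
    "part": "snippet",
    "channelId": "UC8butISFwT-Wl7EV0hUK0BQ",
    "type": "video",
    "maxResults": 50,
}
# after the first request, pass the returned token to fetch the next page
next_page_token = "CAUQAA"  # hypothetical token for illustration
if next_page_token:
    params["pageToken"] = next_page_token
print("pageToken" in params)  # True
```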

We iterate over each video and make an API call to get various information about it, and we use our predefined print_video_infos() to print the video information.

Here is a part of the output:

================Video #1================
    Title: Async + Await in JavaScript, talk from Wes Bos
    Description: Flow Control in JavaScript is hard! ...
    Channel Title: freeCodeCamp.org
    Publish time: 2018-04-16T16:58:08Z
    Duration: 15:52
    Number of comments: 52
    Number of likes: 2353
    Number of dislikes: 28
    Number of views: 74562
Video URL: https://www.youtube.com/watch?v=DwQJ_NPQWWo
========================================
================Video #2================
    Title: Protected Routes in React using React Router
    Description: In this video, we will create a protected route using...
    Channel Title: freeCodeCamp.org
    Publish time: 2018-10-16T16:00:05Z
    Duration: 15:40
    Number of comments: 158
    Number of likes: 3331
    Number of dislikes: 65
    Number of views: 173927
Video URL: https://www.youtube.com/watch?v=Y0-qdp-XBJg
...<SNIPPED>

You can get other information; you can print the response dictionary for further details or check the documentation for this endpoint.

Extracting YouTube Comments

The YouTube API allows us to extract comments; this is useful if you want to get comments for your text classification project or something similar.

The function below takes care of making an API call to commentThreads():

def get_comments(youtube, **kwargs):
    return youtube.commentThreads().list(
        part="snippet",
        **kwargs
    ).execute()

The code below extracts comments from a YouTube video:

# URL can be a channel or a video, to extract comments
url = "https://www.youtube.com/watch?v=jNQXAC9IVRw&ab_channel=jawed"
if "watch" in url:
    # that's a video
    video_id = get_video_id_by_url(url)
    params = {
        'videoId': video_id,
        'maxResults': 2,
        'order': 'relevance', # default is 'time' (newest)
    }
else:
    # should be a channel
    channel_id = get_channel_id_by_url(youtube, url)
    params = {
        'allThreadsRelatedToChannelId': channel_id,
        'maxResults': 2,
        'order': 'relevance', # default is 'time' (newest)
    }
# get the first 2 pages (2 API requests)
n_pages = 2
for i in range(n_pages):
    # make API call to get all comments from the channel (including posts & videos)
    response = get_comments(youtube, **params)
    items = response.get("items")
    # if items is empty, break out of the loop
    if not items:
        break
    for item in items:
        comment = item["snippet"]["topLevelComment"]["snippet"]["textDisplay"]
        updated_at = item["snippet"]["topLevelComment"]["snippet"]["updatedAt"]
        like_count = item["snippet"]["topLevelComment"]["snippet"]["likeCount"]
        comment_id = item["snippet"]["topLevelComment"]["id"]
        print(f"""\
        Comment: {comment}
        Likes: {like_count}
        Updated At: {updated_at}
        ==================================\
        """)
    if "nextPageToken" in response:
        # if there is a next page
        # add next page token to the params we pass to the function
        params["pageToken"] = response["nextPageToken"]
    else:
        # must be the end of comments!
        break
    print("*"*70)

You can also change the url variable to a YouTube channel URL so that it will pass allThreadsRelatedToChannelId instead of videoId as a parameter to the commentThreads() API.

We're extracting two comments per page and two pages, so four comments in total. Here is the output:

        Comment: We're so honored that the first ever YouTube video was filmed here!
        Likes: 877965
        Updated At: 2020-02-17T18:58:15Z
        ==================================
        Comment: Wow, still in your recommended in 2021? Nice! Yay
        Likes: 10951
        Updated At: 2021-01-04T15:32:38Z
        ==================================
        **********************************************************************
        Comment: How many are seeing this video now
        Likes: 7134
        Updated At: 2021-01-03T19:47:25Z
        ==================================
        Comment: The first youtube video Ever. Wow.
        Likes: 865
        Updated At: 2021-01-05T00:55:35Z
        ==================================
        **********************************************************************

We're extracting the comment itself, the number of likes, and the last updated date; you can explore the response dictionary to get various other useful data.

You're free to edit the parameters we passed, such as increasing maxResults or changing order. Please check the page for this API endpoint.

Conclusion

The YouTube Data API provides a lot more than what we covered here. If you have a YouTube channel, you can upload, update, and delete videos, and much more.

I invite you to explore more in the YouTube API documentation for advanced search techniques, getting playlist details, members, and much more.

If you want to extract YouTube data but don't want to use the API, then we also have a tutorial on getting YouTube data with web scraping (more like an unofficial way to do it).

Below are some of the Google API tutorials:

  • How to Extract Google Trends Data in Python.
  • How to Use Google Drive API in Python.
  • How to Use Gmail API in Python.
  • How to Use Google Custom Search Engine API in Python.

Happy Coding ♥

View Full Code



Source: https://www.thepythoncode.com/article/using-youtube-api-in-python

