Video List - DataCamp - Machine Learning Summary. A list of DataCamp videos. https://ml.streamdb.net/videos-rss/c/UC79Gv3mYp6zKiSwYemEik9A Thu, 02 May 24 00:00:00 +0900 Getting Started With Data Analysis in Alteryx Cloud https://ml.streamdb.net/timelines/v/QO9Hlwp-ySA Thu, 02 May 24 00:00:00 +0900 Getting Started With Data Analysis in Alteryx Cloud Implementing A Culture To Create Data Products https://ml.streamdb.net/timelines/v/FEV1QS5ixHQ Fri, 26 Apr 24 00:00:00 +0900 Implementing A Culture To Create Data Products #ChatGPT Chain-Of-Thought Prompting For Accurate Answers with #AlexBanks https://ml.streamdb.net/timelines/v/aBnWFU5al5I Thu, 25 Apr 24 01:25:00 +0900 #ChatGPT Chain-Of-Thought Prompting For Accurate Answers with #AlexBanks Listen to the full episode 👉 https://ow.ly/pmnh50Re9qf #chatgpt #ai #prompts #prompt engineering #chain of thought #openai #podcast #dataframed #datacamp Principles of Building Data Profitable Products https://ml.streamdb.net/timelines/v/uZW5p4h1MAc Thu, 25 Apr 24 00:00:00 +0900 Principles of Building Data Profitable Products Understanding LLM Inference: How AI Generates Words https://ml.streamdb.net/timelines/v/NJ1jAfWR84k Wed, 24 Apr 24 00:00:00 +0900 Understanding LLM Inference: How AI Generates Words No-Touch is the Baseline of AI-First Companies https://ml.streamdb.net/timelines/v/Xf3lgvv8yIY Tue, 23 Apr 24 19:54:25 +0900 No-Touch is the Baseline of AI-First Companies Full episode: https://bit.ly/3xNbgJE What Start Ups Need to Pay Attention To When Building AI Tools https://ml.streamdb.net/timelines/v/R6fycK0X4W0 Tue, 23 Apr 24 19:53:49 +0900 What Start Ups Need to Pay Attention To When Building AI Tools Full episode: https://bit.ly/3xNbgJE The Core Foundations of SQL From the Inventor Himself 🧪 https://ml.streamdb.net/timelines/v/rBJkwKKP9ek Tue, 23 Apr 24 19:52:58 +0900 The Core Foundations of SQL From the Inventor Himself 🧪 Full episode: https://bit.ly/3W98RD6 
How the Inventor of SQL Found Out About Relational Databases 👀 https://ml.streamdb.net/timelines/v/nD_1vs3Ib4g Tue, 23 Apr 24 19:51:47 +0900 How the Inventor of SQL Found Out About Relational Databases 👀 Full episode: https://bit.ly/3W98RD6 Here's What Made SQL Popular | SQL Inventor Shares Why It's So Widely Used https://ml.streamdb.net/timelines/v/1bW0yyGRbz8 Tue, 23 Apr 24 00:53:06 +0900 Here's What Made SQL Popular | SQL Inventor Shares Why It's So Widely Used Don Chamberlin is renowned as the co-inventor of SQL (Structured Query Language), the predominant database language globally, which he developed with Raymond Boyce in the mid-1970s. Chamberlin's professional career began at IBM Research in Yorktown Heights, New York, following a summer internship there during his academic years. His work on IBM's System R project led to the first SQL implementation and significantly advanced IBM’s relational database technology. His contributions were recognized when he was made an IBM Fellow in 2003 and later a Fellow of the Computer History Museum in 2009 for his pioneering work on SQL and database architectures. Chamberlin also contributed to the development of XQuery, an XML query language, as part of the W3C, which became a W3C Recommendation in January 2007. Additionally, he holds fellowships with ACM and IEEE and is a member of the National Academy of Engineering. In the episode, Richie and Don explore his early career at IBM and the development of his interest in databases alongside Ray Boyce, the database task group (DBTG), the transition to relational databases and the early development of SQL, the commercialization and adoption of SQL, how it became standardized, how it evolved and spread via open source, the future of SQL through NoSQL and SQL++ and much more. 
Find DataFramed on DataCamp https://www.datacamp.com/podcast and on your preferred podcast streaming platform: Apple Podcasts: https://podcasts.apple.com/us/podcast/dataframed/id1336150688 Spotify: https://open.spotify.com/show/02yJXEJAJiQ0Vm2AO9Xj6X?si=d08431f59edc4ccd Google Podcasts: https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkcy5jYXB0aXZhdGUuZm0vZGF0YWZyYW1lZC8 #data #ai #podcast #sql #sequel #dataframed #datacamp #programming languages #don chamberlin #sql founder #sql inventor #200 50 Years of SQL | Don Chamberlin Computer Scientist and Co-Inventor of SQL https://ml.streamdb.net/timelines/v/5VqM5nmcmPI Mon, 22 Apr 24 17:36:18 +0900 #200 50 Years of SQL | Don Chamberlin Computer Scientist and Co-Inventor of SQL Over the past 199 episodes of DataFramed, we’ve heard from people at the forefront of data and AI, and over the past year we’ve constantly looked ahead to the future AI might bring. But all of the technologies and ways of working we’ve witnessed have been built on foundations that were laid decades ago. For our 200th episode, we’re bringing you a special guest and taking a walk down memory lane—to the creation and development of one of the most popular programming languages in the world. Links Mentioned in the Show: The first-ever journal paper on SQL. 
SEQUEL: A Structured English Query Language - https://dl.acm.org/doi/pdf/10.1145/800296.811515 Don’s Book: SQL++ for SQL Users: A Tutorial - https://g.co/kgs/fmy8ffh System R: Relational approach to database management - https://research.ibm.com/publications/system-r-relational-approach-to-database-management SQL Courses - https://www.datacamp.com/courses-all?q=sql SQL Articles, Tutorials and Code-Alongs - https://www.datacamp.com/blog/category/sql Related Episode: Scaling Enterprise Analytics with Libby Duane Adams, Chief Advocacy Officer and Co-Founder of Alteryx - https://www.datacamp.com/podcast/scaling-enterprise-analytics-with-libby-duane-adams-chief-advocacy-officer-and-co-founder-of-alteryx Rewatch sessions from RADAR: The Analytics Edition - https://www.datacamp.com/radar-analytics-edition New to DataCamp? Learn on the go using the DataCamp mobile app - https://www.datacamp.com/mobile Empower your business with world-class data and AI skills with DataCamp for business - https://www.datacamp.com/business #data #ai #podcast #sql #sequel #dataframed #datacamp #programming languages #don chamberlin #sql founder #sql inventor Create and Name Matrices | Simple R Programming Tutorial https://ml.streamdb.net/timelines/v/Sa6hg0o_9Oc Sat, 20 Apr 24 13:00:20 +0900 Create and Name Matrices | Simple R Programming Tutorial Understand how to create and name your matrices in R. Join DataCamp today, and start our interactive intro to R programming tutorial for free: https://www.datacamp.com/courses/free-introduction-to-r So, what is a matrix? Well, a matrix is kind of like the big brother of the vector. Where a vector is a _sequence_ of data elements, which is one-dimensional, a matrix is a similar collection of data elements, but this time arranged into a fixed number of rows and columns. 
Since you are only working with rows and columns, a matrix is called two-dimensional. As with the vector, the matrix can contain only one atomic vector type. This means that you can't mix logicals and numerics in a matrix, for example. There's really not much more theory about matrices than this: it's really a natural extension of the vector, going from one to two dimensions. Of course, this has its implications for manipulating and subsetting matrices, but let's start with simply creating and naming them. To build a matrix, you use the matrix function. Most importantly, it needs a vector containing the values you want to place in the matrix, and at least one matrix dimension. You can choose to specify the number of rows or the number of columns. Have a look at the following example that creates a 2-by-3 matrix containing the values 1 to 6, by specifying the vector and setting the nrow argument to 2: R sees that the input vector has length 6 and that there have to be two rows. It then infers that you'll probably want 3 columns, such that the number of matrix elements matches the number of input vector elements. You could just as well specify ncol instead of nrow; in this case, R infers the number of _rows_ automatically. In both these examples, R takes the vector containing the values 1 to 6 and fills up the matrix, column by column. If you prefer to fill up the matrix in a row-wise fashion, such that the 1, 2 and 3 are in the first row, you can set the `byrow` argument of matrix to `TRUE`. Can you spot the difference? Remember how R did recycling when you were subsetting vectors using logical vectors? The same thing happens when you pass the matrix function a vector that is too short to fill up the entire matrix. Suppose you pass a vector containing the values 1 to 3 to the matrix function, and explicitly say you want a matrix with 2 rows and 3 columns: R fills up the matrix column by column and simply repeats the vector. 
If you try to fill up the matrix with a vector whose length does not nicely fit the matrix dimensions, for example when you want to put a 4-element vector in a 6-element matrix, R generates a warning message. Actually, apart from the `matrix()` function, there's yet another easy way to create matrices that is more intuitive in some cases. You can paste vectors together using the `cbind()` and `rbind()` functions. Have a look at these calls: `cbind()`, short for column bind, takes the vectors you pass it and sticks them together as if they were columns of a matrix. The `rbind()` function, short for row bind, does the same thing but takes the input as rows and makes a matrix out of them. These functions can come in pretty handy, because they're often easier to use than the `matrix()` function. The `bind` functions I just introduced can actually also handle matrices, so you can easily use them to paste another row or another column to an already existing matrix. Suppose you have a matrix `m`, containing the elements 1 to 6: If you want to add another row to it, containing the values 7, 8, 9, you could simply run this command: You can do a similar thing with `cbind()`: Next up is naming the matrix. In the case of vectors, you simply used the names() function, but in the case of matrices, you can assign names to both columns and rows. That's why R came up with the rownames() and colnames() functions. Their use is pretty straightforward. Retaking the matrix `m` from before, we can set the row names just the same way as we named vectors, but this time with the rownames function. Printing m shows that it worked: Setting the column names with a vector of length 3 gives us a fully named matrix. Just as with vectors, there are also one-liner ways of naming a matrix while you're building it. You use the dimnames argument of the matrix function for this. Check this out. 
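Since this summary page mixes R and Python videos, here is a hedged NumPy/pandas sketch of the creation, binding, and naming steps the tutorial describes. The mapping between R calls and NumPy is an analogy, not the tutorial's own code, and the `pd.DataFrame` step stands in for R's rownames()/colnames()/dimnames, since plain NumPy arrays have no dimension names (the names "r1", "a", etc. are made up):

```python
import numpy as np
import pandas as pd

# R's matrix(1:6, nrow = 2) fills column by column; order="F"
# (Fortran/column-major) mirrors that default behaviour.
m = np.arange(1, 7).reshape(2, 3, order="F")   # rows: [1 3 5] / [2 4 6]

# byrow = TRUE fills row by row, which is NumPy's default (C) order.
m_byrow = np.arange(1, 7).reshape(2, 3)        # rows: [1 2 3] / [4 5 6]

# Recycling: matrix(1:3, nrow = 2, ncol = 3) repeats the short vector
# until the matrix is full; np.resize does the repetition here.
recycled = np.resize(np.arange(1, 4), 6).reshape(2, 3, order="F")

# rbind(m, c(7, 8, 9)) pastes a row; cbind(m, c(7, 8)) pastes a column.
with_row = np.vstack([m, [7, 8, 9]])
with_col = np.column_stack([m, [7, 8]])

# Row/column names: NumPy arrays are unnamed, so a DataFrame stands in
# for R's rownames()/colnames()/dimnames.
named = pd.DataFrame(m, index=["r1", "r2"], columns=["a", "b", "c"])
```

As in R, note that filling order matters: the same six values land in different positions depending on whether the fill is column-wise or row-wise.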
#R (Programming Language) #data science #r programming #r tutorial #matrices #r matrices Working with the OpenAI API | How to Build Your Own AI Tools https://ml.streamdb.net/timelines/v/aBrTvcYOQiE Fri, 19 Apr 24 20:45:00 +0900 Working with the OpenAI API | How to Build Your Own AI Tools Welcome to our deep dive into the OpenAI API! 🚀 In this video, we'll introduce you to the incredible potential of the OpenAI API. Whether you're a developer, a tech enthusiast, or just curious about the world of AI, this video is for you! 🔎Video Breakdown: 00:00 Introduction 00:39 What is OpenAI, ChatGPT, and the OpenAI API? 01:28 What is an API? 02:36 Using the OpenAI API vs. the web interface 03:09 Why use the OpenAI API? 💡Key Takeaways: Discover the groundbreaking work of OpenAI and their flagship product, ChatGPT. Understand the role of APIs in modern software and how they bridge communication between applications. Learn the advantages of using the OpenAI API over the web interface. Realize the transformative potential of integrating AI into products and services. 🔗About the Course: This video is just the beginning! Dive deeper into the world of the OpenAI API with our comprehensive course on DataCamp, Working with the OpenAI API: https://bit.ly/43Wr5bq DataCamp is a leading online learning platform for data science and AI, with courses designed by industry experts. Learn at your own pace and elevate your skills. 
👇Follow us for more: Facebook: https://www.facebook.com/datacampinc/ Twitter: https://twitter.com/DataCamp Instagram: https://www.instagram.com/datacamp LinkedIn: https://www.linkedin.com/school/datacampinc/ ⭐️Credits: Host: James Chapman Image Credit: OpenAI 👍 If you found this video helpful, don't forget to hit the like button, share it with your friends, and subscribe for more insightful tech content! #OpenAI #OpenAIAPI #gen ai #openai #ai #ai programming #intro to chatgpt #chatgpt #datacamp #generative ai #generative ai tutorial The Future of AI | What Comes Next For Generative AI Models? https://ml.streamdb.net/timelines/v/FYjOdEWbsJ4 Fri, 19 Apr 24 06:00:08 +0900 The Future of AI | What Comes Next For Generative AI Models? Are you ready to explore the exciting future of generative AI models like ChatGPT and the key hurdles the industry must overcome? You're in the right place! This video will dive into the advancements expected in generative AI, the factors that will drive these improvements, and the challenges that must be faced. Video Breakdown: 00:00 - Introduction 00:13 - What performance improvements will we see in generative AI models? 00:46 - What will drive LLM improvements? 
01:47 - The challenges in improving LLM performance 02:41 - Transitioning from generalized to specialized models 03:16 - Other types of generative AI models that will shape the future As technology evolves, we expect generative AI models to create content that more closely resembles human-generated content and to handle complex instructions more reliably. We'll also look at the role of increasing training data and user ratings in driving model improvements, the challenge of reducing bias in vast and unstructured training data, and the potential misuse of AI as it becomes more human-like. As we move forward, specialized models will become more common, and AI's accessibility will play a key role in its wide-scale adoption. 🔗 Dive deeper into the world of Generative AI with our comprehensive 'Introduction to ChatGPT' course: https://bit.ly/44qfySt DataCamp is a leading online learning platform for data science and AI, with courses designed by industry experts. Learn at your own pace and elevate your skills. Show your support by liking, sharing, and subscribing to our channel. Your comments and questions are always appreciated, so feel free to leave them below! 👇Follow us for more: Facebook: https://www.facebook.com/datacampinc/ Twitter: https://twitter.com/DataCamp Instagram: https://www.instagram.com/datacamp LinkedIn: https://www.linkedin.com/school/datacampinc/ #GenerativeAI #AI #FutureAI #gen ai #openai #ai #ai programming #intro to chatgpt #chatgpt #datacamp #generative ai #generative ai tutorial How to Subset Matrices | Step by Step R Programming Tutorial https://ml.streamdb.net/timelines/v/Rg4U_Z9BBRo Fri, 19 Apr 24 02:00:26 +0900 How to Subset Matrices | Step by Step R Programming Tutorial Discover how you can subset matrices using R. 
Join DataCamp today, and start our interactive intro to R programming tutorial for free: https://www.datacamp.com/courses/free-introduction-to-r Just as for vectors, there are situations in which you want to select single elements or entire parts of a matrix to continue your analysis with. Again, you can use square brackets for this, but the fact that you're dealing with two dimensions now complicates things a bit. Have a look at this matrix containing some random numbers. If you want to select a single element from this matrix, you'll have to specify both the row and the column of the element of interest. Suppose we want to select the number 15, located at the first row and the third column. We type m, open brackets, 1, comma, 3, close brackets. As you can probably tell, the first index refers to the row, the second one refers to the column. Likewise, to select the number 1, at row 3 and column 2, we write the following line: Works like a charm! Notice that the results are single values, so vectors of length 1. Now, what if you want to select an entire row or column from this matrix? You can do this by leaving out some of the indices between square brackets. Instead of writing 3, comma, 2 inside square brackets to select the element at row 3 and column 2, you can leave out the 2 and keep the 3, comma part. Now, you select all elements that are in row 3, namely 6, 1, 4 and 2. Notice here that the result is not a matrix anymore! It's also a vector, but this time one that contains more than 1 element. You selected a single row from the matrix, so a vector suffices to store this one-dimensional information. To select columns, you can work similarly, but this time the index that comes before the comma should be removed. To select the entire 3rd column, you should write m, open brackets, comma, 3, close brackets. Again, a vector results, this time of length 3, corresponding to the third column of `m`. 
Now, what happens when you decide not to include a comma to clearly discern between column and row indices? Let's simply try it out and see if we can explain it. Suppose you simply type m and then 4 inside brackets. The result is 11. How did R get to that? Well, when you pass a single index to subset a matrix, R simply goes through the matrix column by column from left to right. The first element is then 5, the second one 12, the third one 6 and the fourth one is 11, in the next column. This means that if we pass m[9], we should get 4, in the third row and third column. Correct! There aren't a lot of cases in which using a single index without commas in a matrix is useful, but I just wanted to point out that the comma is really crucial here. In vector subsetting, you also learned how to select multiple elements. In matrices, this is of course also possible, and the principles are just the same. Say, for example, you want to select the values 14 and 8, in the middle of the matrix. This command will do that for you: You select elements that are on the second row and on the second and third column. Again, the result is a vector, because 1 dimension suffices. But you can't select an arbitrary set of elements this way: if you want to select the 11, on row 1 and column 2, and the 8, on row 2 and column 3, this call will not give the wanted result. Instead, a submatrix gets returned that spans the elements on rows 1 and 2 and columns 2 and 3. These submatrices can also be built up from disjoint places in your matrix. Creating a submatrix that contains elements on rows 1 and 3 and on columns 1, 3 and 4, for example, would look like this: Now, remember these other ways of performing subsetting, by using names and with logical vectors? These work just as well for matrices. Let's have a look at subsetting by names first. 
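Before moving on to names, the index-based selections described so far can be sketched in NumPy (0-based, versus R's 1-based indexing). This is an analogy, not the video's own R code; the matrix values are partially reconstructed from the narration, and the two unmentioned entries in the last column are filled with 99 as placeholders:

```python
import numpy as np

# Matrix reconstructed from the narration (top two entries of the last
# column are never stated, so 99 placeholders stand in for them).
m = np.array([[ 5, 11, 15, 99],
              [12, 14,  8, 99],
              [ 6,  1,  4,  2]])

# Single element: R's m[1, 3] (row 1, column 3) -> 0-based m[0, 2].
elem = m[0, 2]                      # 15

# Whole row / whole column: R's m[3, ] and m[, 3].
row3 = m[2, :]                      # [6 1 4 2]
col3 = m[:, 2]                      # [15 8 4]

# Single-index subsetting: R walks the matrix column by column, i.e.
# column-major ("F") order; R's m[4] corresponds to flat[3] here.
flat = m.flatten(order="F")
fourth = flat[3]                    # 11

# Multiple elements sharing a row: R's m[2, 2:3].
mid = m[1, 1:3]                     # [14 8]

# Disjoint submatrix: R's m[c(1, 3), c(1, 3, 4)].
sub = m[np.ix_([0, 2], [0, 2, 3])]
```

As in the transcript, asking for two elements that share no row or column index (the 11 and the 8) yields a 2-by-2 submatrix in R; NumPy behaves differently here, returning element-wise pairs unless `np.ix_` is used to request the full cross-product.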
First, though, we'll have to name the matrix: In fact, subsetting by name works exactly the same as by index; you just replace the indices with the corresponding names. To select 8, you could use the row index 2 and column 3, or use the row name r2 and the corresponding column name: You can even use a combination of both: Just remember to surround the row and column names with quotes. Selecting multiple elements and submatrices from a matrix is straightforward as well. To select elements on row r3 and in the last two columns, you can use: Finally, you can also use logical vectors. Again, the same rules apply: rows and columns corresponding to a TRUE are kept, while those corresponding to FALSE are left out. To select the same elements as in the previous call, you can use: The rules of vector recycling also apply here. Suppose that you only pass a vector of length 2 to perform a selection on the columns: The column selection vector gets recycled to FALSE, TRUE, FALSE, TRUE, giving the same result. #R (Programming Language) #Statistics (Field Of Study) #datacamp #r tutorial #matrices #subset matrices DataCamp Classrooms Orientation Spring 2024 https://ml.streamdb.net/timelines/v/BsfPgbeAX_c Fri, 19 Apr 24 00:00:00 +0900 DataCamp Classrooms Orientation Spring 2024 t-SNE High-Dimensional Data Visualization | Python Tutorial https://ml.streamdb.net/timelines/v/D9bdJm1GYFY Thu, 18 Apr 24 21:52:15 +0900 t-SNE High-Dimensional Data Visualization | Python Tutorial Want to learn more? Take the full course at https://learn.datacamp.com/courses/dimensionality-reduction-in-python at your own pace. More than a video, you'll learn hands-on coding & quickly apply skills to your daily work. --- In this video, you'll learn to apply t-Distributed Stochastic Neighbor Embedding, or t-SNE. 
While this may sound scary, it's just a powerful technique to visualize high-dimensional data using feature extraction. t-SNE will maximize the distance in two-dimensional space between observations that are most different in a high-dimensional space. Because of this, observations that are similar will be close to one another and may become clustered. This is what happens when we apply t-SNE to the Iris dataset. We can see how the Setosa species forms a separate cluster, while the other two are closer together and therefore more similar. However, the Iris dataset only has 4 dimensions to start with, so let's try this on a more challenging dataset. Our ANSUR female body measurements dataset has 99 dimensions. Before we apply t-SNE, we're going to remove all non-numeric columns from the dataset by passing a list with the unwanted column names to the pandas DataFrame's .drop() method. t-SNE does not work with non-numeric data as such. We could use a trick like one-hot encoding to get around this, but we'll be using a different approach here. We'll create a TSNE() model with learning rate 50. While fitting to the dataset, t-SNE will try different configurations and evaluate these with an internal cost function. High learning rates will cause the algorithm to be more adventurous in the configurations it tries out, while low learning rates will cause it to be conservative. Usually, learning rates fall in the 10 to 1000 range. Next, we'll fit and transform the TSNE model to our numeric dataset. This will project our high-dimensional dataset onto a NumPy array with two dimensions. We'll assign these two dimensions back to our original dataset, naming them 'x' and 'y'. We can now start plotting this data using seaborn's scatterplot() function on the x and y columns we just added. The resulting plot shows one big cluster, and in a sense, this could have been expected. 
There are no distinct groups of female body shapes with little in between; instead, there is a more continuous distribution of body shapes, and thus, one big cluster. However, using the categorical features we excluded from the analysis, we can check if there are interesting structural patterns within this cluster. The Body Mass Index or BMI is a method to categorize people into weight groups regardless of their height. I added a column 'BMI_class' to the dataset with the BMI category for every person. If we use this column name for the hue, which is the color, of the seaborn scatterplot, we'll be able to see that weight class indeed shows an interesting pattern. From the 90+ features in the dataset, t-SNE picked up that weight explains a lot of variance in the dataset and used that to spread out points along the x-axis, with underweight people on the left and overweight people on the right. We've also added a column with height categories to the dataset. If we use this 'Height_class' to control the hue of the points, we'll be able to see that in the vertical direction, variance is explained by a person's height. Tall people are at the top of the plot and shorter people at the bottom. In conclusion, t-SNE helped us to visually explore our dataset and identify the most important drivers of variance in body shapes. Now it is your turn to use t-SNE on the combined male and female ANSUR dataset. 
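The workflow this tutorial walks through (drop non-numeric columns, fit t-SNE with learning rate 50, attach the two projected dimensions, plot by category) can be sketched as follows. The ANSUR data itself isn't bundled here, so a small random DataFrame stands in for it; the column names `measure_*` are made up, and only 'BMI_class' matches the narration:

```python
import numpy as np
import pandas as pd
from sklearn.manifold import TSNE

# Stand-in for the ANSUR female dataset (the real one has ~99 numeric
# body-measurement columns); values here are random placeholders.
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(200, 5)),
                  columns=[f"measure_{i}" for i in range(5)])
df["BMI_class"] = rng.choice(["Underweight", "Normal", "Overweight"], 200)

# 1. Drop non-numeric columns: t-SNE only handles numeric input.
df_numeric = df.drop(columns=["BMI_class"])

# 2. Create a TSNE model with learning rate 50 (typical range 10-1000)
#    and fit-transform it to get a two-dimensional projection.
tsne = TSNE(learning_rate=50)
tsne_features = tsne.fit_transform(df_numeric)

# 3. Assign the two t-SNE dimensions back to the original DataFrame.
df["x"] = tsne_features[:, 0]
df["y"] = tsne_features[:, 1]

# 4. Plot, colouring points by the categorical column we held out:
# import seaborn as sns
# sns.scatterplot(data=df, x="x", y="y", hue="BMI_class")
```

On random data the projection shows no structure, of course; the point is the mechanics of the pipeline, with the held-out categorical column reused as the hue to look for patterns in the embedding.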
#PythonTutorial #DataCamp #Python #Dimensionality #Reduction #visualization #data #t-sne #data visualization #python #python tutorial #datacamp #high-dimensional data Shifting Mindset with AI-First Culture | Sanjay Srivastava, Genpact https://ml.streamdb.net/timelines/v/gysQyORROJY Thu, 18 Apr 24 20:05:09 +0900 Shifting Mindset with AI-First Culture | Sanjay Srivastava, Genpact Sanjay Srivastava is the Chief Digital Strategist at Genpact. He works exclusively with Genpact’s senior client executives and ecosystem technology leaders to mobilize digital transformation at the intersection of cutting-edge technology, data strategy, operating models, and process design. In his previous role as Chief Digital Officer at Genpact, Sanjay built out the company’s offerings in artificial intelligence, data and analytics, automation, and digital technology services. He leads Genpact’s artificial-intelligence-enabled platform that delivers industry-leading governance, integration, and orchestration capabilities across digital transformations. Before joining Genpact, Sanjay was a Silicon Valley serial entrepreneur and built four high-tech startups, each of which was successfully acquired by Akamai, BMC, FIS, and Genpact, respectively. Sanjay also held operating leadership roles at Hewlett Packard, Akamai, and SunGard (now FIS), where he oversaw product management, global sales, engineering, and services businesses. 
In the episode, Sanjay and Richie cover the shift from experimentation to production seen in the AI space over the past 12 months, the importance of corporate culture in the adoption of AI in a business environment, how AI automation is revolutionizing business processes at Genpact, how change management contributes to how we leverage AI tools at work, adapting skill development pathways to make the most out of AI, how AI implementation changes depending on the size of your organization, future opportunities for AI to change industries and much more. #data #ai #podcast #dataframed #datacamp #genpact #ai culture #ai literacy #change management #199 Creating an AI-First Culture | Sanjay Srivastava, Chief Digital Strategist at Genpact https://ml.streamdb.net/timelines/v/-7zjw_NLPZA Thu, 18 Apr 24 18:32:53 +0900 #199 Creating an AI-First Culture | Sanjay Srivastava, Chief Digital Strategist at Genpact Last year saw the proliferation of countless AI tools and initiatives, as many companies looked to find ways where AI could be leveraged to reduce operational costs and pressure wherever possible. 2023 was a year of experimentation for anyone trying to harness AI, but we can’t walk forever. To keep up with the rapidly changing landscape in business, last year’s experiments with AI need to find their feet and allow us to run. But how do we know which initiatives are worth fully investing in? Will your company culture impede the change management that is necessary to fully adopt AI? 
Find DataFramed on DataCamp https://www.datacamp.com/podcast and on your preferred podcast streaming platform: Apple Podcasts: https://podcasts.apple.com/us/podcast/dataframed/id1336150688 Spotify: https://open.spotify.com/show/02yJXEJAJiQ0Vm2AO9Xj6X?si=d08431f59edc4ccd Google Podcasts: https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkcy5jYXB0aXZhdGUuZm0vZGF0YWZyYW1lZC8 Links Mentioned in the Show: Genpact - https://www.genpact.com/ [Course] Implementing AI Solutions in Business - https://www.datacamp.com/courses/implementing-ai-solutions-in-business Article: AI adoption accelerates as enterprise PoCs show productivity gains - https://www-cio-com.cdn.ampproject.org/c/s/www.cio.com/article/2074821/ai-adoption-accelerates-as-enterprise-pocs-show-productivity-gains.html?amp=1 Related Episode: How Generative AI is Changing Business and Society with Bernard Marr, AI Advisor, Best-Selling Author, and Futurist - https://www.datacamp.com/podcast/how-generative-ai-is-changing-business-and-society-with-bernard-marr Rewatch sessions from RADAR: The Analytics Edition - https://www.datacamp.com/radar-analytics-edition New to DataCamp? 
Learn on the go using the DataCamp mobile app - https://www.datacamp.com/mobile Empower your business with world-class data and AI skills with DataCamp for business - https://www.datacamp.com/business #data #ai #podcast #dataframed #datacamp #genpact #ai culture #ai literacy #change management How #Data is Harnessed in #Retail https://ml.streamdb.net/timelines/v/sKDxFTYxenM Thu, 18 Apr 24 18:31:47 +0900 How #Data is Harnessed in #Retail Full episode 🎧 https://bit.ly/4aTsd3q Open Source & Collaboration at #GitHub https://ml.streamdb.net/timelines/v/Eq9zvpeWklo Thu, 18 Apr 24 16:15:00 +0900 Open Source & Collaboration at #GitHub Full episode 🎧 https://bit.ly/3Jlm58s 5 Best Practices for Launching an Internal Data Science Bootcamp https://ml.streamdb.net/timelines/v/vmTRDb9pcuU Thu, 18 Apr 24 00:00:00 +0900 5 Best Practices for Launching an Internal Data Science Bootcamp #GitHub COO Explains #Copilot Stats https://ml.streamdb.net/timelines/v/4r1_L6vkB_Q Wed, 17 Apr 24 00:39:47 +0900 #GitHub COO Explains #Copilot Stats Full episode 🎧 https://bit.ly/3Jlm58s Emerging #AI trends in retail used by Walmart https://ml.streamdb.net/timelines/v/yAP_CklWwGU Wed, 17 Apr 24 00:29:01 +0900 Emerging #AI trends in retail used by Walmart Full episode 🎧 https://bit.ly/4aTsd3q Using Data & AI at Walmart | Supply Chain, Demand Forecasting & More https://ml.streamdb.net/timelines/v/-7i1A_qTZBM Wed, 17 Apr 24 00:21:12 +0900 Using Data & AI at Walmart | Supply Chain, Demand Forecasting & More Swati Kirti is a Senior Director of Data Science, leading the AI/ML charter for Walmart Global Tech’s international business in Canada, Mexico, Central America, Chile, China, and South Africa.
She is responsible for building AI/ML models and products to enable automation and data-driven decisions, powering superior customer experience and realizing value for omnichannel international businesses across e-commerce, stores, supply chain, and merchandising. In the episode, Swati and Richie explore the role of data and AI at Walmart, how the data and AI teams operate under Swati’s supervision, how Walmart improves customer experience through the use of data, supply chain optimization, demand forecasting, retail-specific data challenges, scaling AI solutions, innovation in retail through AI, and much more. Find DataFramed on DataCamp https://www.datacamp.com/podcast and on your preferred podcast streaming platform: Apple Podcasts: https://podcasts.apple.com/us/podcast/dataframed/id1336150688 Spotify: https://open.spotify.com/show/02yJXEJAJiQ0Vm2AO9Xj6X?si=d08431f59edc4ccd Google Podcasts: https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkcy5jYXB0aXZhdGUuZm0vZGF0YWZyYW1lZC8 #data #ai #podcast #dataframed #datacamp #walmart #supply chain #demand forecasting #198 How Walmart Leverages Data & AI | Swati Kirti, Sr Director of Data Science at Walmart https://ml.streamdb.net/timelines/v/pprwOqO2Lrg Wed, 17 Apr 24 00:10:22 +0900 #198 How Walmart Leverages Data & AI | Swati Kirti, Sr Director of Data Science at Walmart There aren’t many retail giants like Walmart. In fact, there are none. The multinational generates $650bn in revenue (including $50bn in eCommerce), the highest revenue of any retailer globally. With over 10,000 stores worldwide and a constantly evolving product line, Walmart’s data & AI function has a lot to contend with when it comes to customer experience, demand forecasting, supply chain optimization, and where to use AI effectively. So how do they do it? What can we learn from one of the most successful and well-known organizations on the planet?
Swati Kirti is a Senior Director of Data Science, leading the AI/ML charter for Walmart Global Tech’s international business in Canada, Mexico, Central America, Chile, China, and South Africa. She is responsible for building AI/ML models and products to enable automation and data-driven decisions, powering superior customer experience and realizing value for omnichannel international businesses across e-commerce, stores, supply chain, and merchandising. In the episode, Swati and Richie explore the role of data and AI at Walmart, how the data and AI teams operate under Swati’s supervision, how Walmart improves customer experience through the use of data, supply chain optimization, demand forecasting, retail-specific data challenges, scaling AI solutions, innovation in retail through AI and much more. Find DataFramed on DataCamp https://www.datacamp.com/podcast and on your preferred podcast streaming platform: Apple Podcasts: https://podcasts.apple.com/us/podcast/dataframed/id1336150688 Spotify: https://open.spotify.com/show/02yJXEJAJiQ0Vm2AO9Xj6X?si=d08431f59edc4ccd Google Podcasts: https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkcy5jYXB0aXZhdGUuZm0vZGF0YWZyYW1lZC8 Links Mentioned in the Show: Article - Walmart’s Generative AI search puts more time back in customers' hands: https://tech.walmart.com/content/walmart-global-tech/en_us/blog/post/walmarts-generative-ai-search-puts-more-time-back-in-customers-hands.html Walmart Global Tech: https://tech.walmart.com/ [Course] Implementing AI Solutions in Business: https://www.datacamp.com/courses/implementing-ai-solutions-in-business Related Episode: How Generative AI is Changing Business and Society with Bernard Marr, AI Advisor, Best-Selling Author, and Futurist: https://www.datacamp.com/podcast/how-generative-ai-is-changing-business-and-society-with-bernard-marr #data #ai #podcast #dataframed #datacamp #walmart #supply chain #demand
forecasting Full Stack Data Engineering with Python https://ml.streamdb.net/timelines/v/aGH0Vw2f5uo Wed, 17 Apr 24 00:00:00 +0900 Full Stack Data Engineering with Python #ChatGPT Prompt Engineering: A Foundational Skill https://ml.streamdb.net/timelines/v/NfoZnpZuWF8 Sat, 13 Apr 24 01:29:53 +0900 #ChatGPT Prompt Engineering: A Foundational Skill Upskill or risk falling behind. The full episode with #AlexBanks 👉 https://ow.ly/pmnh50Re9qf Analyzing Airbnb Data Using SQL #sql #datacamp https://ml.streamdb.net/timelines/v/QhX3BwUgQ7E Fri, 12 Apr 24 02:18:40 +0900 Analyzing Airbnb Data Using SQL #sql #datacamp Code along with the full session! https://www.datacamp.com/code-along/a-beginners-guide-to-data-analysis-with-sql #data #analysis #data analysis #sql #airbnb data #sql analysis Driving Growth and Innovation With Product Analytics https://ml.streamdb.net/timelines/v/KSKq6ujWOc8 Fri, 12 Apr 24 00:00:00 +0900 Driving Growth and Innovation With Product Analytics If your company sells products, then you need product analytics! Understanding who is buying what and using which features is essential both for bringing your product to market and for developing the product roadmap. In this session, you'll learn how product analytics can be used to generate value, along with details of how to successfully create a product analytics function in your organization. Key Takeaways: - Learn how product analytics can be used to add value to your organization. - Learn about the tools, processes, and techniques you need for successful product analytics. - Learn how to build teams and upskill for product analytics.