
Notebooks

This is not an official Google product.

Product Data Enrichment with Vertex AI

This notebook demonstrates how to enrich your data using Generative AI with Vertex AI on Google Cloud.

The specific example is a retail use case for improving product description metadata. Better product descriptions lead to more user engagement and higher conversion rates.
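The core idea is to prompt a generative model with the product attributes you already have. A minimal sketch, assuming the Vertex AI Python SDK (google-cloud-aiplatform); the project ID, model name, and sample product below are illustrative, not values from the notebook:

```python
# Sketch only: enrich a product description with the Vertex AI Python SDK.
# PROJECT_ID, the model name, and the sample product are assumptions.
import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="your-project-id", location="us-central1")
model = GenerativeModel("gemini-1.5-flash")

product = {"title": "Trail Runner 2", "attributes": "waterproof, 280g, grippy sole"}
prompt = (
    "Write a concise, engaging product description for an online store.\n"
    f"Title: {product['title']}\n"
    f"Attributes: {product['attributes']}"
)

response = model.generate_content(prompt)
print(response.text)
```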

Causal Inference with Vertex AI AutoML Forecasting

This notebook introduces the concept of causal inference. It shows how to estimate the effect of an intervention using both the tfcausalimpact library and Vertex AI AutoML Forecasting.
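For the tfcausalimpact half, the core API boils down to a few lines. A sketch with synthetic data (the series, intervention point, and periods are illustrative):

```python
# Sketch: estimate the effect of an intervention with tfcausalimpact
# (pip install tfcausalimpact). The data here is synthetic.
import numpy as np
import pandas as pd
from causalimpact import CausalImpact

rng = np.random.default_rng(42)
x = 100 + np.cumsum(rng.normal(size=100))  # control series
y = 1.2 * x + rng.normal(size=100)         # response series
y[70:] += 10                               # simulated intervention at t=70

data = pd.DataFrame({"y": y, "x": x})
pre_period = [0, 69]
post_period = [70, 99]

ci = CausalImpact(data, pre_period, post_period)
print(ci.summary())
```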

Medical Imaging notebooks using Vertex AI

The pipeline notebook should be run first. It preprocesses the DICOM medical images in the dataset (which must be downloaded before running), then creates an AutoML model and deploys it to an endpoint. It demonstrates how to build a pipeline using standard and custom components.

The custom training notebook can be run afterward. It shows how to train a TensorFlow model using the same managed dataset.
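For orientation, a pipeline mixing standard and custom components has roughly this shape with the Kubeflow Pipelines SDK. This is a skeleton only; the component names and bodies are placeholders, not the notebook's actual steps:

```python
# Sketch: a Vertex AI pipeline combining a custom preprocessing component
# with downstream training steps. Names and logic are placeholders.
from kfp import compiler, dsl


@dsl.component(base_image="python:3.10")
def preprocess_dicom(input_uri: str) -> str:
    # Placeholder: the real notebook converts DICOM images for training.
    print(f"Preprocessing images from {input_uri}")
    return input_uri


@dsl.pipeline(name="medical-imaging-pipeline")
def pipeline(raw_data_uri: str):
    prep = preprocess_dicom(input_uri=raw_data_uri)
    # The notebook follows this with dataset creation, AutoML training,
    # and endpoint deployment via google-cloud-pipeline-components.


compiler.Compiler().compile(pipeline, "pipeline.json")
```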

Understand how your TensorFlow model is making predictions

This notebook demonstrates how to build a model using tf.keras and then analyze its feature importances using the SHAP library.

The model predicts the expected debt-to-earnings ratio of a university's graduates. It uses data from the US Department of Education's College Scorecard.
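The core pattern looks like this. A self-contained sketch using random stand-in data rather than the College Scorecard features:

```python
# Sketch: explain a tf.keras regression model with SHAP's KernelExplainer.
# The data here is random stand-in data, not the College Scorecard dataset.
import numpy as np
import shap
import tensorflow as tf

X = np.random.rand(200, 5).astype("float32")
y = X @ np.array([0.5, -0.2, 0.8, 0.0, 0.3], dtype="float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(5,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, verbose=0)

# KernelExplainer treats the model as a black box via its predict function.
background = X[:50]
explainer = shap.KernelExplainer(lambda data: model.predict(data, verbose=0), background)
shap_values = explainer.shap_values(X[:10])
```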

More details about the model can be found in the blog post.

You can run the model live in Colab with zero setup here.

To run it locally, make sure you have Jupyter installed (pip install jupyter).

I've included the model code as a Jupyter notebook (tensorflow-shap-college-debt.ipynb). From the root directory, run jupyter notebook to start the notebook server, then navigate to localhost:8888 and click on tensorflow-shap-college-debt.ipynb.

20 Newsgroups data import script for Google Cloud AutoML Natural Language

This notebook downloads the 20 newsgroups dataset using scikit-learn. The dataset contains about 18,000 posts from 20 newsgroups and is useful for text classification. The script transforms the data into a pandas DataFrame and finally into a CSV file readable by Google Cloud AutoML Natural Language.
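The core of that transformation is small. A sketch (the output filename is a placeholder, and AutoML's exact CSV expectations are worth checking against the current docs):

```python
# Sketch: fetch 20 newsgroups and write a CSV for AutoML Natural Language.
import pandas as pd
from sklearn.datasets import fetch_20newsgroups

newsgroups = fetch_20newsgroups(subset="train", remove=("headers", "footers", "quotes"))

df = pd.DataFrame({
    "text": newsgroups.data,
    "label": [newsgroups.target_names[t] for t in newsgroups.target],
})

# AutoML Natural Language reads text,label rows without a header.
df.to_csv("20-newsgroups.csv", index=False, header=False)
```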

How to use the Google Cloud Natural Language API

This notebook demonstrates how to perform natural language tasks such as entity extraction, text classification, sentiment analysis, and syntax analysis using the Google Cloud Natural Language API.
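For example, sentiment analysis comes down to a few lines. A sketch assuming the google-cloud-language client library and application default credentials:

```python
# Sketch: sentiment analysis with the Google Cloud Natural Language API
# (pip install google-cloud-language; assumes application default credentials).
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()
document = language_v1.Document(
    content="The new release is fantastic!",
    type_=language_v1.Document.Type.PLAIN_TEXT,
)

response = client.analyze_sentiment(request={"document": document})
print(f"Score: {response.document_sentiment.score:.2f}, "
      f"Magnitude: {response.document_sentiment.magnitude:.2f}")
```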

