MTH 522 – 12/02/2023

ARIMA and LSTM time series forecasting models both require data to be in chronological order. ARIMA works well for datasets with clear historical trends, provided the series is made stationary, whereas LSTM is better at capturing complicated relationships that go beyond linear or seasonal patterns. Support Vector Machines (SVMs) are useful for high-dimensional datasets; they require numerical inputs and careful feature scaling, and they are especially effective when classes are clearly separated. Neural networks, which likewise need numerical inputs and scaled features, excel at managing large, complicated datasets where traditional methods may fall short, making them appropriate for cases with complex variable relationships.

ARIMA and LSTM can model and predict response times from past patterns when forecasting response time and resource requirements. SVMs can classify incidents by expected response time or resource requirement, while neural networks excel at complicated prediction tasks involving many influencing factors in the historical data. The choice of model ultimately depends on the characteristics of the data and the nature of the forecasting task.
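As a concrete starting point, here is a minimal ARIMA forecasting sketch with statsmodels; the file name, the series column, the (1, 1, 1) order, and the 14-step horizon are illustrative assumptions rather than tuned choices.

```python
# Minimal ARIMA sketch (statsmodels); data source and order are assumptions.
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical daily response-time series indexed by date.
series = pd.read_csv("response_times.csv", index_col="date",
                     parse_dates=True)["response_time"]

model = ARIMA(series, order=(1, 1, 1))   # d=1 differencing helps with stationarity
fit = model.fit()
print(fit.summary())

forecast = fit.forecast(steps=14)        # forecast the next two weeks
print(forecast)
```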

MTH 522 – 11/13/2023

Random Forest Overview:
Random Forest is a powerful ensemble learning technique that combines multiple models to enhance predictions. By aggregating diverse model outputs, it mitigates overfitting and boosts overall performance.
Key Features:
1. Bagging: Utilizes bagging, randomly sampling subsets from the training data to create diverse training sets for each decision tree. This prevents overfitting.
2. Random Feature Selection: Randomly selects a subset of features at each split node, reducing correlation between trees and improving generalization ability.
3. Decision Tree Construction: Builds each tree like a standard decision tree, but considers only a random subset of features at each node, which helps avoid overfitting.
Prediction Process:
Classification: Uses a majority vote among individual trees for the final prediction.
Regression: Averages predictions from all trees for the final regression prediction.
Advantages:
– Improved predictive accuracy compared to individual decision trees.
– Robust to noisy data and overfitting, and works well with minimal tuning.
– Handles both classification and regression tasks.
Disadvantages:
– Can be computationally expensive for a large number of trees.
– Interpretability challenges with a large number of trees.
– May not perform well on highly imbalanced datasets, and requires more memory and storage than a single tree.
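A minimal scikit-learn sketch of the ideas above; the synthetic dataset and the hyperparameter values (200 trees, square-root feature subsets) are illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# n_estimators sets the number of bagged trees; max_features="sqrt" enables
# random feature selection at each split.
rf = RandomForestClassifier(n_estimators=200, max_features="sqrt", random_state=42)
rf.fit(X_train, y_train)

print("Test accuracy:", accuracy_score(y_test, rf.predict(X_test)))
```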

MTH 522 – 11/29/2023

I’m happy to report that I’ve examined the Project 1 feedback and made the necessary adjustments. I have learned the following:
NLTK is a popular Python library for natural language processing that provides an extensive toolkit for tasks like tokenization and sentiment analysis. TextBlob is a more user-friendly Python library that simplifies text processing; it includes tools designed for manipulating textual data as well as a sentiment analysis API. VADER, an acronym for Valence Aware Dictionary and Sentiment Reasoner, uses a rule-based methodology with a pre-built lexicon and specializes in sentiment analysis of social media text. Scikit-learn is a well-known machine learning framework that is useful for building and evaluating machine learning models, including those used in sentiment analysis projects. Finally, Hugging Face’s Transformers library provides pre-trained models like BERT and GPT that users can fine-tune for particular sentiment analysis applications.
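A minimal VADER sketch using NLTK’s built-in analyzer; the sample sentence is an illustrative assumption.

```python
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time download of the lexicon

sia = SentimentIntensityAnalyzer()
text = "The feedback was extremely helpful, but the deadline was very tight."
scores = sia.polarity_scores(text)  # neg/neu/pos plus a compound score in [-1, 1]
print(scores)
```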

MTH 522 – 11/27/2023

I have become deeply engaged with more sophisticated statistical analysis, with a particular focus on regression modeling. This method lets us systematically measure the correlations between important factors, adding a quantitative dimension to our previously qualitative insights. The analysis marks a turning point in our attempts to understand the intricate relationships buried in our data. Regression modeling has deepened our knowledge by revealing the subtle patterns hidden within the data and turning theoretical understanding into measurable findings.

MTH 522 – 11/24/2023

Having learned a lot from our initial data investigation, I’m ready to use more sophisticated statistical techniques, such as regression modeling and hypothesis testing, to go deeper into our research. By revealing complex relationships between different components, this approach seeks to provide a more nuanced understanding of Boston’s economic dynamics. Moving past the basic investigation, we are now using advanced tools to uncover more specific information about the connections between important variables. This next round of study should yield a more thorough understanding of how the various factors interact, ultimately leading to a more nuanced view of Boston’s economic environment.

MTH 522 – 11/22/2023

Sentiment analysis is difficult because language is inherently ambiguous. A statement’s overall tone, sarcasm, and contextual cues all play a significant role in determining the sentiment it conveys. Moreover, generic sentiment analysis models may not perform well in domain-specific contexts; domain-specific lexicons should be added and models fine-tuned on relevant data. Negations and modifiers must also be taken into account, given their ability to significantly alter a sentence’s sentiment. Effective sentiment analysis models must account for the impact of words like “not” and modifiers like “very.”

MTH 522 – 11/20/2023

Different Methods for Sentiment Analysis:
Supervised Learning: In supervised learning, sentiment analysis is treated as a classification task. Models are trained on labeled datasets in which every text has an associated sentiment label (positive, negative, or neutral); a minimal example follows this list.
Unsupervised Learning: Without the need for labeled training data, unsupervised techniques use topic modeling or clustering to group texts with similar sentiment together.
Deep Learning Methods: Recurrent Neural Networks (RNNs): RNNs are excellent at capturing sequential dependencies in text, although they may struggle with long-range relationships.
Convolutional Neural Networks (CNNs): CNNs are useful for sentiment analysis tasks because they are good at identifying subtle patterns in text.
Transformers: By recognizing word associations and capturing contextual information, transformer-based models—like BERT and GPT—have produced state-of-the-art sentiment analysis results.
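The supervised example promised above: a minimal TF-IDF plus logistic-regression sketch; the tiny labeled corpus is an illustrative assumption.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["I love this product", "Terrible service, very disappointed",
         "Absolutely fantastic experience", "Not worth the money"]
labels = ["positive", "negative", "positive", "negative"]

# TF-IDF features (unigrams and bigrams) feed a logistic-regression classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)

print(clf.predict(["the experience was fantastic", "very disappointed"]))
```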

MTH 522 – 11/17/2023

Opinion mining, or sentiment analysis, is a natural language processing activity that aims to identify the sentiment expressed in a given text. This sentiment can be neutral, positive, negative, or a combination of these. Sentiment analysis has several uses, such as evaluating sentiments in social media content, analyzing consumer feedback, and determining public opinion.
Text Preprocessing: Text data usually goes through preprocessing steps like tokenization, stemming, and stop word removal before being subjected to sentiment analysis. These procedures help to provide a more successful analysis by standardizing and cleaning the text.
Feature extraction: To represent the data required for sentiment analysis, features must be extracted from the preprocessed text. Word embeddings, n-grams, and word frequencies are commonly used features that serve as the basis for sentiment analysis models.
Sentiment lexicons are collections of words that have been matched with a sentiment polarity, which denotes whether the words are neutral, positive, or negative. These lexicons are essential for matching terms within the text and assigning appropriate sentiment scores, providing a more nuanced understanding of the sentiment expressed.
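A minimal NLTK preprocessing sketch covering the tokenization, stop-word removal, and stemming steps described above; the sample sentence is an illustrative assumption, and the exact download names can vary by NLTK version.

```python
import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.tokenize import word_tokenize

# One-time resource downloads (newer NLTK releases also use "punkt_tab").
for resource in ("punkt", "punkt_tab", "stopwords"):
    nltk.download(resource, quiet=True)

text = "The customer reviews were overwhelmingly positive about the new release."
tokens = word_tokenize(text.lower())                                  # tokenization
stop_words = set(stopwords.words("english"))
tokens = [t for t in tokens if t.isalpha() and t not in stop_words]   # stop-word removal
stemmer = PorterStemmer()
print([stemmer.stem(t) for t in tokens])                              # stemming
```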

MTH 522 – 11/15/2023

Making decisions in a variety of disciplines requires the use of forecasting, which makes predictions about future trends using data from the past and present. The basis is time series data that reveals patterns and trends, such as stock prices or sales information. Prior to using forecasting techniques, exploratory data analysis, or EDA, is essential for identifying underlying structures through analytical and visual inspection.
The type of data determines the method to use: machine learning (e.g., LSTM) for complex dependencies, ARIMA for time-dependent data, and linear regression for consistent trends. Preprocessing the data to remove outliers and handle missing values is essential. A chronological train-test split is crucial so the model can learn on one portion of the data and be evaluated on unseen data. Accuracy is gauged with measures such as RMSE and MAE. Through this iterative process, forecasting techniques are improved and made more adaptable to shifting data patterns, enabling meaningful and well-informed decision-making.
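A minimal sketch of a chronological train-test split with RMSE and MAE; the synthetic series and the naive last-value forecast are illustrative assumptions.

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=120)) + 50     # hypothetical monthly series

split = int(len(series) * 0.8)                    # chronological split: no shuffling
train, test = series[:split], series[split:]

predictions = np.repeat(train[-1], len(test))     # naive "last value" baseline

rmse = np.sqrt(mean_squared_error(test, predictions))
mae = mean_absolute_error(test, predictions)
print(f"RMSE: {rmse:.2f}, MAE: {mae:.2f}")
```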

MTH 522 – 11/08/2023

Decision trees are supervised machine learning tools used for regression and classification tasks. Their capacity to handle both numerical and categorical data, along with their interpretability and easy visualization, makes them useful for predictive modeling and decision-making. Decision trees are built by recursive partitioning, in which the dataset is split successively according to particular features or criteria.
At each internal node, the algorithm finds the best feature in the dataset and divides the data on it. The goal is to identify the feature that minimizes error or impurity and best divides the data into homogeneous groups. The splitting criterion determines the best way to divide the data according to the selected feature: Gini impurity and entropy are common criteria for classification tasks, while mean squared error (MSE) is frequently used in regression. The cycle of feature selection, splitting, and child-node creation repeats until a stopping criterion is met, such as a maximum tree depth, a minimum number of samples needed to create a node, or a maximum number of leaf nodes.
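A minimal decision-tree sketch with scikit-learn; the Iris dataset and the specific stopping criteria (max_depth, min_samples_leaf) are illustrative assumptions.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# criterion="gini" measures impurity at each split; the depth and leaf-size
# limits act as the stopping criteria described above.
tree = DecisionTreeClassifier(criterion="gini", max_depth=3,
                              min_samples_leaf=5, random_state=0)
tree.fit(X_train, y_train)

print("Test accuracy:", tree.score(X_test, y_test))
print(export_text(tree))   # text rendering of the learned splits
```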

MTH 522 – 11/06/2023

The Chi-Square test is a statistical technique used to evaluate the relationship or dependence between two categorical variables. It is especially helpful in determining whether there is a significant association between variables and whether the frequencies observed in a contingency table differ significantly from the values expected under independence.
The Chi-Square Test for Independence (sometimes called the χ² Test for Independence) examines whether there is a significant association between two categorical variables; it is often used in studies to determine whether one variable depends on another.
Chi-Square Goodness-of-Fit Test: This test determines whether the observed data fit a particular distribution, be it uniform, normal, or any other expected distribution. It is frequently employed to assess a model’s suitability.
The Chi-Square Test for Homogeneity evaluates whether the distribution of a categorical variable is the same across different populations or groups.
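A minimal chi-square test of independence with SciPy; the 2x2 contingency table of hypothetical counts is an illustrative assumption.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: group A / group B; columns: outcome yes / outcome no.
observed = np.array([[30, 70],
                     [45, 55]])

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.3f}, p = {p_value:.3f}, dof = {dof}")
print("Expected counts under independence:\n", expected)
```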

MTH 522 – 11/03/2023

Mean Shift Clustering allows for a wide range of cluster sizes and forms by searching for the modes (peaks) of data density without requiring a set number of clusters.
Spectral clustering uses the eigenvalues and eigenvectors of a similarity matrix built from a graph representation of the data to divide it into clusters. This allows it to handle non-convex clusters for applications like community detection and image segmentation.
Fuzzy clustering, more specifically fuzzy C-Means, allows data points to belong to several clusters to differing degrees, making it a suitable option when it is difficult to identify definite cluster borders.
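A minimal sketch of mean shift and spectral clustering with scikit-learn; the synthetic two-moons data is an illustrative assumption (fuzzy C-Means is not part of scikit-learn and is omitted here).

```python
from sklearn.cluster import MeanShift, SpectralClustering
from sklearn.datasets import make_moons

X, _ = make_moons(n_samples=300, noise=0.05, random_state=0)

# Mean shift: no preset number of clusters; it seeks density modes.
ms_labels = MeanShift().fit_predict(X)

# Spectral clustering: handles non-convex shapes via a nearest-neighbor similarity graph.
sc_labels = SpectralClustering(n_clusters=2, affinity="nearest_neighbors",
                               random_state=0).fit_predict(X)

print("Mean shift found", len(set(ms_labels)), "clusters")
print("First spectral labels:", sc_labels[:10])
```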

MTH 522 – 11/01/2023

Analysis of Variance, or ANOVA, is a statistical tool used to evaluate mean differences between groups in a sample. It is particularly useful in determining whether there are statistically significant differences among three or more groups or conditions. By assessing whether the variation between group means exceeds the variation within groups, ANOVA is useful in a variety of research and experimental contexts.

One-way ANOVA is the suitable option when dealing with a single categorical independent variable that has more than two levels whose means need to be compared. Two-way ANOVA extends the principles of one-way ANOVA for a more comprehensive investigation incorporating two independent variables and an analysis of their interaction. Multifactor ANOVA is used in situations where more than two independent variables or factors may interact in complex ways.
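A minimal one-way ANOVA sketch with SciPy; the three hypothetical groups of measurements are illustrative assumptions.

```python
from scipy.stats import f_oneway

group_a = [23, 25, 27, 22, 26]
group_b = [30, 29, 31, 28, 32]
group_c = [24, 26, 25, 27, 23]

f_stat, p_value = f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.3f}, p = {p_value:.4f}")
# A small p-value suggests at least one group mean differs from the others.
```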

MTH 522 – 10/30/2023

K-means and DBSCAN are fundamentally different clustering techniques, each with its own advantages and disadvantages. Because K-means depends on centroid-based partitioning and needs the user to predefine the number of clusters, it works best when the number of clusters is known and the cluster shapes are roughly spherical. It is, however, sensitive to initialization and less effective when handling irregularly shaped clusters. DBSCAN, a density-based technique, works well at finding clusters of any shape and doesn’t require a predetermined number of clusters. It handles noise and outliers gracefully, labeling them as unassigned points rather than forcing them into clusters, which makes it a good option for more intricate and varied clustering tasks. The particulars of the data and the intended clustering objectives should be taken into consideration when selecting between K-means and DBSCAN.
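A minimal K-means versus DBSCAN comparison sketch with scikit-learn; the synthetic two-moons data and the eps/min_samples values are illustrative assumptions.

```python
from sklearn.cluster import DBSCAN, KMeans
from sklearn.datasets import make_moons

X, _ = make_moons(n_samples=300, noise=0.05, random_state=0)

# K-means needs the cluster count up front and assumes roughly spherical shapes.
kmeans_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# DBSCAN finds arbitrarily shaped clusters and marks outliers with the label -1.
dbscan_labels = DBSCAN(eps=0.2, min_samples=5).fit_predict(X)

print("First K-means labels:", kmeans_labels[:10])
n_clusters = len(set(dbscan_labels)) - (1 if -1 in dbscan_labels else 0)
print("DBSCAN clusters found:", n_clusters)
```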

MTH 522 – 10/23/2023

Logistic regression is a statistical method employed to analyze datasets with one or more independent variables that influence a binary or dichotomous outcome. It is particularly well-suited for situations where the result falls into one of two categories, such as ‘Yes’ or ‘No.’ The key elements of logistic regression include modeling the relationship between independent variables and the log-odds of the binary outcome, utilizing a sigmoid function to transform these log-odds into probabilities between 0 and 1, and estimating coefficients for independent variables to determine the strength and direction of their impact. Additionally, logistic regression calculates odds ratios to quantify how changes in independent variables affect the odds of the binary outcome. This method finds applications in diverse fields, from medical research to marketing and credit scoring, providing valuable insights into the likelihood of specific events occurring based on a set of relevant factors.
Logistic regression serves as a powerful analytical tool for understanding and modeling binary outcomes across a wide range of domains. It enables researchers and analysts to uncover the intricate relationships between independent variables and the probability of specific events, offering practical applications in medical prognosis, customer behavior prediction, credit risk assessment, and more. Whether it’s predicting the likelihood of a patient developing a medical condition or forecasting customer purchase decisions, logistic regression proves invaluable in making informed decisions and understanding the dynamics of binary outcomes.
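A minimal logistic-regression sketch with scikit-learn; the breast-cancer dataset stands in for any binary (‘Yes’/‘No’) outcome and is an illustrative choice.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# Coefficients live on the log-odds scale; exponentiating gives odds ratios.
coefs = model.named_steps["logisticregression"].coef_[0]
print("First five odds ratios:", np.exp(coefs[:5]).round(3))
print("Test accuracy:", model.score(X_test, y_test))
```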

MTH 522 – 10/27/2023

There are three primary types of logistic regression: binary, ordinal, and multinomial. Binary Logistic Regression is ideal for scenarios where there are only two possible outcomes, like loan approval, cancer risk assessment, or sports match predictions. Ordinal logistic regression is used when the dependent variable has ordered categories with unequal intervals, such as student selections or pet food types. Multinomial logistic regression is appropriate when the dependent variable is nominal, meaning it has more than two levels and no particular order, such as test scores, survey replies, or shirt sizes.
Key practices for the effective application of logistic regression include understanding the technical requirements of the model, carefully choosing dependent variables to maintain consistency in the model, accurate estimation, interpreting results meaningfully, and comprehensive validation to guarantee the accuracy and reliability of the model.

MTH 522 – 10/20/2023

Generalized linear mixed models (GLMMs) combine generalized linear models (GLMs) with mixed-effects models. They are helpful when dealing with data that is not normally distributed and exhibits correlations or hierarchical structure. GLMMs work well with hierarchical data because they account for both fixed and random effects; they use link functions to connect the linear predictor to the response and estimate parameters by maximum likelihood.
GLMMs can detect regional clusters, temporal trends, and demographic discrepancies in the context of police fatal shootings. They aid in determining risk variables and evaluating how policy changes may affect the number of fatal shootings.
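A minimal mixed-effects sketch with statsmodels. It uses a linear mixed model (MixedLM) as a simplified stand-in for a full GLMM, and the file name and column names (shootings, year, state) are hypothetical assumptions.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("state_year_counts.csv")   # hypothetical aggregated counts

# Fixed effect: a year trend; random intercept: state-level variation.
model = smf.mixedlm("shootings ~ year", data=df, groups=df["state"])
result = model.fit()
print(result.summary())
```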

MTH 522 – 10/18/2023

I’ve started my examination of the ‘fatal-police-shootings-data’ dataset in Python. I’ve initiated the process of loading the data to examine its different variables and their respective distributions. Notably, the ‘age’ variable, which is a numerical column, stands out as it provides insights into the ages of individuals who tragically lost their lives in police shootings. Additionally, the dataset includes latitude and longitude values, allowing us to precisely determine the geographical locations of these incidents.
During this initial assessment, I’ve come across an ‘id’ column, which appears to have limited relevance for our analysis. Consequently, I’m considering excluding it from our further investigation. Going deeper, I’ve scanned the dataset for missing values, revealing that several variables contain null or missing data, including ‘name,’ ‘armed,’ ‘age,’ ‘gender,’ ‘race,’ ‘flee,’ ‘longitude,’ and ‘latitude.’ Furthermore, I’ve checked the dataset for potential duplicate records, and I found only a single duplicate entry, notable for its absence of a ‘name’ value. As we move on to the next phase of this analysis, our focus will shift to exploring the distribution of the ‘age’ variable, a crucial step in gaining insights from this dataset.
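A minimal sketch of the checks described above; the file path is an assumption, while the column names match those mentioned for this dataset.

```python
import matplotlib.pyplot as plt
import pandas as pd

df = pd.read_csv("fatal-police-shootings-data.csv")

df = df.drop(columns=["id"])             # drop the low-relevance identifier column
print(df.isnull().sum())                 # null counts per column (name, armed, age, ...)
print("Duplicate rows:", df.duplicated().sum())

df["age"].hist(bins=30)                  # first look at the age distribution
plt.xlabel("age")
plt.show()
```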
In our recent classroom session, we acquired essential knowledge about computing geospatial distances using location information. This newfound expertise enables us to create GeoHistograms, a valuable tool for visualizing and analyzing geographic data. GeoHistograms serve as a powerful instrument for identifying spatial patterns, pinpointing hotspots, and discovering clusters within datasets related to geographic locations. As a result, our understanding of the underlying patterns in the data is significantly enhanced.
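A minimal haversine-distance sketch for the geospatial work mentioned above; the reference point is an arbitrary illustrative choice, and the commented lines assume the DataFrame from the previous sketch.

```python
import numpy as np

def haversine(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle distance between points given in decimal degrees."""
    lat1, lon1, lat2, lon2 = map(np.radians, [lat1, lon1, lat2, lon2])
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = np.sin(dlat / 2) ** 2 + np.cos(lat1) * np.cos(lat2) * np.sin(dlon / 2) ** 2
    return 2 * radius_km * np.arcsin(np.sqrt(a))

ref_lat, ref_lon = 42.3601, -71.0589     # arbitrary reference point
# distances = haversine(df["latitude"], df["longitude"], ref_lat, ref_lon)
# distances.hist(bins=50)                # a simple GeoHistogram of distances
```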

MTH 522 – 10/16/2023

In machine learning and data analysis, cluster analysis is a powerful technique that groups related objects or data points together based on their shared properties. Its basic goal is to find underlying patterns in complicated datasets, which in turn supports easier and better-informed decision-making. Because it does not require labeled data for training, the method is widely used in fields such as image segmentation, anomaly detection, and customer segmentation. Well-known clustering algorithms include K-Means, Hierarchical Clustering, and DBSCAN. K-Means, for example, divides a dataset into K separate clusters by repeatedly assigning data points to the closest cluster centroid, with the goal of minimizing the sum of squared distances between each data point and its centroid. K-Means is an efficient technique, although it does require the number of clusters to be specified in advance, which is an important analysis parameter.
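Since K-Means needs K in advance, here is a minimal elbow-method sketch for choosing it by tracking the within-cluster sum of squares (inertia); the synthetic blobs are an illustrative assumption.

```python
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=500, centers=4, random_state=0)

inertias = []
for k in range(1, 10):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    inertias.append(km.inertia_)          # within-cluster sum of squared distances

plt.plot(range(1, 10), inertias, marker="o")
plt.xlabel("Number of clusters K")
plt.ylabel("Inertia")
plt.show()                                # the "elbow" suggests a reasonable K
```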

MTH 522 – 10/13/2023

In my initial steps of working with the two CSV files, ‘fatal-police-shootings-data’ and ‘fatal-police-shootings-agencies,’ my process began with loading them into Jupyter Notebook. Here’s an overview of the actions I took and the obstacles I encountered:
1. Data Loading: I initiated the process by importing both CSV files into Jupyter Notebook. The ‘fatal-police-shootings-data’ dataset consists of 8,770 entries with 19 attributes, whereas the ‘fatal-police-shootings-agencies’ dataset comprises 3,322 entries with 5 attributes.
2. Column Correspondence: Upon reviewing the column descriptions available on GitHub, I realized that the ‘ids’ column in the ‘fatal-police-shootings-agencies’ dataset corresponds to the ‘agency_ids’ in the ‘fatal-police-shootings-data’ dataset. To streamline the merging process, I renamed the column in the ‘fatal-police-shootings-agencies’ dataset from ‘ids’ to ‘agency_ids.’
3. Data Type Inconsistency: When I attempted to merge the two datasets using the ‘agency_ids,’ I encountered an error indicating an inability to merge on a column with mismatched data types. Upon inspecting the data types using the ‘.info()’ function, I found that one dataset had the ‘agency_ids’ column as an object type, while the other had it as an int64 type. To address this, I utilized the ‘pd.to_numeric()’ function to ensure both columns were of type ‘int64.’
4. Data Fragmentation: A new challenge surfaced in the ‘fatal-police-shootings-data’ dataset: the ‘agency_ids’ column contained multiple IDs within a single cell. To overcome this, I am currently in the process of splitting these cells into multiple rows.
Once I successfully split the cells in the ‘fatal-police-shootings-data’ dataset into multiple rows, my next steps will involve a deeper dive into data exploration and the start of data preprocessing. This will encompass tasks like data cleaning, managing missing data, and preparing the data for analysis or modeling.
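A minimal sketch of the merge and cell-splitting steps described above; the file paths, the semicolon delimiter, and the exact dtypes are assumptions.

```python
import pandas as pd

shootings = pd.read_csv("fatal-police-shootings-data.csv")
agencies = pd.read_csv("fatal-police-shootings-agencies.csv")

# Align the join key name and dtype before merging.
agencies = agencies.rename(columns={"ids": "agency_ids"})
agencies["agency_ids"] = pd.to_numeric(agencies["agency_ids"], errors="coerce").astype("Int64")

# Cells like "123;456" hold several agency ids; split them into separate rows.
shootings["agency_ids"] = shootings["agency_ids"].astype(str).str.split(";")
shootings = shootings.explode("agency_ids")
shootings["agency_ids"] = pd.to_numeric(shootings["agency_ids"], errors="coerce").astype("Int64")

merged = shootings.merge(agencies, on="agency_ids", how="left")
print(merged.info())
```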

MTH 522 – 10/11/2023

The Washington Post’s “Fatal Force Database” is a comprehensive project that painstakingly tracks and documents instances in which on-duty American police officers shoot and kill civilians. The database includes important information such as the victim’s race, the circumstances of the shooting, whether the victim was carrying a weapon, and whether the victim was going through a mental health crisis. Data is gathered from a number of sources, including social media, law enforcement websites, independent databases like Fatal Encounters, and local news articles.

Importantly, the database was updated in 2022 to improve accountability and transparency by standardizing and making publicly available the identities of the police agencies involved. Since 2015, this dataset has consistently documented more than twice as many fatal police shootings as official sources like the FBI and CDC, highlighting a large data gap and underscoring the need for thorough tracking. Constantly updated, it remains an invaluable resource for researchers, policymakers, and the general public. It provides information about shootings in which police are involved, encourages openness, and contributes to the ongoing conversations about police accountability and reform.

MTH 522 – 10/02/2023

I have fully understood the fundamental ideas and guidelines presented in the provided instructions for authoring data science reports. It has been fascinating to see the need for developing informative titles, summarizing the key points in a straightforward and non-technical way, presenting findings fully, and explaining their ramifications. Additionally, the methodical approach used to arrange appendices, describe results, and ensure correct referencing has given me important new perspectives on how to write reports.
Along with understanding these fundamental concepts, I also took the initiative to skim the website’s “Punchline reports” PDF. This additional resource should improve my comprehension of data science report writing and its particular complexities.

I am excited to put this information to use in my upcoming projects and keep honing my ability to effectively deliver data-driven insights.

MTH 522 – 09/29/2023

I have used Python to put my knowledge of polynomial regression and 5-fold cross-validation into practice, which has strengthened my understanding of these ideas. Translating these methods required overcoming difficulties with data preprocessing and adapting to Python’s syntax for data visualization with Matplotlib and Seaborn.
I also learned a lot from instructional videos on K-fold cross-validation, prediction error estimation, the value of validation sets, and other related topics. My comprehension of these ideas, and of how to use them effectively in Python, has increased as a result of these hands-on applications and educational resources.
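A minimal sketch of polynomial regression evaluated with 5-fold cross-validation; the synthetic data and the degrees compared are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = 0.5 * X[:, 0] ** 3 - X[:, 0] + rng.normal(scale=2, size=200)

# Compare polynomial degrees by mean cross-validated MSE.
for degree in (1, 2, 3, 5):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    scores = cross_val_score(model, X, y, cv=5, scoring="neg_mean_squared_error")
    print(f"degree {degree}: mean CV MSE = {-scores.mean():.2f}")
```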

MTH 522 – 09/27/2023

I successfully learned and applied 5-fold cross-validation and polynomial regression. The process of translating this technique into Python was rewarding, albeit not without problems. Adapting the data preprocessing methods, guaranteeing data consistency, and dealing with data anomalies were all key challenges. Furthermore, transferring the nonlinear model-fitting part into Python required a good understanding of Python tools such as scikit-learn and its modeling functions.

Another noteworthy task was reproducing data visualizations, specifically the ListPlot of mean squared error (MSE) values, using Python plotting packages such as Matplotlib or Seaborn. Python’s syntax and customization options for creating similar charts differed from those in Mathematica. Finally, debugging and error handling were critical in verifying that the Python code produced results consistent with the Mathematica code. Despite these difficulties, the translation process provided an excellent opportunity to improve my grasp of cross-validation and polynomial regression in a Python programming context.

MTH 522 – 09/26/2023

I watched the video “Estimating Prediction Error and Validation Set Approach” and learned how important it is to accurately estimate prediction error in machine learning models. The video emphasized the significance of reserving a validation set to evaluate the model’s performance.

The “K-fold Cross-Validation” video was quite informative. I now understand how K-fold cross-validation improves the robustness of a model’s performance estimate over a single validation set. I applied this strategy in Python to a diabetes dataset, dividing the data into ‘K’ subgroups and using each as a validation set while training on the remaining data iteratively. I also found the “Cross-Validation: The Right and Wrong Ways” video quite informative; it stressed the correct and incorrect ways of performing cross-validation, shedding light on common mistakes to avoid.

MTH 522 – 09/22/2023

I carried out a detailed investigation, first visualizing the non-normal distributions of the post-molt and pre-molt data by producing a histogram for each group independently. To learn more, I plotted both histograms side by side and performed a statistical analysis to determine whether there was a significant difference in the means, calculating a p-value with a t-test.
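A minimal sketch of the overlaid histograms and two-sample t-test described above; the crab-molt file name and column names are assumptions.

```python
import matplotlib.pyplot as plt
import pandas as pd
from scipy.stats import ttest_ind

crabs = pd.read_csv("crab_molt.csv")            # hypothetical file with both columns

plt.hist(crabs["premolt"], bins=30, alpha=0.6, label="pre-molt")
plt.hist(crabs["postmolt"], bins=30, alpha=0.6, label="post-molt")
plt.legend()
plt.show()

# Welch's t-test does not assume equal variances.
t_stat, p_value = ttest_ind(crabs["postmolt"], crabs["premolt"], equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.4g}")
```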

MTH 522 – 09/20/2023

In the last class I studied linear regression modeling and its application to data fitting. This entails understanding how to work with non-normally distributed, skewed variables with high variance and kurtosis. Furthermore, I have experimented with a dataset that includes two measurements: “post-molt,” the size of a crab’s shell after molting, and “pre-molt,” the size of a crab’s shell before molting. I successfully performed linear regression on this dataset, drawing the regression line and evaluating descriptive statistics to gain a better understanding of the relationships in the data. I have also learned about the significance of the t-test and its use in statistical analysis.
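A minimal linear-regression sketch for the pre-molt versus post-molt relationship; it reuses the hypothetical crab_molt.csv columns assumed in the previous sketch.

```python
import pandas as pd
from scipy.stats import linregress

crabs = pd.read_csv("crab_molt.csv")
result = linregress(crabs["premolt"], crabs["postmolt"])

print(f"slope = {result.slope:.3f}, intercept = {result.intercept:.3f}")
print(f"r^2 = {result.rvalue ** 2:.3f}, p-value = {result.pvalue:.3g}")
print(crabs[["premolt", "postmolt"]].describe())   # descriptive statistics
```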

MTH 522 – 09/18/2023

In the previous class I learned about correlation analysis, linear regression with three variables, creating 3D plots, analyzing residual plots, and understanding the planes formed in 3D plots when three variables are considered. I also looked at how quadratic equations can be used in data analysis.
I’ve been working on graphs that incorporate data on diabetes, inactivity, and obesity. My current focus is on plotting the linear regression plane within the 3D plot, which I hope will provide useful information about the correlations between these variables.
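A minimal sketch of fitting a two-predictor regression and drawing its plane in a 3D plot; the file name and column names (inactivity, obesity, diabetes) are assumptions based on the variables mentioned above.

```python
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.read_csv("cdc_merged.csv")                  # hypothetical merged table
X = df[["inactivity", "obesity"]].values
y = df["diabetes"].values

reg = LinearRegression().fit(X, y)

# Evaluate the fitted plane on a grid over the two predictors.
xg, yg = np.meshgrid(np.linspace(X[:, 0].min(), X[:, 0].max(), 20),
                     np.linspace(X[:, 1].min(), X[:, 1].max(), 20))
zg = reg.intercept_ + reg.coef_[0] * xg + reg.coef_[1] * yg

ax = plt.figure().add_subplot(projection="3d")
ax.scatter(X[:, 0], X[:, 1], y, s=10)
ax.plot_surface(xg, yg, zg, alpha=0.3)
ax.set_xlabel("inactivity")
ax.set_ylabel("obesity")
ax.set_zlabel("diabetes")
plt.show()
```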

MTH 522 – 09/15/2023

During our last class, my comprehension of the significance of p-values in statistical analysis was strengthened by our in-depth discussion of the topic. I also raised some questions about how linear regression calculates distances. Given that distance is normally calculated as the perpendicular distance between a point and a line in geometry, I was confused as to why the distance is measured parallel to the y-axis. I also asked why polynomials of different degrees weren’t explored instead of using the linear equation “y = mx + c” exclusively to model the data. I’m happy to report that my questions were thoughtfully answered in class, giving me clarity on these topics.

Additionally, I learned a lot from the inquiries made by others.

MTH 522 – 09/13/2023

When I compare the kurtosis values for diabetes and inactivity with those given in the course materials and discussed in class, I see a disparity. I used the kurtosis() function from the scipy library to compute these statistics. The observed differences have prompted questions about the distribution of the underlying data and the potential causes of these variations. I am looking into possible explanations for the discrepancies and trying to reconcile the calculated kurtosis values with the expected trends. For our data analysis to be accurate and reliable, this inquiry is crucial.

Along with the kurtosis research, I have also experimented with modeling the link between diabetes and inactivity using regression approaches. Although linear regression is frequently used for this purpose, I tried polynomial regression to identify more complex data patterns. According to my analysis of different polynomial degrees, a polynomial of effectively degree 6 (y = -0.00x^8 + 0.00x^7 - 0.14x^6 + 3.88x^5 - 67.96x^4 + 753.86x^3 - 5171.95x^2 + 20053.68x - 33621.52, where the two leading coefficients round to zero) offers the best fit for our dataset. This conclusion raises an important question: why do we frequently use linear regression when polynomial regression seems to provide a more realistic depiction of the complexity of the data? To understand these variables’ dynamics better and to choose the best regression strategy for this particular dataset, more research is required, and I am currently working on it.
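A minimal sketch of comparing polynomial fits of several degrees with NumPy; the x and y arrays stand in for the inactivity and diabetes percentages and are illustrative assumptions. Training error alone always favors higher degrees, which is one reason linear models and cross-validation remain the usual starting point.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(10, 35, 200)                          # hypothetical % inactivity
y = 2 + 0.15 * x + rng.normal(scale=0.8, size=200)    # hypothetical % diabetic

for degree in (1, 2, 3, 6):
    coeffs = np.polyfit(x, y, degree)                 # least-squares polynomial fit
    residuals = y - np.polyval(coeffs, x)
    print(f"degree {degree}: training MSE = {np.mean(residuals ** 2):.3f}")
# Higher degrees always lower training error; cross-validation is needed to
# judge whether the extra flexibility actually generalizes.
```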

MTH 522 – 09/11/2023

I used Python to combine the three tables (Diabetes, Obesity, and Inactivity) on a shared column called “FIPS.” I ran into a problem where the column titles in one of the tables were different, which I had to fix to ensure consistent merging. The Diabetes and Inactivity tables were successfully combined into a DataFrame, and I then set out to examine the data’s statistical characteristics, including mean, mode, median, standard deviation, skewness, and kurtosis.
I have also revisited key concepts from our last class, including correlation, scatterplots, linear regression, residual analysis, heteroscedasticity, and the Breusch-Pagan Test. Having gained a solid understanding of these fundamental concepts, I’m eager to put them into practice using Python. I plan to apply these techniques to the Diabetes, Obesity, and Inactivity data and explore further insights.
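A minimal sketch of the FIPS merge and descriptive statistics described above; the file names, the mismatched column title, and the diabetes_pct column are assumptions.

```python
import pandas as pd

diabetes = pd.read_csv("diabetes.csv")
inactivity = pd.read_csv("inactivity.csv")

# Harmonize the join key before merging (one table used a different title).
inactivity = inactivity.rename(columns={"fips_code": "FIPS"})
merged = diabetes.merge(inactivity, on="FIPS", how="inner")

col = merged["diabetes_pct"]
print("mean:", col.mean(), "median:", col.median(), "mode:", col.mode().iloc[0])
print("std:", col.std(), "skew:", col.skew(), "kurtosis:", col.kurtosis())
```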