Article

Machine Learning Algorithms for Predicting Mechanical Stiffness of Lattice Structure-Based Polymer Foam

by Mohammad Javad Hooshmand, Chowdhury Sakib-Uz-Zaman and Mohammad Abu Hasan Khondoker *
Industrial Systems Engineering, Faculty of Engineering and Applied Science, University of Regina, Regina, SK S4S 0A2, Canada
* Author to whom correspondence should be addressed.
Materials 2023, 16(22), 7173; https://doi.org/10.3390/ma16227173
Submission received: 26 September 2023 / Revised: 3 November 2023 / Accepted: 9 November 2023 / Published: 15 November 2023

Abstract
Polymer foams are extensively utilized because of their superior mechanical and energy-absorbing capabilities; however, foam materials of consistent geometry are difficult to produce because of their random microstructure and stochastic nature. Alternatively, lattice structures provide greater design freedom to achieve desired material properties by replicating mesoscale unit cells. Such complex lattice structures can only be manufactured effectively by additive manufacturing or 3D printing. The mechanical properties of lattice parts are greatly influenced by the lattice parameters that define the lattice geometries. To study the effect of lattice parameters on the mechanical stiffness of lattice parts, 360 lattice parts were designed by varying five lattice parameters, namely, lattice type, cell length along the X, Y, and Z axes, and cell wall thickness. Computational analyses were performed by applying the same loading condition to these lattice parts and recording the corresponding strain deformations. To effectively capture the correlation between these lattice parameters and the parts’ stiffness, five machine learning (ML) algorithms were compared: Linear Regression (LR), Polynomial Regression (PR), Decision Tree (DT), Random Forest (RF), and Artificial Neural Network (ANN). Using evaluation metrics such as mean squared error (MSE), root mean squared error (RMSE), and mean absolute error (MAE), all ML algorithms exhibited significantly low prediction errors during the training and testing phases; however, the Taylor diagram demonstrated that ANN surpassed the other algorithms, with a correlation coefficient of 0.93. This finding was further supported by the relative error box plot and by plots comparing actual vs. predicted values. This study demonstrated that the mechanical stiffness of lattice parts can be accurately predicted for a desired set of lattice parameters.

Graphical Abstract

1. Introduction

Polymer foams are used extensively for their mechanical properties, energy-absorption capabilities, low weight, exceptional cushioning qualities, and excellent insulating behavior [1,2]. Polymer foam can be defined as a two-phase system that consists of gas bubbles dispersed in a polymer matrix [3]. It has a wide range of application areas, including the automotive industry [4], engineering materials [5], packaging [6], thermal insulation [7,8], protection [9], housing decoration, mattresses, furniture, and electronic devices [10]. As polymer foams undergo large deformation under compression, understanding their mechanical behavior, especially deformation under different loading conditions, is crucial [10,11,12]. Functionally graded materials are inhomogeneous composites with modifiable features that are now employed extensively across a variety of industries [13]. Foam materials with varying degrees of functionality have been demonstrated to work well in shock-absorbing applications [14,15]. Recent studies have also asserted that foam properties can be significantly influenced by foam structure and morphology, such as the spatial distribution and gradient of cell size [16,17]. Asymmetric spatial features can produce better mechanical and thermal-insulation outcomes, making such foams beneficial in a range of applications requiring impact resistance, high strength at low weight, and thermal or sound insulation [14,18]. It has been demonstrated that the use of functionally graded foam materials offers high performance in applications requiring compression resistance and shock absorption [14,15].
One important application of polymer foam is in the design of custom mattresses. As humans spend about one-third of their lives lying in bed, a custom mattress designed in accordance with body curvature and weight distribution is very important to relieve any back pain or discomfort [19]. While designing a mattress, three aspects are considered: the shape and mass distribution of the human body, the mechanical properties of the mattress material, and the interaction between the human body and the material [20,21]. Another prospective application is footwear, due to the high impact load repetitively exerted on the feet, which is several times greater than body weight [22]. Proper footwear cushioning is necessary to prevent repetitive stress injuries, since this high load is repeated with every step during walking [22,23]. Additionally, the right footwear can enhance exercise comfort and performance. Hence, with the right design of functionally graded foam materials, it is possible to create useful ergonomic items such as shoe soles. The soles of shoes should be lightweight and have adequate shock absorption and endurance [24,25]. Contemporary sports footwear is engineered around a viscoelastic midsole, commonly composed of polymeric foam, to reduce mechanical stress waves [26]. Similarly, athlete safety and the prevention of injuries are both crucial, which is why different foam constructions are employed in many areas of protective gear and in surfaces on which sports activities can be practiced safely [2,22].
The drawback of foam materials, however, is that they are stochastic and have a random microstructure [27]. As the microstructure of these foam materials plays a crucial role in their global behavior and properties, researchers have tried to find predictable alternatives for foams [28,29]. Lattice structures, in particular, are the subject of substantial research due to their multi-functional properties, including load carrying [30], energy absorption [31], heat exchange [32], and building materials [33]. They are created by duplicating mesoscale unit cells in three dimensions. They offer extreme design freedom to alter the geometries of unit cells in order to attain desired macro-scale material attributes for a variety of applications [34]; however, producing these complex and intricate lattice structures can be infeasible using conventional manufacturing processes, in which case, the need for advanced manufacturing comes into play [35].
Additive manufacturing (AM), also known as 3D printing, is a cutting-edge technology that enables the production of complex geometries and near-net-shape components with minimal raw material consumption [36,37,38,39]. Utilizing the benefits of 3D-printing technology, functionally graded lattice materials can be manufactured with a uniform and ordered structure, and their unit cells can be manipulated and optimized to achieve the desired mechanical properties for a specific application [40,41]. Three-dimensional printed polymeric lattice structures have been studied for their uses in energy absorption [31], building materials [33], enhanced ductility [42], and mechanical properties [43].
A wide variety of factors can significantly impact the behavior of 3D-printed lattice parts, which would, in turn, affect their mechanical behaviors. Therefore, understanding the relation between the lattice structural parameters and mechanical performance, such as stiffness, is of vital importance for the optimization of the lattice design [44]. In this context, machine learning (ML), a subset of artificial intelligence (AI), plays a vital role by analyzing the hidden links and patterns within a given dataset. ML uses data analysis to recognize patterns and connections, enabling it to perform specific functions. ML algorithms are better able than conventional methods to detect non-linear interactions between the parameters of an AM process and mechanical properties such as deformation. AI- and ML-based tools play a crucial role in hastening the advancement of new materials, production methods, and processes [45]. The methods are divided into supervised learning, where the algorithm picks up knowledge from labeled training data and assists in making predictions for unforeseen data, and unsupervised learning, where the algorithm establishes relationships between features of interest by working with unlabeled data [45]. Building connections and drawing conclusions from data, systems, or frameworks, with the ability to automatically learn and improve without explicit programming, can be facilitated using ML techniques [46]. In this study, a number of lattice structures were designed and computational analyses were performed to understand the effect of lattice geometries on their mechanical stiffness. Then, different ML algorithms were evaluated to study their performance.

2. Methodology

2.1. Data Generation

In this work, nTop (https://www.ntop.com/ (accessed on 13 November 2023), New York, NY, USA) was utilized with a non-commercial license to design a total of 360 lattice parts by changing five lattice parameters, namely, lattice type, cell length along the X, Y, and Z axes, and cell wall thickness. Once designed, these lattice structures were subjected to the same loading conditions using the nTop Simulation module. Then, the corresponding strain deformations were recorded to form a dataset that was analyzed by ML algorithms to establish a correlation among them.

2.2. Designing Lattice Structures

Using the lattice parameters of unit cells listed in Table 1, lattice structures with a volume of 50 × 50 × 54 mm³ were designed in nTop by following the workflow outlined in Figure 1. From nTop’s library, six walled triply periodic minimal surface (WTPMS)-type unit cells and twenty-three graph-type unit cells, twenty-nine in total, were used, as presented in Table 2.
To design the different lattice structures, a 50 × 50 × 50 mm³ cube was first modeled in nTop. Then, a 2 mm thick plate was added at the top and bottom surfaces of the cube using the “Boolean Union” block, which resulted in a single implicit body.
The next stage was to create lattice structures within that cubic body. In order to do so, the first step was to define the “Unit Cell” and the “Cell map”, both of which would be used as inputs into the “Periodic Lattice” block to create the lattices. Six types of unit cells from the “Walled TPMS (WTPMS) Unit cell” block and 23 unit cells from the “Graph Unit cell” block were used to define the unit cell of the lattices. The unit cells are listed in Table 2.
After that, the “Rectangular Cell Map” block was used to create a rectangular cell map within the implicit body. The important parameter of this block is the cell size, which can be varied along the X, Y, and Z axes; in this work, 20, 25, and 30 mm were used as cell sizes along the three axes. Later, the “Periodic Lattice” block was used to generate the lattices by combining the unit cells and cell maps. Here, “thickness” is a vital parameter, and values of 2, 3, and 4 mm were used in that field. Lastly, the final part for a 50 × 50 × 54 mm³ lattice structure was created by using the “Boolean Intersect” and “Boolean Union” blocks, where the periodic lattice and the single implicit body from the earlier steps were used as inputs. Figure 1 shows a 50 × 50 × 54 mm³ lattice structure with a face-centered cubic foam unit cell, cell sizes of 25, 25, and 30 mm along the X, Y, and Z axes, and a thickness of 3 mm.
Meshing is the method of dividing a 3D model into many elements in order to accurately define its shape. In nTop, Mesh (surface mesh), Volume Mesh, and Finite Element (FE) Mesh are the three primary types of meshes. FE Mesh is a solid mesh and is used for simulation. Our objective is to convert the implicit body designed in the previous step into an FE Mesh so that simulation can be run on that body.
nTop recommends several steps that need to be followed for the conversion process that is shown in Figure 1. First, a mesh from the implicit body was created; however, meshes usually need further refinement to reduce file size, decrease element (triangle) count, and capture fine details before they can be used for simulation. “Simplify Mesh by amount” is one such method, which reduces the number of triangles on the surface mesh, depending on the amount entered. For example, an amount input of 0.5 removes half of the mesh elements. Later, the “Remesh surface” option was used to clean the defects of the parts and to consolidate meshes into fewer elements. After that, the surface mesh was converted to the solid mesh by the “volume mesh” option and, finally, “FE Volume Mesh” was used to convert the solid mesh into FE Mesh, which was used for the simulation.

2.3. Computational Analysis

The material used in this study was polyethylene (PE). To simulate the properties of PE, the parameters listed in Table 3 were used in the “Isotropic Material” block [47].
The final FE Solid model was created by combining the material block and the FE volume mesh block.
After that, the bottom part of the solid model was restrained (shown as red spikes in Figure 2a) and a 50 N force was applied to the top part of the solid body (shown as green spikes in Figure 2a). The overall boundary condition is shown in Figure 2a. The static analysis block in nTop was used to run the simulations. The maximum mid-strain value found in this example was 2.38985 × 10⁻⁵, as shown in Figure 2b. Similarly, 360 simulations were conducted by varying the type of lattice, the length of the cell along the X, Y, and Z axes, and the thickness of the cell.
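For a rough sense of scale, a minimal Python sketch follows; it assumes a fully solid PE block with the 50 N load spread uniformly over the full 50 × 50 mm cross-section, so the lattice parts, being less stiff, should show strains of the same order or larger.

```python
# Back-of-the-envelope strain check for a *solid* PE block under the same load.
# Assumes the 50 N force acts uniformly over the full 50 x 50 mm cross-section.
force_N = 50.0
area_m2 = 0.050 * 0.050             # 50 mm x 50 mm cross-section
youngs_modulus_Pa = 1.1e9           # PE, from Table 3

strain = (force_N / area_m2) / youngs_modulus_Pa
print(f"solid-block strain: {strain:.3e}")  # ~1.82e-5, same order as 2.38985e-5 above
```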

2.4. Pre-Processing: Converting and Splitting the Dataset

Every ML algorithm follows similar steps to obtain a prediction model. Once the dataset is generated, it needs to be pre-processed in order for a statistical model to tackle the real-world issue [48]. Pre-processing the dataset for ML algorithms prepares a subset of the data for training purposes. The dataset for this study had two types of inputs: lattice type was a categorical input, and the four numerical inputs were cell X, Y, and Z-lengths, as well as wall thickness. As the first stage of pre-processing, the categorical input, i.e., lattice type, was converted into numerical inputs using the One-Hot encoding method. In this method, the categorical input is expanded into as many binary columns as there are categories, and each training sample is assigned exactly one of them. For the numerical inputs, normalizing the data to scale the inputs into the same range facilitates faster prediction models and obviates numerical overflow. Equation (1) was used to normalize the dataset in order to realize a standard normal distribution [48].
$$x_{sj} = \frac{x_j - \mu_j}{\sigma_j} \quad (1)$$
Here, $x_{sj}$ is the scaled data point of input j; $\mu_j$ is the average of input j; and $\sigma_j$ is the standard deviation of input j. Table 4 shows a part of the converted and normalized dataset used in this study.
The next stage in the pre-processing step was splitting shuffled datasets into training and testing datasets. The purpose of the training dataset is to train the ML algorithms to establish the correlations between input and output data points; the testing dataset is used to evaluate the model developed using the training dataset [49]. In this study, for each ML algorithm, 80% and 20% of datasets were considered as training and testing datasets, dividing the entire dataset into two groups of 288 and 72 data points, respectively.
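A minimal pre-processing sketch with pandas and scikit-learn is shown below; the file and column names are illustrative assumptions, not taken from the authors’ code, and the later snippets in this section build on the variables defined here.

```python
# Pre-processing sketch: One-Hot encoding, 80/20 split, and standardization.
# "lattice_dataset.csv" and the column names are hypothetical placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("lattice_dataset.csv")              # hypothetical file name

# One-Hot encode the categorical lattice type (29 binary columns).
X = pd.get_dummies(df[["lattice_type", "X", "Y", "Z", "thickness"]],
                   columns=["lattice_type"], dtype=float)
y = df["strain"]

# Shuffle and split 80%/20%: 288 training and 72 testing points out of 360.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, shuffle=True, random_state=42)

# Standardize the numerical inputs per Equation (1), fitting the mean and
# standard deviation on the training set only.
num_cols = ["X", "Y", "Z", "thickness"]
scaler = StandardScaler()
X_train[num_cols] = scaler.fit_transform(X_train[num_cols])
X_test[num_cols] = scaler.transform(X_test[num_cols])
```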

2.5. Training and Testing Datasets

There are two key methodologies in ML: supervised learning and unsupervised learning. In supervised learning, the algorithm is trained using labeled data and generates predictions for new, unseen data; in unsupervised learning, it independently discovers relationships among the inputs in unlabeled data [50]. In this paper, since the five lattice parameters were varied to read associated strains as the output, the problem was considered supervised learning. Additionally, because the strain output is numerical, the applied ML algorithms must follow the rules associated with regression problems. The following sections describe the five ML algorithms that were evaluated in this study.
The ML algorithms were run on a system configuration consisting of an 11th Gen Intel® Core™ i5 processor with four cores and a clock speed of 2.40 GHz, as well as 8.00 GB of RAM, running the Microsoft Windows 11 Home operating system. The ML algorithms were implemented in the Python programming language, version 3.9.13, in the Jupyter Notebook environment, utilizing the Sklearn, Tensorflow, and Keras libraries. The hyperparameters for the ANN were tuned using the GridSearchCV module in the Jupyter Notebook environment.

2.5.1. Linear Regression

Linear regression (LR) is a widely used statistical technique that models the correlation between given inputs and numerical outputs. In supervised machine learning, LR models excel at discovering the optimal linear relationship between the predictors and the response variable, and their ease of interpretation makes them a preferred choice when a linear relationship is suspected or when a straightforward, computationally efficient regression model is sought [49]. If there are N samples with D inputs, the inputs can be expressed as $x_{ij}$, where i = 1, …, N indexes the samples and j = 1, …, D indexes the inputs, and the true (target) output values are $y_i$. The LR model utilizes the function expressed by Equation (2) [48].
$$f_{w,b}(X) = wX + \varepsilon \quad (2)$$
where $f_{w,b}(X)$ is the prediction function, X is the D-dimensional vector of inputs, w is the D-dimensional vector of coefficients, and $\varepsilon$ is the total error. The aim is to find $f_{w,b}(X)$ by adjusting w so as to minimize $\varepsilon$ [48].
Squared error loss is a particular loss function that measures the penalty for mismatched predictions and is commonly used in ML algorithms. In model-based learning algorithms, the objective is to minimize the cost function to find the best prediction model. The cost function for the LR model is determined by the average loss, i.e., the average of all penalties obtained by using the model on the training data. Therefore, the smaller the $\varepsilon$ in Equation (2), the smaller the error of the prediction model. The various cost functions for evaluating the models learned by the algorithms are described in Section 3.
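Continuing from the pre-processing sketch, a minimal LR fit with scikit-learn might look as follows.

```python
# Fitting the LR baseline with scikit-learn, reusing X_train/X_test from above.
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

lin_reg = LinearRegression()
lin_reg.fit(X_train, y_train)           # least-squares fit of w and the intercept
print("train MSE:", mean_squared_error(y_train, lin_reg.predict(X_train)))
print("test MSE:", mean_squared_error(y_test, lin_reg.predict(X_test)))
```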

2.5.2. Polynomial Regression

Using a straight line to represent the relationship between the inputs and any outputs is insufficient for non-linear relationships. In such cases, exploring non-linear relationships between variables can result in a better model [51]. Polynomial regression (PR) is a useful method for capturing non-linear patterns in data by incorporating polynomial terms, thereby extending the capabilities of linear regression. The PR model is commonly employed to incorporate higher-order terms of the input parameters (independent variables), thereby facilitating a more comprehensive examination of non-linear associations within the dataset [52]. Hence, PR models should be able to better capture the true correlation between input/output parameters in our dataset. This model can be expressed by the following Equation (3).
$$f_{w,b}(X) = w_0 + w_1 X + w_2 X^2 + w_3 X^3 + \cdots + w_p X^p + \varepsilon \quad (3)$$
Increasing the degree of the polynomial in the equation makes the model more complex which, in turn, can lead to overfitting [51]. Overfitting occurs when a model trained on a particular dataset shows high accuracy but performs poorly when tested on a new dataset [51]. It is therefore important to be cautious when using this ML algorithm; however, the models in this study did not exhibit overfitting, as indicated by the small errors for both the training and testing datasets. Here, the PR model showed the lowest error for a polynomial degree of p = 1.
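A sketch of this degree search with a scikit-learn pipeline is shown below; it mirrors, under assumed settings, the sweep that identified p = 1 as optimal.

```python
# Polynomial regression sketch: expand the inputs into polynomial terms, then
# fit a linear model on them, sweeping the degree.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

for degree in (1, 2, 3):
    model = make_pipeline(PolynomialFeatures(degree=degree), LinearRegression())
    model.fit(X_train, y_train)
    print(degree, mean_squared_error(y_test, model.predict(X_test)))

# Refit at the reported optimum, p = 1 (equivalent to plain linear regression).
poly_reg = make_pipeline(PolynomialFeatures(degree=1),
                         LinearRegression()).fit(X_train, y_train)
```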

2.5.3. Decision Tree

The Decision Tree (DT) ML model can map inputs to output. The tree predicts the label of a data point by following a path from the root node to a leaf node. The root node, situated at the highest level of the decision tree, serves as the initial point of data division. This term refers to the complete set of data that is utilized for training purposes. A leaf node is the terminal or final node in a decision tree; it represents a specific numerical value in regression problems, which has been assigned to the data instance that reaches this node. At each node along the path, the tree uses a splitting rule to decide which child node to follow. The splitting rule typically involves checking the value of a particular input of the samples or applying a set of predefined rules [53].
DT is used for both classification and regression problems. In this study, DT for regression, commonly known as a regression tree, was applied. This is used for predicting continuous target variables. The process of building a regression tree involves binary recursive partitioning, which involves iteratively splitting data into partitions based on a selected splitting rule that minimizes the sum of squared deviations from the mean in the resulting subgroups. Initially, all training set records are grouped into the same partition; the algorithm then selects the best split for each partition based on the minimum sum of squared deviations [48].
DT is widely recognized for its versatility and proficiency in managing non-linear and non-monotonic relationships present in data. Consequently, it is highly regarded as a valuable tool for the identification of essential features. These trees employ sophisticated split decisions and suitable stopping criteria, facilitating efficient decision making, event forecasting, and identification of consequences [54]. In this study, the maximum depth of trees considered was 35; however, the best tree with the minimum mean squared error (MSE) was found at a maximum depth of 9.
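A minimal sketch of that depth search with scikit-learn follows, assuming the same train/test arrays as above.

```python
# Regression tree sketch: sweep max_depth up to 35 and keep the depth with
# the lowest test MSE; a depth of 9 was found best in this study.
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error

best_depth, best_mse = None, float("inf")
for depth in range(1, 36):
    candidate = DecisionTreeRegressor(max_depth=depth, random_state=0)
    candidate.fit(X_train, y_train)
    mse = mean_squared_error(y_test, candidate.predict(X_test))
    if mse < best_mse:
        best_depth, best_mse = depth, mse

tree = DecisionTreeRegressor(max_depth=best_depth,
                             random_state=0).fit(X_train, y_train)
print("best depth:", best_depth, "test MSE:", best_mse)
```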

2.5.4. Random Forest

Random Forest (RF) was selected for this study primarily based on its robust predictive abilities. It exhibits a high degree of versatility as it can be effectively employed in both regression and classification tasks, rendering it a viable option for a wide range of data-analysis purposes. The RF algorithm is an ensemble technique that combines multiple decision trees, consolidates their predictions, and reduces overfitting. This approach provides several advantages, including enhanced robustness, resilience to outliers, and improved generalization capabilities compared to individual decision trees [55].
RF reduces correlation among trees by preventing strong predictors from dominating the splits across multiple trees. In other words, the algorithm creates trees that are as independent of each other as possible. This is achieved by randomly selecting subsets of inputs and samples for each tree so that each tree learns to make predictions based on different combinations of inputs and samples. By doing so, the trees become less correlated and produce more diverse predictions, which can improve the accuracy and robustness of the RF model [48].
RF prediction considers individual trees that produce models with low variance and reduced risk of overfitting. This technique is widely used in ensemble learning [48]. In this research, an investigation was conducted to find out the maximum depth of trees for the RF algorithm. A depth of nine was found to be optimal for achieving the highest performance based on evaluation metrics such as MSE, mean absolute error (MAE), and root mean square error (RMSE). These findings suggest that the choice of hyperparameters, such as the maximum depth of trees, can significantly impact the effectiveness of the RF algorithm.
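A corresponding RF sketch is given below; the tree count of 100 is an assumption, as the number of estimators is not stated in the paper.

```python
# Random Forest sketch: an ensemble of decision trees grown on bootstrapped
# samples and random feature subsets; max_depth=9 mirrors the optimum above.
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

forest = RandomForestRegressor(n_estimators=100,   # assumed tree count
                               max_depth=9, random_state=0)
forest.fit(X_train, y_train)
print("test MSE:", mean_squared_error(y_test, forest.predict(X_test)))
```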

2.5.5. Artificial Neural Network

Artificial Neural Network (ANN) is a computational model inspired by the structure of neural networks in the brain. The network consists of a large number of interconnected computing devices called neurons, which carry out complex computations. A neural network is represented as a directed graph with neurons as nodes and edges as links between them. Neurons receive inputs from connected neurons and produce outputs that are passed on to other connected neurons [53].
A feedforward neural network, also known as a multi-layer perceptron, passes information in one direction, from the input layer through a stack of hidden layers to a single output layer. Each neuron in each layer is associated with an activation function; the activation function of the last layer, which has only one neuron, determines the type of model. A linear activation function yields a regression model, which is used to predict numerical values; a logistic activation function yields a binary classification model, which is used to sort data into two classes. The type of model is selected based on the problem definition [48,53].
The ANN possesses significant efficacy in tackling intricate engineering problems due to its capacity to represent complex, non-linear associations within data. With recent advancements in computing and algorithms, ANNs have been extensively employed to predict system behavior, and they have proven highly effective, particularly in scenarios involving non-linear behavior. These computational systems are inspired by the biological neural networks found in the human brain and consist of artificial neurons that receive and process input signals using mathematical operations. ANNs are highly suitable for tasks that involve extensive datasets and the automatic acquisition of feature representations, and they are highly adaptable for diverse applications [56,57].
This study employed the Grid Search method and Cross-Validation technique to optimize the hyperparameters of ANN. The utilization of this cross-validation technique can yield a more dependable estimation of a model’s performance compared to a solitary train-test split. Cross-validation can be employed to identify overfitting by evaluating the model on different subsets of the data [56]. The training dataset was subjected to a cross-validation approach using a 5-fold method.
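The sketch below illustrates the idea with an explicit 5-fold loop over two of the hyperparameters; it is written directly against Keras rather than the GridSearchCV wrapper the authors used, and the grid shown is only a subset of Table 5.

```python
# 5-fold cross-validated grid search sketch over learning rate and
# hidden-layer width; the full grid of Table 5 extends these loops to
# activation, batch size, and epochs.
import numpy as np
from sklearn.model_selection import KFold
from tensorflow import keras

def build_ann(n_inputs, n_hidden, rate, activation="linear"):
    model = keras.Sequential([
        keras.layers.Dense(n_hidden, activation=activation,
                           input_shape=(n_inputs,)),
        keras.layers.Dense(1),                    # single strain output
    ])
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=rate),
                  loss="mse")
    return model

X_arr, y_arr = X_train.to_numpy(), y_train.to_numpy()
kfold = KFold(n_splits=5, shuffle=True, random_state=0)

for rate in (0.001, 0.005, 0.01):
    for n_hidden in (3, 4, 5):
        fold_mse = []
        for tr_idx, val_idx in kfold.split(X_arr):
            model = build_ann(X_arr.shape[1], n_hidden, rate)
            model.fit(X_arr[tr_idx], y_arr[tr_idx],
                      epochs=200, batch_size=2, verbose=0)
            fold_mse.append(model.evaluate(X_arr[val_idx], y_arr[val_idx],
                                           verbose=0))
        print(f"rate={rate}, neurons={n_hidden}: CV MSE={np.mean(fold_mse):.3e}")
```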
The hyperparameters that were considered for the ANN in this study, along with their respective values, are provided in Table 5. The number of hidden layers was also considered for tuning; however, the findings of this study indicate that model performance did not exhibit a substantial enhancement when the number of hidden layers was increased, likely due to the limited size of the dataset. Accordingly, this study exclusively employed a single hidden layer in its analysis.
The learning rate is a pivotal hyperparameter in a predictive model and should be prioritized for tuning. It determines the magnitude of the optimizer’s increments when modifying the weights of the network during training; both the size of the weight updates and the rate at which the network converges to the optimal solution are influenced by the learning rate [58]. The remaining hyperparameters, optimized sequentially, were the activation function, the batch size, the number of epochs, and the number of neurons within the hidden layer.
The results of hyperparameter optimization for ANN applied in the prediction of strain in AM are displayed in Table 6. This table presents the optimal values of the pre-determined hyperparameters that were evaluated in this study, leading to the selection of the most effective predictive model. The results emphasize the significance of precise hyperparameter selection and tuning in ANN models in order to attain optimal performance.
In this paper, the hyperparameters of the ANN were fine-tuned; these included the number of layers, the number of neurons, the activation function for each layer, the learning rate, the batch size, and the number of epochs (i.e., full training cycles). Through experimentation, the optimal configuration was determined to be one hidden layer with 3 neurons, a linear activation function for the hidden layer, a learning rate of 0.001, a batch size of 2, and 200 epochs, as shown in Figure 3. The 33 neurons in the input layer correspond to the number of inputs after One-Hot encoding (29 lattice-type columns plus the four numerical parameters listed in Table 1). These findings highlight the importance of carefully selecting and tuning the hyperparameters of ANN models to achieve optimal performance.
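Rebuilt from the reported settings in Table 6, the tuned network might look like the following Keras sketch (not the authors’ original script).

```python
# The tuned configuration of Table 6: one hidden layer of 3 linear neurons
# feeding a single output neuron, trained with Adam at a 0.001 learning
# rate, batch size 2, and 200 epochs.
from tensorflow import keras

ann = keras.Sequential([
    keras.layers.Dense(3, activation="linear",
                       input_shape=(X_train.shape[1],)),  # 33 inputs after encoding
    keras.layers.Dense(1),                                # single strain output
])
ann.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001), loss="mse")
ann.fit(X_train.to_numpy(), y_train.to_numpy(),
        epochs=200, batch_size=2, verbose=0)
```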

3. Error Metrics for ML Models

In statistical analysis, it is commonplace to use measures of error or accuracy to evaluate the performance of a predictive model. This study utilized three error metrics to assess the performance of a regression model: mean squared error (MSE), root mean squared error (RMSE), and mean absolute error (MAE).
MSE is a widely used measure of error in regression analysis; it calculates the average of the squared differences between predicted and actual values. The formula for MSE is provided in Equation (4):
$$\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}(y_i - \hat{y}_i)^2 \quad (4)$$
where n is the sample size, $y_i$ is the actual value, and $\hat{y}_i$ is the predicted value [53].
RMSE is the square root of MSE. A lower RMSE indicates a better fit between the predicted and actual values, meaning that the model has a higher degree of accuracy in estimating the dependent variable; however, the interpretation of the RMSE also depends on the scale of the dependent variable [59]. The formula for RMSE is expressed in Equation (5):
$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(y_i - \hat{y}_i)^2} \quad (5)$$
MAE is another commonly used measure of error in regression analysis. Similar to RMSE, the interpretation of the MAE also depends on the scale of the dependent variable. It measures the average of the absolute differences between the predicted and actual values [60]. The formula for MAE is represented by Equation (6):
$$\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left| y_i - \hat{y}_i \right| \quad (6)$$
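As a usage sketch, the three metrics can be computed for the ANN’s test predictions with scikit-learn and NumPy.

```python
# Computing the error metrics of Equations (4)-(6) for the trained ANN.
import numpy as np
from sklearn.metrics import mean_squared_error, mean_absolute_error

y_pred = ann.predict(X_test.to_numpy()).ravel()
mse = mean_squared_error(y_test, y_pred)      # Equation (4)
rmse = np.sqrt(mse)                           # Equation (5)
mae = mean_absolute_error(y_test, y_pred)     # Equation (6)
print(f"MSE={mse:.3e}  RMSE={rmse:.3e}  MAE={mae:.3e}")
```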

4. Evaluation of ML Models

For analyzing and interpreting results, it is crucial to report the values of these error metrics to demonstrate the performance of a model. Selecting which measures to report depends on the research question, the type of data, and the specific analysis conducted. The results of training five ML algorithms concerning their error metrics for the training and testing phases are shown in Table 7.
The results presented in Table 7 indicate that the prediction errors for the metrics MSE, RMSE, and MAE are remarkably low during both the training and testing phases for all ML models. This leads to a dependable and credible model for each of the ML algorithms used. Furthermore, the LR and PR algorithms exhibit identical accuracy levels. This can be attributed to the fact that the optimal polynomial degree obtained with the PR algorithm is 1, which implies that the best model for our dataset under the PR algorithm is a linear model, identical to that of the LR algorithm.
The Taylor diagram is a visual aid that is used to compare models or observations to a reference dataset in terms of correlation, variability, and bias on a single chart [61]. The diagram is constructed using a polar coordinate system, with the actual dataset depicted as a reference point. Each model is plotted as a point on the diagram, with the radial distance from the origin representing its standard deviation relative to that of the actual dataset and the angular position representing its correlation with the actual dataset. The distance between a model and the actual dataset is visualized by arcs of constant RMSE; the closer a point is to the reference point, the better the model’s performance [61]. The Taylor diagram in Figure 4 illustrates the results of the comparison between the five ML algorithms in this study. The diagram plots the actual point, which represents the standard deviation of the test dataset, and each algorithm is represented by a point in the plot. The algorithm whose point is closest to the actual point on the diagram is ANN. As this plot demonstrates, ANN’s correlation is 0.93, followed by DT at 0.74. This indicates that the ANN model has a high correlation with the actual data in this research, and its RMSE is close to zero. Therefore, the Taylor diagram suggests that the ANN algorithm outperformed the rest of the ML algorithms in this study. As mentioned before, LR and PR provide identical outcomes, so their overlap in the Taylor diagram is also evident.
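The quantities the diagram encodes can be computed directly; the NumPy sketch below derives, for one model’s predictions, the correlation, the standard deviation ratio, and the centered RMSE that the arcs represent.

```python
# Taylor-diagram statistics for the ANN's test predictions.
import numpy as np

r = np.corrcoef(y_test, y_pred)[0, 1]            # correlation with the actual data
std_actual, std_pred = np.std(y_test), np.std(y_pred)
# Law-of-cosines relationship underlying the Taylor diagram geometry:
crmse = np.sqrt(std_actual**2 + std_pred**2 - 2 * std_actual * std_pred * r)
print(f"r={r:.2f}, std ratio={std_pred / std_actual:.2f}, cRMSE={crmse:.3e}")
```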
Nevertheless, while the correlation of ANN may be satisfactory, there are various approaches that can enhance the predictive capabilities of ML models. This study employed hyperparameter tuning as a means to reduce overfitting and enhance the accuracy of the models; however, an additional method to enhance prediction accuracy is to collect more data. The utilization of a broader and more representative sample in training models helps to alleviate the issue of overfitting. Furthermore, the process of identifying and selecting the features that are most relevant has the potential to enhance the precision of the prediction. The simplification of the model and enhancement of accuracy can be achieved by eliminating irrelevant or redundant features [62].
This research also utilized the relative error box plot to evaluate the accuracy of ML algorithms in predicting a model. This plot measures the percentage difference between the predicted value and true value. This is an important tool for assessing the precision of a model’s predictions. It can be used to compare different ML algorithms for a given dataset [63].
In this study, the box plot of relative error for each ML algorithm is presented in Figure 5, with the results indicating that the median value for the ANN algorithm was the lowest in comparison with the other ML models. The ANN algorithm also exhibited a smaller interquartile range, indicating that its errors were more consistent across different data points. Conversely, the LR and PR algorithms had several error values falling outside the whiskers of the box (shown as diamonds), indicating difficulties in accurately predicting certain types of data points. Additionally, the narrower box plot of the ANN algorithm suggests a more tightly clustered error distribution compared to the other algorithms.
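A matplotlib sketch of such a plot, assuming the fitted models from the earlier snippets, is shown below.

```python
# Relative-error box plot sketch: percentage difference between predicted
# and actual strain per test point, for each fitted model above.
import numpy as np
import matplotlib.pyplot as plt

y_true = y_test.to_numpy()
predictions = {
    "LR": lin_reg.predict(X_test),
    "PR": poly_reg.predict(X_test),
    "DT": tree.predict(X_test),
    "RF": forest.predict(X_test),
    "ANN": y_pred,
}
rel_err = {name: 100 * np.abs(p - y_true) / np.abs(y_true)
           for name, p in predictions.items()}

plt.boxplot(list(rel_err.values()), labels=list(rel_err.keys()))
plt.ylabel("Relative error (%)")
plt.show()
```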
Moreover, Figure 6 illustrates a comparative analysis of the ML algorithms, focusing on their performance in predicting actual values throughout the training and testing phases. It shows the superior performance of ANN compared to the other algorithms: the ANN model consistently demonstrates the highest level of agreement between observed and forecasted values, confirming its effectiveness as the preferred algorithm for precise predictions within this framework, ahead of the other ML algorithms in this research. This visual evidence underscores the importance of this study’s findings and the potential implications of employing ANN in practical scenarios that require accurate prediction.
The SHapley Additive exPlanations (SHAP) method, initially proposed by Lundberg and Lee [64], was also utilized in this study to determine the individual contribution of each feature. This methodology, based on co-operative game theory, improves the clarity and comprehensibility of ML models [65]. In order to evaluate the importance of features within the entire dataset, this study employed a beeswarm plot. As depicted in Figure 7a, the variables are organized by their global feature importance, with the most significant variables positioned at the top and the least significant at the bottom. With the given dataset and the best ANN model in this study, it was observed that the lattice structure feature had a significant positive effect when its values were high, while its impact was relatively minor and negative when the values were low. The influence of the Z-axis feature on strain predictions was found to be minimal, regardless of whether its values were high or low. The lattice structure feature is shown in a different color than the other features in Figure 7 because it is a categorical feature while the others are numerical.
Furthermore, the bar plot depicted in Figure 7b orders the features by their absolute SHAP values, regardless of whether their impact on the predictions is positive or negative. In conclusion, the most important features for strain in this study are, in descending order, lattice structure, thickness, Y, X, and Z.
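A sketch of this analysis with the shap package follows; KernelExplainer treats the trained network as a black box, and the two summary plots correspond to the beeswarm and bar views of Figure 7 (the exact explainer the authors used is not stated).

```python
# SHAP sketch for the trained ANN (assumes the shap package is installed).
import shap

background = shap.sample(X_train.to_numpy(), 50)   # background sample for the explainer
explainer = shap.KernelExplainer(lambda x: ann.predict(x).ravel(), background)
shap_values = explainer.shap_values(X_test.to_numpy())

shap.summary_plot(shap_values, X_test, plot_type="dot")  # beeswarm view (Figure 7a)
shap.summary_plot(shap_values, X_test, plot_type="bar")  # mean |SHAP| view (Figure 7b)
```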

5. Conclusions

In conclusion, this study successfully developed a strain prediction model for designing lattice structures for AM-processed ordered foam materials using ML algorithms. First, a dataset of 360 data points was generated from 29 types of lattice structures by varying the thickness and cell size of those structures along the X, Y, and Z axes. Then, by utilizing that dataset and employing supervised learning methods in ML with regression models, the study was able to accurately predict the mechanical deformation of the lattice structures, namely, strain. The study compared the performance of five ML algorithms (Linear Regression, Polynomial Regression, Decision Tree, Random Forest, and Artificial Neural Network) and found that the ANN algorithm outperformed the others. Evaluation metrics such as mean squared error, root mean squared error, and mean absolute error showed remarkably low prediction errors during both the training and testing phases, indicating a dependable and credible model for each of the ML algorithms used. Visualization through the Taylor diagram and the relative error box plot, together with the comparison between actual and predicted values for the training and testing phases, further confirmed the superiority of the ANN algorithm. Moreover, this study used the SHAP method to evaluate feature importance across the dataset and its contribution to the predictions, which showed that lattice structure had a significant positive effect when its values were high, while the Z-axis had minimal influence. Overall, the results of this study have important implications for the development of accurate and reliable strain prediction models for lattice structures in AM, which could contribute to improving the quality and efficiency of AM processes in various industries.

Author Contributions

Conceptualization, M.A.H.K.; software, M.J.H. and C.S.-U.-Z.; methodology, C.S.-U.-Z. and M.J.H.; model simulation, C.S.-U.-Z.; data generation and collection, C.S.-U.-Z.; data curation, C.S.-U.-Z. and M.J.H.; validation, M.J.H. and M.A.H.K.; formal analysis, M.J.H.; investigation, M.A.H.K.; resources, M.J.H. and C.S.-U.-Z.; writing—original draft preparation, M.J.H. and C.S.-U.-Z.; writing—review and editing, M.J.H., C.S.-U.-Z. and M.A.H.K.; visualization, M.J.H. and C.S.-U.-Z.; supervision, M.A.H.K.; project administration, M.A.H.K.; funding acquisition, M.A.H.K. All authors have read and agreed to the published version of the manuscript.

Funding

Natural Sciences and Engineering Research Council of Canada (NSERC) Grant Number: RGPIN-2023-04388.

Data Availability Statement

Data beyond results discussed in this article can be made available upon request to the corresponding author.

Acknowledgments

The authors thank Abdul Bais for guidance on machine learning algorithms.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Altstädt, V.; Krausch, G. Special Issue—Polymer Foams. Polymer 2015, 56, 3–4. [Google Scholar] [CrossRef]
  2. Mills, N.J. Polymer Foams Handbook: Engineering and Biomechanics Applications and Design Guide; Elsevier: Amsterdam, The Netherlands, 2007. [Google Scholar]
  3. Lee, S.-T.; Park, C.B.; Ramesh, N.S. Polymeric Foams; Taylor & Francis and CRC: Abingdon, UK, 2006; Volume 11, ISBN 9780849330759. [Google Scholar]
  4. Zhang, Y.Z.; Zhang, G.C.; Sun, X.F.; He, Z.; Rajaratnam, M. Properties and Microstructure Study of Polyimide Foam Plastic. Cell. Polym. 2010, 29, 211–225. [Google Scholar] [CrossRef]
  5. Guo, Q.; Shi, D.; Yang, C.; Wu, G. Preparation of Polymer-Based Foam for Efficient Oil–Water Separation Based on Surface Engineering. Soft Matter 2022, 18, 3041–3051. [Google Scholar] [CrossRef]
  6. Pachori, S.; Sarkar, A.; Dutta, A.; Palanivelu, J.; Chidambaram, R. Synthesis Methods of Starch-Based Polymer Foams and Its Comparison with Conventional Polymer Foams for Food Packaging Applications. In Polymers for Agri-Food Applications; Springer International Publishing: Cham, Switzerland, 2019; pp. 317–338. [Google Scholar]
  7. Liu, S.; Duvigneau, J.; Vancso, G.J. Nanocellular Polymer Foams as Promising High Performance Thermal Insulation Materials. Eur. Polym. J. 2015, 65, 33–45. [Google Scholar] [CrossRef]
  8. Zambotti, A.; Ionescu, E.; Gargiulo, N.; Caputo, D.; Vakifahmetoglu, C.; Santhosh, B.; Biesuz, M.; Sorarù, G.D. Processing of Polymer-Derived, Aerogel-Filled, SiC Foams for High-temperature Insulation. J. Am. Ceram. Soc. 2023, 106, 4891–4901. [Google Scholar] [CrossRef]
  9. Mills, N. Polymer Foams for Personal Protection: Cushions, Shoes and Helmets. Compos. Sci. Technol. 2003, 63, 2389–2400. [Google Scholar] [CrossRef]
  10. Koohbor, B.; Kidane, A.; Lu, W.-Y.; Sutton, M.A. Investigation of the Dynamic Stress–Strain Response of Compressible Polymeric Foam Using a Non-Parametric Analysis. Int. J. Impact Eng. 2016, 91, 170–182. [Google Scholar] [CrossRef]
  11. Kossa, A.; Berezvai, S. Visco-Hyperelastic Characterization of Polymeric Foam Materials. Mater. Today Proc. 2016, 3, 1003–1008. [Google Scholar] [CrossRef]
  12. Koohbor, B.; Kidane, A.; Lu, W.-Y. Effect of Specimen Size, Compressibility and Inertia on the Response of Rigid Polymer Foams Subjected to High Velocity Direct Impact Loading. Int. J. Impact Eng. 2016, 98, 62–74. [Google Scholar] [CrossRef]
  13. Ikeda, Y. Preparation and Properties of Graded Styrene-butadiene Rubber Vulcanizates. J. Polym. Sci. B Polym. Phys. 2002, 40, 358–364. [Google Scholar] [CrossRef]
  14. Gupta, N. A Functionally Graded Syntactic Foam Material for High Energy Absorption under Compression. Mater. Lett. 2007, 61, 979–982. [Google Scholar] [CrossRef]
  15. Higuchi, M.; Adachi, T.; Yokochi, Y.; Fujimoto, K. Controlling of Distribution of Mechanical Properties in Functionally-Graded Syntactic Foams for Impact Energy Absorption. Mater. Sci. Forum 2012, 706–709, 729–734. [Google Scholar] [CrossRef]
  16. Suethao, S.; Shah, D.U.; Smitthipong, W. Recent Progress in Processing Functionally Graded Polymer Foams. Materials 2020, 13, 4060. [Google Scholar] [CrossRef]
  17. Duan, Y.; Ding, Y.; Liu, Z.; Hou, N.; Zhao, X.; Liu, H.; Zhao, Z.; Hou, B.; Li, Y. Effects of Cell Size vs. Cell-Wall Thickness Gradients on Compressive Behavior of Additively Manufactured Foams. Compos. Sci. Technol. 2020, 199, 108339. [Google Scholar] [CrossRef]
  18. Mannella, G.A.; Conoscenti, G.; Carfì Pavia, F.; La Carrubba, V.; Brucato, V. Preparation of Polymeric Foams with a Pore Size Gradient via Thermally Induced Phase Separation (TIPS). Mater. Lett. 2015, 160, 31–33. [Google Scholar] [CrossRef]
  19. Gracovetsky, S.; Farfan, H. The Optimum Spine. Spine 1986, 11, 543–573. [Google Scholar] [CrossRef] [PubMed]
  20. Denninger, M.; Martel, F.; Rancourt, D. A Single Step Process to Design a Custom Mattress That Relieves Trunk Shear Forces. Int. J. Mech. Mater. Des. 2011, 7, 1–16. [Google Scholar] [CrossRef]
  21. Haex, B. Back and Bed: Ergonomic Aspects of Sleeping; SciTech Book News; CRC Press: Boca Raton, FL, USA, 2005. [Google Scholar]
  22. Pinzur, M.S. Levin and O’Neal’s The Diabetic Foot. 6th Ed. J. Bone Jt. Surg. Am. Vol. 2001, 83, 641–642. [Google Scholar] [CrossRef]
  23. Jeffcoate, W.J.; Harding, K.G. Diabetic Foot Ulcers. Lancet 2003, 361, 1545–1551. [Google Scholar] [CrossRef]
  24. Shimazaki, Y.; Nozu, S.; Inoue, T. Shock-Absorption Properties of Functionally Graded EVA Laminates for Footwear Design. Polym. Test. 2016, 54, 98–103. [Google Scholar] [CrossRef]
  25. Petre, M.T.; Erdemir, A.; Cavanagh, P.R. Determination of Elastomeric Foam Parameters for Simulations of Complex Loading. Comput. Methods Biomech. Biomed. Eng. 2006, 9, 231–242. [Google Scholar] [CrossRef]
  26. Even-Tzur, N.; Weisz, E.; Hirsch-Falk, Y.; Gefen, A. Role of EVA Viscoelastic Properties in the Protective Performance of a Sport Shoe: Computational Studies. Biomed. Mater. Eng. 2006, 16, 289–299. [Google Scholar]
  27. Duoss, E.B.; Weisgraber, T.H.; Hearon, K.; Zhu, C.; Small, W.; Metz, T.R.; Vericella, J.J.; Barth, H.D.; Kuntz, J.D.; Maxwell, R.S.; et al. Three-Dimensional Printing of Elastomeric, Cellular Architectures with Negative Stiffness. Adv. Funct. Mater. 2014, 24, 4905–4913. [Google Scholar] [CrossRef]
  28. Srivastava, V.; Srivastava, R. On the Polymeric Foams: Modeling and Properties. J. Mater. Sci. 2014, 49, 2681–2692. [Google Scholar] [CrossRef]
  29. Ashby, M.F. The Properties of Foams and Lattices. Philos. Trans. R. Soc. A Math. Phys. Eng. Sci. 2006, 364, 15–30. [Google Scholar] [CrossRef] [PubMed]
  30. Qin, D.; Sang, L.; Zhang, Z.; Lai, S.; Zhao, Y. Compression Performance and Deformation Behavior of 3D-Printed PLA-Based Lattice Structures. Polymers 2022, 14, 1062. [Google Scholar] [CrossRef] [PubMed]
  31. Sun, Z.P.; Guo, Y.B.; Shim, V.P.W. Characterisation and Modeling of Additively-Manufactured Polymeric Hybrid Lattice Structures for Energy Absorption. Int. J. Mech. Sci. 2021, 191, 106101. [Google Scholar] [CrossRef]
  32. Maloney, K.J.; Fink, K.D.; Schaedler, T.A.; Kolodziejska, J.A.; Jacobsen, A.J.; Roper, C.S. Multifunctional Heat Exchangers Derived from Three-Dimensional Micro-Lattice Structures. Int. J. Heat Mass Transf. 2012, 55, 2486–2493. [Google Scholar] [CrossRef]
  33. Alqahtani, S.; Ali, H.M.; Farukh, F.; Kandan, K. Experimental and Computational Analysis of Polymeric Lattice Structure for Efficient Building Materials. Appl. Therm. Eng. 2023, 218, 119366. [Google Scholar] [CrossRef]
  34. Ozdemir, Z.; Hernandez-Nava, E.; Tyas, A.; Warren, J.A.; Fay, S.D.; Goodall, R.; Todd, I.; Askes, H. Energy Absorption in Lattice Structures in Dynamics: Experiments. Int. J. Impact Eng. 2016, 89, 49–61. [Google Scholar] [CrossRef]
  35. Bonatti, C.; Mohr, D. Large Deformation Response of Additively-Manufactured FCC Metamaterials: From Octet Truss Lattices towards Continuous Shell Mesostructures. Int. J. Plast. 2017, 92, 122–147. [Google Scholar] [CrossRef]
  36. Sakib-Uz-Zaman, C.; Khondoker, M.A.H. Polymer-Based Additive Manufacturing for Orthotic and Prosthetic Devices: Industry Outlook in Canada. Polymers 2023, 15, 1506. [Google Scholar] [CrossRef] [PubMed]
  37. Khondoker, M.A.H.; Sameoto, D. Direct Coupling of Fixed Screw Extruders Using Flexible Heated Hoses for FDM Printing of Extremely Soft Thermoplastic Elastomers. Prog. Addit. Manuf. 2019, 4, 197–209. [Google Scholar] [CrossRef]
  38. Dinakaran, I.; Sakib-Uz-Zaman, C.; Rahman, A.; Khondoker, M.A.H. Controlling degree of foaming in extrusion 3D printing of porous polylactic acid. Rapid Prototyp. J. 2023, 29, 1958–1968. [Google Scholar] [CrossRef]
  39. Khondoker, M.A.H.; Asad, A.; Sameoto, D. Printing with mechanically interlocked extrudates using a custom bi-extruder for fused deposition modelling. Rapid Prototyp. J. 2018, 24, 921–934. [Google Scholar] [CrossRef]
  40. Kadirgama, K.; Harun, W.S.W.; Tarlochan, F.; Samykano, M.; Ramasamy, D.; Azir, M.Z.; Mehboob, H. Statistical and Optimize of Lattice Structures with Selective Laser Melting (SLM) of Ti6AL4V Material. Int. J. Adv. Manuf. Technol. 2018, 97, 495–510. [Google Scholar] [CrossRef]
  41. Habib, F.N.; Iovenitti, P.; Masood, S.H.; Nikzad, M. Fabrication of Polymeric Lattice Structures for Optimum Energy Absorption Using Multi Jet Fusion Technology. Mater. Des. 2018, 155, 86–98. [Google Scholar] [CrossRef]
  42. Xu, Y.; Zhang, H.; Gan, Y.; Šavija, B. Cementitious Composites Reinforced with 3D Printed Functionally Graded Polymeric Lattice Structures: Experiments and Modelling. Addit. Manuf. 2021, 39, 101887. [Google Scholar] [CrossRef]
  43. Ling, C.; Cernicchi, A.; Gilchrist, M.D.; Cardiff, P. Mechanical Behaviour of Additively-Manufactured Polymeric Octet-Truss Lattice Structures under Quasi-Static and Dynamic Compressive Loading. Mater. Des. 2019, 162, 106–118. [Google Scholar] [CrossRef]
  44. Muhammad, W.; Brahme, A.P.; Ibragimova, O.; Kang, J.; Inal, K. A Machine Learning Framework to Predict Local Strain Distribution and the Evolution of Plastic Anisotropy & Fracture in Additively Manufactured Alloys. Int. J. Plast. 2021, 136, 102867. [Google Scholar] [CrossRef]
  45. Olakanmi, E.O.; Cochrane, R.F.; Dalgarno, K.W. A Review on Selective Laser Sintering/Melting (SLS/SLM) of Aluminium Alloy Powders: Processing, Microstructure, and Properties. Prog. Mater. Sci. 2015, 74, 401–477. [Google Scholar] [CrossRef]
  46. Liu, J.; To, A.C. Quantitative Texture Prediction of Epitaxial Columnar Grains in Additive Manufacturing Using Selective Laser Melting. Addit. Manuf. 2017, 16, 58–64. [Google Scholar] [CrossRef]
  47. Engineering Data Sources: Ansys Workbench R2 2022. Available online: https://www.ansys.com/products/ansys-workbench (accessed on 24 November 2022).
  48. Vemuri, V.K. The Hundred-Page Machine Learning Book. J. Inf. Technol. Case Appl. Res. 2020, 22, 136–138. [Google Scholar] [CrossRef]
  49. Md, A.Q.; Kulkarni, S.; Joshua, C.J.; Vaichole, T.; Mohan, S.; Iwendi, C. Enhanced Preprocessing Approach Using Ensemble Machine Learning Algorithms for Detecting Liver Disease. Biomedicines 2023, 11, 581. [Google Scholar] [CrossRef] [PubMed]
  50. Jordan, M.I.; Mitchell, T.M. Machine Learning: Trends, Perspectives, and Prospects. Science 2015, 349, 255–260. [Google Scholar] [CrossRef] [PubMed]
  51. Rawlings, J.O.; Pantula, S.G.; Dickey, D.A. (Eds.) Applied Regression Analysis; Springer: New York, NY, USA, 1998; ISBN 0-387-98454-2. [Google Scholar]
  52. Bhattacharya, S.; Kalita, K.; Čep, R.; Chakraborty, S. A Comparative Analysis on Prediction Performance of Regression Models during Machining of Composite Materials. Materials 2021, 14, 6689. [Google Scholar] [CrossRef]
  53. Shalev-Shwartz, S.; Ben-David, S. Understanding Machine Learning; Cambridge University Press: Cambridge, UK, 2014; ISBN 9781107057135. [Google Scholar]
  54. Shaban, M.; Alsharekh, M.F.; Alsunaydih, F.N.; Alateyah, A.I.; Alawad, M.O.; BaQais, A.; Kamel, M.; Nassef, A.; El-Hadek, M.A.; El-Garaihy, W.H. Investigation of the Effect of ECAP Parameters on Hardness, Tensile Properties, Impact Toughness, and Electrical Conductivity of Pure Cu through Machine Learning Predictive Models. Materials 2022, 15, 9032. [Google Scholar] [CrossRef]
  55. Trzepieciński, T.; Najm, S.M.; Ibrahim, O.M.; Kowalik, M. Analysis of the Frictional Performance of AW-5251 Aluminium Alloy Sheets Using the Random Forest Machine Learning Algorithm and Multilayer Perceptron. Materials 2023, 16, 5207. [Google Scholar] [CrossRef]
  56. Masoudi Nejad, R.; Sina, N.; Ma, W.; Liu, Z.; Berto, F.; Gholami, A. Optimization of Fatigue Life of Pearlitic Grade 900A Steel Based on the Combination of Genetic Algorithm and Artificial Neural Network. Int. J. Fatigue 2022, 162, 106975. [Google Scholar] [CrossRef]
  57. Chai, M.; Liu, P.; He, Y.; Han, Z.; Duan, Q.; Song, Y.; Zhang, Z. Machine Learning-based Approach for Fatigue Crack Growth Prediction Using Acoustic Emission Technique. Fatigue Fract. Eng. Mater. Struct. 2023, 46, 2784–2797. [Google Scholar] [CrossRef]
  58. Greff, K.; Srivastava, R.K.; Koutnik, J.; Steunebrink, B.R.; Schmidhuber, J. LSTM: A Search Space Odyssey. IEEE Trans. Neural Netw. Learn. Syst. 2017, 28, 2222–2232. [Google Scholar] [CrossRef]
  59. Robert, C. Machine Learning, a Probabilistic Perspective. Chance 2014, 27, 62–63. [Google Scholar] [CrossRef]
  60. Botchkarev, A. A New Typology Design of Performance Metrics to Measure Errors in Machine Learning Regression Algorithms. Interdiscip. J. Inf. Knowl. Manag. 2019, 14, 45–76. [Google Scholar] [CrossRef] [PubMed]
  61. Taylor, K.E. Summarizing Multiple Aspects of Model Performance in a Single Diagram. J. Geophys. Res. Atmos. 2001, 106, 7183–7192. [Google Scholar] [CrossRef]
  62. Pudjihartono, N.; Fadason, T.; Kempa-Liehr, A.W.; O’Sullivan, J.M. A Review of Feature Selection Methods for Machine Learning-Based Disease Risk Prediction. Front. Bioinform. 2022, 2, 927312. [Google Scholar] [CrossRef]
  63. Sim, C.H.; Gan, F.F.; Chang, T.C. Outlier Labeling with Boxplot Procedures. J. Am. Stat. Assoc. 2005, 100, 642–652. [Google Scholar] [CrossRef]
  64. Lundberg, S.; Lee, S.-I. A Unified Approach to Interpreting Model Predictions. In Proceedings of the NIPS’17: Proceedings of the 31st International Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017. [Google Scholar]
  65. Chai, M.; He, Y.; Li, Y.; Song, Y.; Zhang, Z.; Duan, Q. Machine Learning-Based Framework for Predicting Creep Rupture Life of Modified 9Cr-1Mo Steel. Appl. Sci. 2023, 13, 4972. [Google Scholar] [CrossRef]
Figure 1. Flow chart of the design process.
Figure 2. (a) Solid body after applying the boundary conditions; (b) Solid model after the simulation showing strain distribution; (c) Scale bar.
Figure 3. Schematic of the best ANN model for this study.
Figure 4. Taylor Diagram.
Figure 5. Relative error box plot.
Figure 6. Comparison between predicted vs. actual values of training and test models.
Figure 7. The importance of each feature using SHAP with (a) beeswarm plot and (b) bar plot of absolute SHAP values.
Table 1. Lattice parameters and the levels used in designing lattice structures.

Lattice Parameter | Number of Levels | Point Levels
Lattice Types | 29 | Six (6) WTPMS unit cells and twenty-three (23) graph unit cells
Cell X-length | 3 | 20 mm, 25 mm, and 30 mm
Cell Y-length | 3 | 20 mm, 25 mm, and 30 mm
Cell Z-length | 3 | 20 mm, 25 mm, and 30 mm
Wall Thickness | 3 | 2 mm, 3 mm, and 4 mm
Table 2. Twenty-nine (29) different types of lattice unit cells from nTop’s library (unit-cell images not reproduced).

WTPMS Unit Cells (6): Gyroid; Schwarz; Diamond—WTPMS; Lidinoid; Split P; Neovius.
Graph Unit Cells (23): Simple Cubic; Body-Centered Cubic; Face Centered Cubic; Column; Columns; Fluorite; Octate; Truncated Cube; Truncated Octahedron; Kelvin Cell; Isotruss; Re-entrant; Weaire-Phelan; Square Honeycomb; Square Honeycomb Rotated; Triangular Honeycomb; Triangular Honeycomb Rotated; Hexagonal Honeycomb; Re-entrant Honeycomb; Face Centered Cubic Foam; Body-Centered Cubic Foam; Simple Cubic Foam; Diamond—Graph.
Table 3. Material properties of polyethylene (PE).

Property | Value
Young’s Modulus | 1.1 × 10⁹ Pa
Poisson’s ratio | 0.42
Density | 0.00095 g/mm³
Table 4. The converted and normalized dataset (excerpt; 16 of the One-Hot lattice-type columns are shown, abbreviated by two-letter codes).

Index | BC BF CL CS DG DW FC FF FL GD SZ TC TH TO TR WP | X | Y | Z | Thickness
0 | 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 | −1.523839 | −0.746602 | −0.861621 | −0.856103
1 | 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 | −1.523839 | −0.746602 | −0.861621 | −0.856103
2 | 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 | −1.523839 | −0.746602 | −0.861621 | −0.856103
3 | 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 | −1.523839 | −0.746602 | −0.861621 | −0.856103
4 | 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 | −1.523839 | −0.746602 | −0.861621 | −0.856103
… | … | … | … | … | …
355 | 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 | −1.523839 | −0.746602 | 1.299937 | 0.551189
356 | 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 | −0.376178 | −0.746602 | −0.861621 | −0.856103
357 | 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 | 0.771484 | 0.427095 | 1.299937 | 1.958481
358 | 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 | −0.376178 | 0.427095 | −0.861621 | 1.958481
359 | 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 | −0.376178 | 0.427095 | 1.299937 | 1.958481
Table 5. Tuned hyperparameters for ANN and their values.

Hyperparameter | Values
Learning rate | 0.001, 0.005, 0.01, 0.02, 0.04, 0.05
Activation function for the hidden layer | softmax, softplus, softsign, relu, tanh, sigmoid, hard_sigmoid, linear
Number of batches | 1, 2, 3, 4, 5, 6, 7
Number of epochs | 200, 300, 400, 500
Number of neurons for the hidden layer | 3, 4, 5, 6, 10
Table 6. Values of the best model for hyperparameters of ANN.

Hyperparameter | Value
Learning rate | 0.001
Activation function for the hidden layer | linear
Number of batches | 2
Number of epochs | 200
Number of neurons for the hidden layer | 3
Table 7. The results of the evaluation metrics.

Model | Training MSE | Training RMSE | Training MAE | Testing MSE | Testing RMSE | Testing MAE
LR | 1.264 × 10⁻⁶ | 1.124 × 10⁻³ | 6.677 × 10⁻⁴ | 8.624 × 10⁻⁶ | 2.937 × 10⁻³ | 1.191 × 10⁻³
PR | 1.264 × 10⁻⁶ | 1.124 × 10⁻³ | 6.677 × 10⁻⁴ | 8.624 × 10⁻⁶ | 2.937 × 10⁻³ | 1.191 × 10⁻³
DT | 2.007 × 10⁻⁷ | 4.480 × 10⁻⁴ | 2.262 × 10⁻⁴ | 6.659 × 10⁻⁶ | 2.580 × 10⁻³ | 9.644 × 10⁻⁴
RF | 1.467 × 10⁻⁷ | 3.830 × 10⁻⁴ | 1.761 × 10⁻⁴ | 8.103 × 10⁻⁶ | 2.847 × 10⁻³ | 9.873 × 10⁻⁴
ANN | 1.133 × 10⁻⁷ | 3.366 × 10⁻⁴ | 1.925 × 10⁻⁴ | 2.453 × 10⁻⁶ | 1.566 × 10⁻³ | 4.653 × 10⁻⁶
