Unbalanced data loading for multi-task learning in PyTorch | by Omri Bar | Towards Data Science
Working on multi-task learning (MTL) problems requires a unique training setup, mainly in terms of data handling, model architecture, and performance evaluation metrics. In this post I review the data handling part: specifically, how to train a multi-task learning model on multiple datasets, and how to handle tasks with a highly unbalanced dataset. I will describe my suggestion in three steps:

1. Combining two (or more) datasets into a single PyTorch Dataset. This dataset will be the input for a PyTorch DataLoader.
2. Modifying the batch preparation process to produce either one task in each batch or, alternatively, a mix of samples from both tasks in each batch.
3. Handling the highly unbalanced datasets at the batch level by using a batch sampler as part of the DataLoader.

I review only Dataset- and DataLoader-related code, ignoring other important modules such as the model, optimizer, and metric definitions. For simplicity, I use a generic two-dataset example; however, the number of datasets and the type of data should not affect the main setup. We can even use several instances of the same dataset, in case we have more than one set of labels for the same set of samples: for example, a dataset of images labeled with both an object class and a spatial location, or a face dataset with both an emotion and an age label per image.

A PyTorch Dataset class needs to implement the __getitem__() function, which handles sample fetching and preparation for a given index. When using two datasets, it is therefore possible to have two different methods of creating samples.
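As a hedged sketch of two such Dataset classes and their combination (the class names and exact tensor construction here are illustrative assumptions, consistent with the datasets described below, not necessarily the post's original code):

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader, Dataset

class MyFirstDataset(Dataset):
    """Ten samples: five -1s and five +1s (balanced)."""
    def __init__(self):
        self.samples = torch.cat((-torch.ones(5), torch.ones(5)))

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, index):
        return self.samples[index]

class MySecondDataset(Dataset):
    """55 samples: fifty 5s and five -5s (highly unbalanced)."""
    def __init__(self):
        self.samples = torch.cat((torch.full((50,), 5.0),
                                  torch.full((5,), -5.0)))

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, index):
        return self.samples[index]

# ConcatDataset joins both datasets; its cumulative_sizes attribute
# records where each internal dataset's index range begins and ends
concat_dataset = ConcatDataset([MyFirstDataset(), MySecondDataset()])
dataloader = DataLoader(concat_dataset, batch_size=8, shuffle=True)
```

The cumulative sizes tracked by ConcatDataset are what a custom sampler can later use to translate per-dataset indices into global indices.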
Hence, we can even use a single dataset, get samples with different labels, and change the sample processing scheme (the output samples should have the same shape, since we stack them into a batch tensor).

First, let's define two datasets to work with. We define two (binary) datasets: one with ten samples of ±1 (equally distributed), and a second with 55 samples, 50 of the digit 5 and 5 of the digit -5. These datasets are only for illustration. In a real dataset you would have both samples and labels, probably read from a database or parsed from data folders, but these simple datasets are enough to understand the main concepts.

Next, we need to define a DataLoader. We provide it with our concat_dataset and set the loader parameters, such as the batch size and whether or not to shuffle the samples. The output of this part looks like:

tensor([ 5., 5., 5., 5., -5., 5., -5., 5.])
tensor([5., 5., 5., 5., 5., 5., 5., 5.])
tensor([-1., -5., 5., 1., 5., -1., 5., -1.])
tensor([5., 5., 5., 5., 5., 5., 5., 5.])
tensor([ 5., 5., 5., 5., -5., 1., 5., 5.])
tensor([ 5., 5., 5., 1., 5., 5., 5., -1.])
tensor([ 5., 5., 5., 5., -1., 5., 1., 5.])
tensor([ 5., -5., 1., 5., 5., 5., 5., 5.])
tensor([5.])

Each batch is a tensor of 8 samples from our concat_dataset; the order is random, and samples are picked from the combined pool.

Until now, everything was relatively straightforward: the datasets are combined into a single one, and samples are randomly picked from both of the original datasets to construct each mini-batch. Now let's try to control the composition of each batch. We want each mini-batch to contain samples from only one dataset, switching between the datasets every other batch.

This is the job of the BatchSchedulerSampler class, which creates a new sample iterator: first by creating a RandomSampler for each internal dataset, and second by pulling samples (actually sample indices) from each internal dataset's iterator in turn.
Thus it builds a new list of sample indices. Using a batch size of 8 means that from each dataset we need to fetch 8 samples at a time.

Now let's run and print the samples using a new DataLoader, which receives our BatchSchedulerSampler as its sampler (shuffle cannot be set to True when working with a custom sampler). The output now looks like this:

tensor([-1., -1., 1., 1., -1., 1., 1., -1.])
tensor([5., 5., 5., 5., 5., 5., 5., 5.])
tensor([ 1., -1., -1., -1., 1., 1., -1., 1.])
tensor([5., 5., 5., 5., 5., 5., 5., 5.])
tensor([-1., -1., 1., 1., 1., -1., 1., -1.])
tensor([ 5., 5., -5., 5., 5., -5., 5., 5.])
tensor([ 1., 1., -1., -1., 1., -1., 1., 1.])
tensor([5., 5., 5., 5., 5., 5., 5., 5.])
tensor([-1., -1., -1., -1., 1., 1., 1., -1.])
tensor([ 5., -5., 5., 5., 5., 5., -5., 5.])
tensor([-1., 1., -1., 1., -1., 1., 1., -1.])
tensor([ 5., 5., 5., 5., 5., -5., 5., 5.])
tensor([ 1., -1., -1., 1., 1., 1., 1., -1.])
tensor([5., 5., 5., 5., 5., 5., 5.])

Hurray! Each mini-batch now contains samples from only one dataset. We can play with this type of scheduling in order to downsample or upsample the more important tasks.

The remaining problem in our batches comes from the second, highly unbalanced dataset. This is often the case in MTL: there is a main task and a few satellite sub-tasks, and training them together can improve performance and contribute to the generalization of the overall model. The problem is that the samples of the sub-tasks are often very sparse, with only a few positive (or negative) samples. Let's use our previous logic while also forcing each batch to be balanced with respect to the distribution of samples in each task.

To handle the imbalance, we need to replace the random sampler in the BatchSchedulerSampler class with an ImbalancedDatasetSampler (I am using a great implementation from this repository). This class handles the balancing of a dataset. We can also mix approaches, using RandomSampler for some tasks and ImbalancedDatasetSampler for others.
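The batch-scheduling logic described earlier can be sketched as follows. This is a reconstruction with illustrative names, not the post's exact code; note that the per-task sampler construction is the single place where RandomSampler could be swapped for a balancing sampler:

```python
import math
import torch
from torch.utils.data import (ConcatDataset, DataLoader, RandomSampler,
                              Sampler, TensorDataset)

class BatchSchedulerSampler(Sampler):
    """Yield indices so each mini-batch comes from one task,
    alternating between tasks every batch."""
    def __init__(self, dataset, batch_size):
        self.dataset = dataset
        self.batch_size = batch_size
        self.num_datasets = len(dataset.datasets)
        self.largest = max(len(d) for d in dataset.datasets)

    def __len__(self):
        return (self.num_datasets * self.batch_size
                * math.ceil(self.largest / self.batch_size))

    def __iter__(self):
        samplers = [RandomSampler(d) for d in self.dataset.datasets]
        iterators = [iter(s) for s in samplers]
        # offset of each internal dataset inside the ConcatDataset
        offsets = [0] + self.dataset.cumulative_sizes[:-1]
        indices = []
        for _ in range(math.ceil(self.largest / self.batch_size)):
            for task in range(self.num_datasets):
                for _ in range(self.batch_size):  # samples_to_grab
                    try:
                        idx = next(iterators[task])
                    except StopIteration:
                        # smaller datasets restart within the epoch
                        iterators[task] = iter(samplers[task])
                        idx = next(iterators[task])
                    indices.append(offsets[task] + idx)
        return iter(indices)

# two toy tasks: balanced +/-1 and unbalanced 5/-5
ds1 = TensorDataset(torch.cat((-torch.ones(5), torch.ones(5))))
ds2 = TensorDataset(torch.cat((torch.full((50,), 5.0),
                               torch.full((5,), -5.0))))
concat_dataset = ConcatDataset([ds1, ds2])
sampler = BatchSchedulerSampler(concat_dataset, batch_size=8)
# shuffle must stay False (the default) when a sampler is supplied
loader = DataLoader(concat_dataset, sampler=sampler, batch_size=8)
```

Because indices are appended in per-task groups equal to the batch size, the DataLoader's batching lines up with the schedule and every batch is pure.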
We first create ExampleImbalancedDatasetSampler, which inherits from ImbalancedDatasetSampler and modifies only the _get_label function to fit our use case. Next, we use BalancedBatchSchedulerSampler, which is similar to the previous BatchSchedulerSampler class but replaces RandomSampler with ExampleImbalancedDatasetSampler for the unbalanced task. Let's run the new DataLoader. The output looks like:

tensor([-1., 1., 1., -1., -1., -1., 1., -1.])
tensor([ 5., 5., 5., 5., -5., -5., -5., -5.])
tensor([ 1., 1., 1., -1., 1., -1., 1., 1.])
tensor([ 5., -5., 5., -5., -5., -5., 5., 5.])
tensor([-1., -1., 1., -1., -1., -1., -1., 1.])
tensor([-5., 5., 5., 5., 5., -5., 5., -5.])
tensor([-1., -1., 1., 1., 1., 1., -1., -1.])
tensor([-5., 5., 5., 5., 5., -5., 5., 5.])
tensor([ 1., -1., 1., 1., 1., -1., 1., -1.])
tensor([ 5., 5., 5., -5., 5., -5., 5., 5.])
tensor([-1., -1., -1., -1., 1., 1., 1., 1.])
tensor([-5., 5., 5., 5., 5., 5., -5., 5.])
tensor([-1., 1., -1., 1., 1., 1., 1., 1.])
tensor([-5., -5., 5., 5., -5., -5., 5.])

The mini-batches of the unbalanced task are now much more balanced.

There is a lot of room to play with this setup even further. We can combine the tasks within a single batch: setting samples_to_grab to 4, half of the batch size, yields mixed mini-batches with 4 samples taken from each task. To favor a more important task, we can set samples_to_grab=2 for the first task and samples_to_grab=6 for the second, a 1:3 ratio toward the more important task.

That's it. The full code can be downloaded from my repository.
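As a closing sketch: since the ImbalancedDatasetSampler implementation itself lives in the linked repository, a minimal stand-in built on PyTorch's own WeightedRandomSampler reproduces the per-class balancing idea for the unbalanced toy task. This substitute, and its inverse-frequency weighting, are my own, not the repository's code:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

torch.manual_seed(0)  # make the illustration reproducible

# the unbalanced toy task: fifty 5s, five -5s
labels = torch.cat((torch.full((50,), 5.0), torch.full((5,), -5.0)))
dataset = TensorDataset(labels)

# weight every sample by the inverse frequency of its class, so both
# classes are drawn with roughly equal probability
_, inverse, counts = labels.unique(return_inverse=True, return_counts=True)
weights = (1.0 / counts.float())[inverse]
sampler = WeightedRandomSampler(weights, num_samples=len(labels),
                                replacement=True)

loader = DataLoader(dataset, batch_size=8, sampler=sampler)
epoch = torch.cat([batch[0] for batch in loader])
# both classes now appear in roughly equal proportion across the epoch
```

Plugging a sampler like this into the scheduler in place of RandomSampler, per task, is all that BalancedBatchSchedulerSampler has to do.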
File uploading in Node.js - GeeksforGeeks
14 Jan, 2020

Introduction: File uploading means a user from a client machine requests to upload a file to the server. For example, users can upload images, videos, etc. on Facebook, Instagram, etc.

Features of the Multer module: Files can be uploaded to the server using the Multer module. There are other modules on the market, but Multer is very popular when it comes to file uploading. Multer is a Node.js middleware for handling multipart/form-data, and it is the most widely used library for uploading files. Its main advantages:

1. It's easy to get started and easy to use.
2. It is a widely used and popular module for file uploading.
3. Users can upload either single or multiple files at a time.

Note: Multer will process only those forms which are multipart (multipart/form-data). So whenever you use Multer, make sure your form is multipart.

Installation of the Multer module: You can visit the link Install multer module. You can install this package by using this command:

npm install multer

After installing Multer, you can check your Multer version in the command prompt using the command:

npm version multer

After that, create a folder and add a file, for example index.js. To run this file you need to run the following command:

node index.js

Requiring the module: You need to include the Multer module in your file using this line:

var multer = require('multer');

Multer basically adds a file (or files) object and a body object to the request object. The file/files object contains all the files uploaded through the form, while all the values of the form's text fields are contained in the body object.
This is how Multer binds the data whenever a form is submitted.

Filename: Signup.ejs

```html
<!DOCTYPE html>
<html>
<head>
    <title>FILE UPLOAD DEMO</title>
</head>
<body>
    <h1>Single File Upload Demo</h1>
    <form action="/uploadProfilePicture"
          enctype="multipart/form-data"
          method="POST">
        <span>Upload Profile Picture:</span>
        <input type="file" name="mypic" required />
        <br>
        <input type="submit" value="submit">
    </form>
</body>
</html>
```

Filename: index.js

```javascript
const express = require("express")
const path = require("path")
const multer = require("multer")
const app = express()

// View Engine Setup
app.set("views", path.join(__dirname, "views"))
app.set("view engine", "ejs")

// var upload = multer({ dest: "Upload_folder_name" })
// If you do not want to use diskStorage, uncomment the line above instead

var storage = multer.diskStorage({
    destination: function (req, file, cb) {
        // "uploads" is the Upload_folder_name
        cb(null, "uploads")
    },
    filename: function (req, file, cb) {
        cb(null, file.fieldname + "-" + Date.now() + ".jpg")
    }
})

// Define the maximum size for an uploaded
// picture, i.e. 1 MB. It is optional
const maxSize = 1 * 1000 * 1000;

var upload = multer({
    storage: storage,
    limits: { fileSize: maxSize },
    fileFilter: function (req, file, cb) {
        // Restrict the file types; it is optional
        var filetypes = /jpeg|jpg|png/;
        var mimetype = filetypes.test(file.mimetype);
        var extname = filetypes.test(
            path.extname(file.originalname).toLowerCase());
        if (mimetype && extname) {
            return cb(null, true);
        }
        cb("Error: File upload only supports the " +
           "following filetypes - " + filetypes);
    }
    // "mypic" is the name of the file input attribute
}).single("mypic");

app.get("/", function (req, res) {
    res.render("Signup");
})

app.post("/uploadProfilePicture", function (req, res, next) {
    // Error middleware for the multer file upload: if any
    // error occurs, the image is not uploaded
    upload(req, res, function (err) {
        if (err) {
            // An error occurred (for example an image larger
            // than 1 MB or a disallowed file type)
            res.send(err)
        } else {
            // Success, image uploaded
            res.send("Success, Image uploaded!")
        }
    })
})

// Take any port number of your choice which
// is not taken by any other process
app.listen(8080, function (error) {
    if (error) throw error
    console.log("Server created Successfully on PORT 8080")
})
```

Steps to run the program:

1. Set up the project structure: "uploads" is the folder where our files will be uploaded (initially empty), and "Signup.ejs" is kept in the views folder.
2. Make sure a view engine is set (here I have used "ejs"), and install ejs, express, and multer using the following commands:
   npm install ejs
   npm install express
   npm install multer
3. Run the index.js file using the command:
   node index.js
4. Open a browser and go to this URL: http://localhost:8080/. You will see the signup form.
5. Choose a file to upload and click the submit button. If an error occurs (for example, a file that is too large or of the wrong type), an error message is displayed; otherwise the success message is shown.
6. If the upload succeeds, you can go to the uploads folder and see your uploaded image.

So this is how you can upload a file in Node.js using the Multer module. There are other modules on the market for file uploading, such as fileupload and express-fileupload.
14 Jan, 2020

Introduction: File uploading means a user from a client machine requests to upload a file to the server. For example, users can upload images, videos, etc. on Facebook, Instagram, etc.

Features of the Multer module: Files can be uploaded to the server using the Multer module. There are other modules on the market, but Multer is very popular when it comes to file uploading. Multer is a Node.js middleware for handling multipart/form-data and is the most widely used library for uploading files.

Note: Multer will process only those forms which are multipart (multipart/form-data). So whenever you use Multer, make sure you put multipart in the form.

- It's easy to get started and easy to use.
- It is a widely used and popular module for file uploading.
- Users can upload either a single file or multiple files at a time.

Installation of the Multer module:

You can visit the link Install multer module. You can install this package by using this command.

npm install multer

After installing Multer, you can check your Multer version in the command prompt using the command.

npm version multer

After that, you can just create a folder and add a file, for example index.js. To run this file you need to run the following command.

node index.js

Requiring module: You need to include the Multer module in your file by using this line.

var multer = require('multer');

So Multer basically adds a file object or files object and a body object to the request object. The file/files object contains all the files which are uploaded through the form, and all the values of the text fields of the form are contained in the body object. This is how Multer binds the data whenever a form is submitted.

Filename: Signup.ejs

<!DOCTYPE html>
<html>

<head>
    <title>FILE UPLOAD DEMO</title>
</head>

<body>
    <h1>Single File Upload Demo</h1>
    <form action="/uploadProfilePicture"
          enctype="multipart/form-data" method="POST">
        <span>Upload Profile Picture:</span>
        <input type="file" name="mypic" required />
        <br>
        <input type="submit" value="submit">
    </form>
</body>

</html>

Filename: index.js

const express = require("express")
const path = require("path")
const multer = require("multer")
const app = express()

// View Engine Setup
app.set("views", path.join(__dirname, "views"))
app.set("view engine", "ejs")

// var upload = multer({ dest: "Upload_folder_name" })
// If you do not want to use diskStorage then uncomment it

var storage = multer.diskStorage({
    destination: function (req, file, cb) {
        // Uploads is the Upload_folder_name
        cb(null, "uploads")
    },
    filename: function (req, file, cb) {
        cb(null, file.fieldname + "-" + Date.now() + ".jpg")
    }
})

// Define the maximum size for uploading
// picture i.e. 1 MB. it is optional
const maxSize = 1 * 1000 * 1000;

var upload = multer({
    storage: storage,
    limits: { fileSize: maxSize },
    fileFilter: function (req, file, cb) {
        // Set the filetypes, it is optional
        var filetypes = /jpeg|jpg|png/;
        var mimetype = filetypes.test(file.mimetype);
        var extname = filetypes.test(path.extname(
            file.originalname).toLowerCase());
        if (mimetype && extname) {
            return cb(null, true);
        }
        cb("Error: File upload only supports the "
            + "following filetypes - " + filetypes);
    }
// mypic is the name of file attribute
}).single("mypic");

app.get("/", function (req, res) {
    res.render("Signup");
})

app.post("/uploadProfilePicture", function (req, res, next) {
    // Error middleware for multer file upload, so if any
    // error occurs, the image would not be uploaded!
    upload(req, res, function (err) {
        if (err) {
            // ERROR occurred (here it can occur due
            // to uploading an image of size greater than
            // 1 MB or uploading a different file type)
            res.send(err)
        }
        else {
            // SUCCESS, image successfully uploaded
            res.send("Success, Image uploaded!")
        }
    })
})

// Take any port number of your choice which
// is not taken by any other process
app.listen(8080, function (error) {
    if (error) throw error
    console.log("Server created Successfully on PORT 8080")
})

Steps to run the program:

The project structure will look like this: here "uploads" is the folder where our files will be uploaded (currently it is empty), and "Signup.ejs" is kept in the views folder.

Make sure you have a 'view engine' (I have used "ejs"), and also install express and multer using the following commands:

npm install ejs
npm install express
npm install multer

Run the index.js file using the below command:

node index.js

Open a browser and type this URL:

http://localhost:8080/

Then you will see the Signup form. Choose a file to be uploaded and click on the submit button. If an error occurs (for example, a file larger than 1 MB or of a different file type), the error message will be displayed; if no error occurs, the success message "Success, Image uploaded!" will be displayed. If the file upload is successful, you can go to the uploads folder and see your uploaded image.

So this is how you can upload a file in Node.js using the Multer module. There are other modules on the market for file uploading, like fileupload, express-fileupload, etc.
What is MySQL GENERATED COLUMN and how to use it while creating a table?
Generated columns are a feature that can be used in CREATE TABLE or ALTER TABLE statements; they are a way of storing data without actually sending it through an INSERT or UPDATE clause in SQL. This feature was added in MySQL 5.7. A generated column works within the table domain. Its syntax is as follows:

column_name data_type [GENERATED ALWAYS] AS (expression)
[VIRTUAL | STORED] [UNIQUE [KEY]]

Here, first of all, specify the column name and its data type. Then add the GENERATED ALWAYS clause to indicate that the column is a generated column. Then indicate the type of the generated column using the corresponding option: VIRTUAL or STORED. By default, MySQL uses VIRTUAL if you don't explicitly specify the type of the generated column. After that, specify the expression within parentheses after the AS keyword. The expression can contain literals, built-in functions with no parameters, operators, or references to any column within the same table. If you use a function, it must be scalar and deterministic. Finally, if the generated column is stored, you can define a unique constraint for it.
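To make the VIRTUAL/STORED distinction concrete, here is a small illustrative sketch (the table and column names are hypothetical, not taken from this article): a VIRTUAL column is computed when it is read and occupies no storage, while a STORED column is materialized on INSERT/UPDATE, which is what allows it to carry a UNIQUE constraint or an index.

```sql
-- Hypothetical table illustrating both kinds of generated columns.
CREATE TABLE parts(
    part_no VARCHAR(10) NOT NULL,
    plant   VARCHAR(10) NOT NULL,
    price   DECIMAL(8,2),
    -- VIRTUAL: computed on read, no storage used
    price_with_tax DECIMAL(10,2)
        GENERATED ALWAYS AS (price * 1.18) VIRTUAL,
    -- STORED: computed on write, so it can be UNIQUE/indexed
    full_code VARCHAR(21)
        GENERATED ALWAYS AS (CONCAT(plant, '-', part_no)) STORED UNIQUE
);
```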
In this example, we are creating a table named employee_data holding the details of employees, along with a generated column, as follows:

mysql> Create table employee_data(ID INT AUTO_INCREMENT PRIMARY KEY, First_name VARCHAR(50) NOT NULL, Last_name VARCHAR(50) NOT NULL, FULL_NAME VARCHAR(90) GENERATED ALWAYS AS(CONCAT(First_name,' ',Last_name)));
Query OK, 0 rows affected (0.55 sec)

mysql> DESCRIBE employee_data;
+------------+-------------+------+-----+---------+-------------------+
| Field      | Type        | Null | Key | Default | Extra             |
+------------+-------------+------+-----+---------+-------------------+
| ID         | int(11)     | NO   | PRI | NULL    | auto_increment    |
| First_name | varchar(50) | NO   |     | NULL    |                   |
| Last_name  | varchar(50) | NO   |     | NULL    |                   |
| FULL_NAME  | varchar(90) | YES  |     | NULL    | VIRTUAL GENERATED |
+------------+-------------+------+-----+---------+-------------------+
4 rows in set (0.00 sec)

mysql> INSERT INTO employee_data(first_name, Last_name) values('Yashpal','Sharma');
Query OK, 1 row affected (0.09 sec)

mysql> INSERT INTO employee_data(first_name, Last_name) values('Krishan','Kumar');
Query OK, 1 row affected (0.09 sec)

mysql> INSERT INTO employee_data(first_name, Last_name) values('Rakesh','Arora');
Query OK, 1 row affected (0.08 sec)

mysql> Select * from employee_data;
+----+------------+-----------+----------------+
| ID | First_name | Last_name | FULL_NAME      |
+----+------------+-----------+----------------+
|  1 | Yashpal    | Sharma    | Yashpal Sharma |
|  2 | Krishan    | Kumar     | Krishan Kumar  |
|  3 | Rakesh     | Arora     | Rakesh Arora   |
+----+------------+-----------+----------------+
3 rows in set (0.00 sec)
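As a follow-up sketch (not part of the original example): a generated column cannot be assigned a value directly, but it is recomputed automatically whenever its base columns change. Assuming the employee_data table created above:

```sql
-- Updating a base column recomputes FULL_NAME automatically
UPDATE employee_data SET Last_name = 'Verma' WHERE ID = 3;
SELECT FULL_NAME FROM employee_data WHERE ID = 3;
-- -> 'Rakesh Verma'

-- Writing to the generated column directly is rejected;
-- MySQL raises an error along the lines of:
--   ERROR 3105 (HY000): The value specified for generated
--   column 'FULL_NAME' in table 'employee_data' is not allowed.
INSERT INTO employee_data(First_name, Last_name, FULL_NAME)
VALUES ('Amit', 'Singh', 'Wrong Name');
```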
Deletion from a Circular Linked List - GeeksforGeeks
23 Nov, 2021

We have already discussed the circular linked list and traversal in a circular linked list in the below articles:

Introduction to circular linked list
Traversal in a circular linked list

In this article, we will learn about deleting a node from a circular linked list. Consider the linked list as shown below:

We will be given a node, and our task is to delete that node from the circular linked list.

Examples:

Input : 2->5->7->8->10->(head node), data = 5
Output : 2->7->8->10->(head node)

Input : 2->5->7->8->10->(head node), data = 7
Output : 2->5->8->10->(head node)

Algorithm:

Case 1: The list is empty.
- If the list is empty, we simply return.

Case 2: The list is not empty.
- Define two pointers curr and prev, and initialize the pointer curr with the head node.
- Traverse the list using curr to find the node to be deleted, and before moving curr to the next node, set prev = curr each time.
- If the node is found, check whether it is the only node in the list. If yes, set head = NULL and free(curr).
- If the list has more than one node, check whether it is the first node of the list (condition to check this: curr == head). If yes, then move prev until it reaches the last node. After prev reaches the last node, set head = head->next and prev->next = head. Delete curr.
- If curr is not the first node, check whether it is the last node in the list (condition to check this: curr->next == head). If it is, set prev->next = head and delete the node curr by free(curr).
- If the node to be deleted is neither the first node nor the last node, then set prev->next = curr->next and delete curr.
Complete program to demonstrate deletion in Circular Linked List: C++14 C Java Python C# Javascript // C++ program to delete a given key from// linked list.#include <bits/stdc++.h>using namespace std; /* structure for a node */class Node {public: int data; Node* next;}; /* Function to insert a node at the beginning ofa Circular linked list */void push(Node** head_ref, int data){ // Create a new node and make head as next // of it. Node* ptr1 = new Node(); ptr1->data = data; ptr1->next = *head_ref; /* If linked list is not NULL then set the next of last node */ if (*head_ref != NULL) { // Find the node before head and update // next of it. Node* temp = *head_ref; while (temp->next != *head_ref) temp = temp->next; temp->next = ptr1; } else ptr1->next = ptr1; /*For the first node */ *head_ref = ptr1;} /* Function to print nodes in a givencircular linked list */void printList(Node* head){ Node* temp = head; if (head != NULL) { do { cout << temp->data << " "; temp = temp->next; } while (temp != head); } cout << endl;} /* Function to delete a given node from the list */void deleteNode(Node** head, int key){ // If linked list is empty if (*head == NULL) return; // If the list contains only a single node if((*head)->data==key && (*head)->next==*head) { free(*head); *head=NULL; return; } Node *last=*head,*d; // If head is to be deleted if((*head)->data==key) { // Find the last node of the list while(last->next!=*head) last=last->next; // Point last node to the next of head i.e. 
// the second node of the list last->next=(*head)->next; free(*head); *head=last->next; return; } // Either the node to be deleted is not found // or the end of list is not reached while(last->next!=*head&&last->next->data!=key) { last=last->next; } // If node to be deleted was found if(last->next->data==key) { d=last->next; last->next=d->next; free(d); } else cout<<"no such keyfound"; } /* Driver code */int main(){ /* Initialize lists as empty */ Node* head = NULL; /* Created linked list will be 2->5->7->8->10 */ push(&head, 2); push(&head, 5); push(&head, 7); push(&head, 8); push(&head, 10); cout << "List Before Deletion: "; printList(head); deleteNode(&head, 7); cout << "List After Deletion: "; printList(head); return 0;} // This is code is contributed by rathbhupendra // C program to delete a given key from// linked list.#include <stdio.h>#include <stdlib.h> /* structure for a node */struct Node { int data; struct Node* next;}; /* Function to insert a node at the beginning of a Circular linked list */void push(struct Node** head_ref, int data){ // Create a new node and make head as next // of it. struct Node* ptr1 = (struct Node*)malloc(sizeof(struct Node)); ptr1->data = data; ptr1->next = *head_ref; /* If linked list is not NULL then set the next of last node */ if (*head_ref != NULL) { // Find the node before head and update // next of it. 
struct Node* temp = *head_ref; while (temp->next != *head_ref) temp = temp->next; temp->next = ptr1; } else ptr1->next = ptr1; /*For the first node */ *head_ref = ptr1;} /* Function to print nodes in a given circular linked list */void printList(struct Node* head){ struct Node* temp = head; if (head != NULL) { do { printf("%d ", temp->data); temp = temp->next; } while (temp != head); } printf("\n");} /* Function to delete a given node from the list */void deleteNode(struct Node* head, int key){ if (head == NULL) return; // Find the required node struct Node *curr = head, *prev; while (curr->data != key) { if (curr->next == head) { printf("\nGiven node is not found" " in the list!!!"); break; } prev = curr; curr = curr->next; } // Check if node is only node if (curr->next == head) { head = NULL; free(curr); return; } // If more than one node, check if // it is first node if (curr == head) { prev = head; while (prev->next != head) prev = prev->next; head = curr->next; prev->next = head; free(curr); } // check if node is last node else if (curr->next == head && curr == head) { prev->next = head; free(curr); } else { prev->next = curr->next; free(curr); }} /* Driver code */int main(){ /* Initialize lists as empty */ struct Node* head = NULL; /* Created linked list will be 2->5->7->8->10 */ push(&head, 2); push(&head, 5); push(&head, 7); push(&head, 8); push(&head, 10); printf("List Before Deletion: "); printList(head); deleteNode(head, 7); printf("List After Deletion: "); printList(head); return 0;} // Java program to delete a given key from// linked list.class GFG { /* ure for a node */ static class Node { int data; Node next; }; /* Function to insert a node at the beginning ofa Circular linked list */ static Node push(Node head_ref, int data) { // Create a new node and make head as next // of it. 
Node ptr1 = new Node(); ptr1.data = data; ptr1.next = head_ref; /* If linked list is not null then set the next of last node */ if (head_ref != null) { // Find the node before head and update // next of it. Node temp = head_ref; while (temp.next != head_ref) temp = temp.next; temp.next = ptr1; } else ptr1.next = ptr1; /*For the first node */ head_ref = ptr1; return head_ref; } /* Function to print nodes in a givencircular linked list */ static void printList(Node head) { Node temp = head; if (head != null) { do { System.out.printf("%d ", temp.data); temp = temp.next; } while (temp != head); } System.out.printf("\n"); } /* Function to delete a given node from the list */ static Node deleteNode(Node head, int key) { if (head == null) return null; // Find the required node Node curr = head, prev = new Node(); while (curr.data != key) { if (curr.next == head) { System.out.printf("\nGiven node is not found" + " in the list!!!"); break; } prev = curr; curr = curr.next; } // Check if node is only node if (curr == head && curr.next == head) { head = null; return head; } // If more than one node, check if // it is first node if (curr == head) { prev = head; while (prev.next != head) prev = prev.next; head = curr.next; prev.next = head; } // check if node is last node else if (curr.next == head) { prev.next = head; } else { prev.next = curr.next; } return head; } /* Driver code */ public static void main(String args[]) { /* Initialize lists as empty */ Node head = null; /* Created linked list will be 2.5.7.8.10 */ head = push(head, 2); head = push(head, 5); head = push(head, 7); head = push(head, 8); head = push(head, 10); System.out.printf("List Before Deletion: "); printList(head); head = deleteNode(head, 7); System.out.printf("List After Deletion: "); printList(head); }} // This code is contributed by Arnab Kundu # Python program to delete a given key from# linked list. 
Python

# Node of a circular linked list
class Node:
    def __init__(self, next = None, data = None):
        self.next = next
        self.data = data

# Function to insert a node at the beginning of
# a Circular linked list
def push(head_ref, data):

    # Create a new node and make head as next
    # of it.
    ptr1 = Node()
    ptr1.data = data
    ptr1.next = head_ref

    # If linked list is not None then set the
    # next of last node
    if (head_ref != None):

        # Find the node before head and update
        # next of it.
        temp = head_ref
        while (temp.next != head_ref):
            temp = temp.next
        temp.next = ptr1
    else:
        ptr1.next = ptr1  # For the first node

    head_ref = ptr1
    return head_ref

# Function to print nodes in a given
# circular linked list
def printList(head):
    temp = head
    if (head != None):
        while(True):
            print(temp.data, end = " ")
            temp = temp.next
            if (temp == head):
                break
    print()

# Function to delete a given node from the list
def deleteNode(head, key):

    # If linked list is empty
    if (head == None):
        return None

    # If the list contains only a single node
    if (head.data == key and head.next == head):
        return None

    last = head
    d = None

    # If head is to be deleted
    if (head.data == key):

        # Find the last node of the list
        while (last.next != head):
            last = last.next

        # Point last node to the next of head i.e.
        # the second node of the list
        last.next = head.next
        head = last.next
        return head

    # Either the node to be deleted is not found
    # or the end of list is not reached
    while (last.next != head and last.next.data != key):
        last = last.next

    # If node to be deleted was found
    if (last.next.data == key):
        d = last.next
        last.next = d.next
    else:
        print("No such key found")

    return head

# Driver code

# Initialize list as empty
head = None

# Created linked list will be 2->5->7->8->10
head = push(head, 2)
head = push(head, 5)
head = push(head, 7)
head = push(head, 8)
head = push(head, 10)

print("List Before Deletion: ")
printList(head)

head = deleteNode(head, 7)

print("List After Deletion: ")
printList(head)

# This code is contributed by Arnab Kundu

C#

// C# program to delete a given key from
// linked list.
using System;

class GFG {

    /* Structure for a node */
    public class Node {
        public int data;
        public Node next;
    };

    /* Function to insert a node at the beginning of
    a Circular linked list */
    static Node push(Node head_ref, int data)
    {
        // Create a new node and make head as next
        // of it.
        Node ptr1 = new Node();
        ptr1.data = data;
        ptr1.next = head_ref;

        /* If linked list is not null then set the
        next of last node */
        if (head_ref != null) {

            // Find the node before head and update
            // next of it.
            Node temp = head_ref;
            while (temp.next != head_ref)
                temp = temp.next;
            temp.next = ptr1;
        }
        else
            ptr1.next = ptr1; /* For the first node */

        head_ref = ptr1;
        return head_ref;
    }

    /* Function to print nodes in a given
    circular linked list */
    static void printList(Node head)
    {
        Node temp = head;
        if (head != null) {
            do {
                Console.Write("{0} ", temp.data);
                temp = temp.next;
            } while (temp != head);
        }
        Console.Write("\n");
    }

    /* Function to delete a given node from the list */
    static Node deleteNode(Node head, int key)
    {
        if (head == null)
            return null;

        // Find the required node
        Node curr = head, prev = new Node();
        while (curr.data != key) {
            if (curr.next == head) {

                // Key is not present in the list
                Console.Write("\nGiven node is not found"
                              + " in the list!!!");
                return head;
            }

            prev = curr;
            curr = curr.next;
        }

        // Check if node is the only node
        if (curr.next == head && curr == head) {
            head = null;
            return head;
        }

        // If more than one node, check if
        // it is the first node
        if (curr == head) {
            prev = head;
            while (prev.next != head)
                prev = prev.next;
            head = curr.next;
            prev.next = head;
        }

        // Check if node is the last node
        else if (curr.next == head) {
            prev.next = head;
        }
        else {
            prev.next = curr.next;
        }
        return head;
    }

    /* Driver code */
    public static void Main(String[] args)
    {
        /* Initialize list as empty */
        Node head = null;

        /* Created linked list will be 2->5->7->8->10 */
        head = push(head, 2);
        head = push(head, 5);
        head = push(head, 7);
        head = push(head, 8);
        head = push(head, 10);

        Console.Write("List Before Deletion: ");
        printList(head);

        head = deleteNode(head, 7);

        Console.Write("List After Deletion: ");
        printList(head);
    }
}

// This code has been contributed by 29AjayKumar

Javascript

<script>
// javascript program to delete a given key from
// linked list.

/* Structure for a node */
class Node {
    constructor() {
        this.data = 0;
        this.next = null;
    }
}

/* Function to insert a node at the beginning of
   a Circular linked list */
function push(head_ref, data) {

    // Create a new node and make head as next
    // of it.
    var ptr1 = new Node();
    ptr1.data = data;
    ptr1.next = head_ref;

    /* If linked list is not null then set the
       next of last node */
    if (head_ref != null) {

        // Find the node before head and update
        // next of it.
        var temp = head_ref;
        while (temp.next != head_ref)
            temp = temp.next;
        temp.next = ptr1;
    } else
        ptr1.next = ptr1; /* For the first node */

    head_ref = ptr1;
    return head_ref;
}

/* Function to print nodes in a given circular linked list */
function printList(head) {
    var temp = head;
    if (head != null) {
        do {
            document.write(temp.data + " ");
            temp = temp.next;
        } while (temp != head);
    }
    document.write("<br/>");
}

/* Function to delete a given node from the list */
function deleteNode(head, key) {
    if (head == null)
        return null;

    // Find the required node
    var curr = head, prev = new Node();
    while (curr.data != key) {
        if (curr.next == head) {

            // Key is not present in the list
            document.write("<br/>Given node is not found" +
                           " in the list!!!");
            return head;
        }

        prev = curr;
        curr = curr.next;
    }

    // Check if node is the only node
    if (curr == head && curr.next == head) {
        head = null;
        return head;
    }

    // If more than one node, check if
    // it is the first node
    if (curr == head) {
        prev = head;
        while (prev.next != head)
            prev = prev.next;
        head = curr.next;
        prev.next = head;
    }

    // Check if node is the last node
    else if (curr.next == head) {
        prev.next = head;
    } else {
        prev.next = curr.next;
    }
    return head;
}

/* Driver code */

/* Initialize list as empty */
var head = null;

/* Created linked list will be 2->5->7->8->10 */
head = push(head, 2);
head = push(head, 5);
head = push(head, 7);
head = push(head, 8);
head = push(head, 10);

document.write("List Before Deletion: ");
printList(head);

head = deleteNode(head, 7);

document.write("List After Deletion: ");
printList(head);

// This code contributed by umadevi9616
</script>

List Before Deletion: 10 8 7 5 2 
List After Deletion: 10 8 5 2 

This article is contributed by Harsh Agarwal.
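The edge cases enumerated above (empty list, single node, deleting the head, a middle node, or the last node) are easy to get wrong. The following compact Python sketch — a re-implementation for illustration, not one of the article's listings — exercises the same deletion logic with all of those cases in mind:

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

def push(head, data):
    """Insert a node at the beginning of a circular list; return the new head."""
    node = Node(data)
    if head is None:
        node.next = node                 # a single node points to itself
    else:
        tail = head
        while tail.next is not head:     # find the last node
            tail = tail.next
        node.next = head
        tail.next = node
    return node

def delete_node(head, key):
    """Delete the first node holding key; return the (possibly new) head."""
    if head is None:                     # empty list
        return None
    if head.next is head:                # single-node list
        return None if head.data == key else head
    if head.data == key:                 # head is to be deleted
        tail = head
        while tail.next is not head:
            tail = tail.next
        tail.next = head.next            # last node skips the old head
        return head.next
    prev = head
    while prev.next is not head and prev.next.data != key:
        prev = prev.next
    if prev.next.data == key:            # found: unlink it
        prev.next = prev.next.next
    return head                          # key absent: list unchanged

def to_list(head):
    """Collect the values once around the circle, for easy checking."""
    if head is None:
        return []
    out, temp = [], head
    while True:
        out.append(temp.data)
        temp = temp.next
        if temp is head:
            break
    return out

head = None
for v in (2, 5, 7, 8, 10):
    head = push(head, v)
print(to_list(head))        # [10, 8, 7, 5, 2]
head = delete_node(head, 7)
print(to_list(head))        # [10, 8, 5, 2]
```

Returning the head from delete_node (rather than mutating a pointer in place, as the C-style versions do) keeps the head-deletion and single-node cases straightforward in Python.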
Putting arrowheads on vectors in Matplotlib's 3D plot
To draw vectors with arrowheads in Matplotlib's 3D plot, we can take the following steps −

Create a 2D array, where x, y, z, u, v and w are the coordinates of the arrow locations and direction components of arrow vectors.

Using figure() method, create a new figure or activate an existing figure.

Add an '~.axes.Axes' to the figure as part of a subplot arrangement, using add_subplot() method.

Plot a 3D field of arrows, using quiver() method.

Using ylim, xlim, zlim, limit the range of the axes.

Set the title of the plot.

To display the figure, use show() method.

import matplotlib.pyplot as plt
import numpy as np
plt.rcParams["figure.figsize"] = [7.00, 3.50]
plt.rcParams["figure.autolayout"] = True
soa = np.array([[0, 0, 1, 1, -2, 0], [0, 0, 2, 1, 1, 0], [0, 0, 3, 2, 1, 0], [0, 0, 4, 0.5, 0.7, 0]])
X, Y, Z, U, V, W = zip(*soa)
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.quiver(X, Y, Z, U, V, W, color='red')
ax.set_xlim([-1, 0.5])
ax.set_ylim([-1, 1.5])
ax.set_zlim([-1, 8])
ax.set_title("Vectors")
plt.show()
Print prime numbers with prime sum of digits in an array
Given an array of elements, the task is to print those numbers that are prime and whose digit sum is also prime, or print -1 if no such number exists in the array.

Input: arr[] = {2, 4, 3, 19, 25, 6, 11, 12, 18, 7}
Output: 2 3 11 7

Here, the output contains those numbers that are prime and whose digit sum is also prime: 2, 3 and 7 are single-digit primes, and 11 is prime with a prime digit sum (1+1=2), whereas 19 is prime but its digit sum (1+9=10) is not, and numbers like 25 and 12 are not prime themselves.

START
Step 1 -> Take array of int with values
Step 2 -> declare start variables as i, m, flag, flag1, sum, r, d, j, tem
Step 3 -> store size of array in m as sizeof(arr)/sizeof(arr[0])
Step 4 -> Loop For i=0 and i<m and i++
   Set flag=flag1=sum=0
   Set d=int(arr[i]/2)
   Loop For j=2 and j<=d and j++
      IF arr[i]%j==0
         Set flag=1
         Break
      End IF
   End
   IF flag=0
      Set tem=arr[i]
      Loop While tem
         Set r=tem%10
         Set sum=sum+r
         Set tem=tem/10
      End
      Set d=int(sum/2)
      Loop For j=2 and j<=d and j++
         IF sum%j=0
            Set flag1=1
            break
         End
      End
      IF flag1=0
         Print arr[i]
      End
   End
End
STOP

#include<iostream>
using namespace std;
int main() {
    int arr[] = {2, 4, 3, 19, 25, 6, 11, 12, 18, 7};
    int i, m, flag, flag1, sum, r, d, j, tem;
    m = sizeof(arr) / sizeof(arr[0]);
    for (i = 0; i < m; i++) {
        flag = flag1 = sum = 0;

        // Check whether arr[i] itself is prime
        d = int(arr[i] / 2);
        for (j = 2; j <= d; j++) {
            if (arr[i] % j == 0) {
                flag = 1;
                break;
            }
        }
        if (flag == 0) {

            // Compute the digit sum of arr[i]
            tem = arr[i];
            while (tem) {
                r = tem % 10;
                sum = sum + r;
                tem = tem / 10;
            }

            // Check whether the digit sum is prime
            d = int(sum / 2);
            for (j = 2; j <= d; j++) {
                if (sum % j == 0) {
                    flag1 = 1;
                    break;
                }
            }
            if (flag1 == 0) {
                cout << arr[i] << " ";
            }
        }
    }
}

If we run the above program, it will generate the following output

2 3 11 7
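The same check reads more directly with small helper functions. This is a re-implementation of the logic above for illustration (the function names are ours, not from the original article); it also returns -1 when no element qualifies, as the problem statement asks:

```python
def is_prime(n):
    """Trial division, mirroring the divisor loop in the C++ version."""
    if n < 2:
        return False
    j = 2
    while j * j <= n:
        if n % j == 0:
            return False
        j += 1
    return True

def digit_sum(n):
    """Sum of the decimal digits of n."""
    return sum(int(d) for d in str(n))

def primes_with_prime_digit_sum(arr):
    """Keep elements that are prime and whose digit sum is prime; -1 if none."""
    result = [x for x in arr if is_prime(x) and is_prime(digit_sum(x))]
    return result if result else -1

print(primes_with_prime_digit_sum([2, 4, 3, 19, 25, 6, 11, 12, 18, 7]))
# [2, 3, 11, 7]
```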
Lodash - get method
Syntax

_.get(object, path, [defaultValue])

Gets the value at path of object. If the resolved value is undefined, the defaultValue is returned in its place.

Arguments

object (Object) − The object to query.

path (Array|string) − The path of the property to get.

[defaultValue] (*) − The value returned for undefined resolved values.

Returns

(*) − Returns the resolved value.

Example

var _ = require('lodash');
var object = { 'a': [{ 'b': { 'c': 3 } }] };
var result = _.get(object, 'a[0].b.c');

console.log(result);

result = _.get(object, ['a', '0', 'b', 'c']);
console.log(result);

Save the above program in tester.js. Run the following command to execute this program.

\>node tester.js

Output

3
3
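For readers following along in Python rather than Node, the same safe nested lookup can be sketched as follows. This is only an analogue for illustration — get_path and its simple path parsing are our own names, not part of Lodash:

```python
import re

def get_path(obj, path, default=None):
    """Follow a dotted/bracketed path like 'a[0].b.c' (or a list of keys)
    through nested dicts and lists; return default if any step is missing."""
    # Accept either a list of keys or a lodash-style string path
    keys = path if isinstance(path, list) else re.findall(r"[^.\[\]]+", path)
    for key in keys:
        try:
            if isinstance(obj, list):
                obj = obj[int(key)]   # numeric index into a list
            else:
                obj = obj[key]        # key lookup in a dict
        except (KeyError, IndexError, TypeError, ValueError):
            return default
    return obj

obj = {'a': [{'b': {'c': 3}}]}
print(get_path(obj, 'a[0].b.c'))            # 3
print(get_path(obj, ['a', '0', 'b', 'c']))  # 3
print(get_path(obj, 'a[0].b.x', 'n/a'))     # n/a
```

As with _.get, a missing step anywhere along the path yields the default instead of raising.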
[ { "code": null, "e": 1864, "s": 1827, "text": "_.get(object, path, [defaultValue])\n" }, { "code": null, "e": 1977, "s": 1864, "text": "Gets the value at path of object. If the resolved value is undefined, the defaultValue is returned in its place." }, { "code": null, "e": 2016, "s": 1977, "text": "object (Object) − The object to query." }, { "code": null, "e": 2055, "s": 2016, "text": "object (Object) − The object to query." }, { "code": null, "e": 2110, "s": 2055, "text": "path (Array|string) − The path of the property to get." }, { "code": null, "e": 2165, "s": 2110, "text": "path (Array|string) − The path of the property to get." }, { "code": null, "e": 2236, "s": 2165, "text": "[defaultValue] (*) − The value returned for undefined resolved values." }, { "code": null, "e": 2307, "s": 2236, "text": "[defaultValue] (*) − The value returned for undefined resolved values." }, { "code": null, "e": 2341, "s": 2307, "text": "(*) − Returns the resolved value." }, { "code": null, "e": 2375, "s": 2341, "text": "(*) − Returns the resolved value." }, { "code": null, "e": 2579, "s": 2375, "text": "var _ = require('lodash');\nvar object = { 'a': [{ 'b': { 'c': 3 } }] };\nvar result = _.get(object, 'a[0].b.c');\n\nconsole.log(result);\n\nresult = _.get(object, ['a', '0', 'b', 'c']);\nconsole.log(result);" }, { "code": null, "e": 2667, "s": 2579, "text": "Save the above program in tester.js. Run the following command to execute this program." }, { "code": null, "e": 2685, "s": 2667, "text": "\\>node tester.js\n" }, { "code": null, "e": 2690, "s": 2685, "text": "3\n3\n" }, { "code": null, "e": 2697, "s": 2690, "text": " Print" }, { "code": null, "e": 2708, "s": 2697, "text": " Add Notes" } ]
AWT Menu Class
The Menu class represents pull-down menu component which is deployed from a menu bar. Following is the declaration for java.awt.Menu class: public class Menu extends MenuItem implements MenuContainer, Accessible Menu() Constructs a new menu with an empty label. Menu(String label) Constructs a new menu with the specified label. Menu(String label, boolean tearOff) Constructs a new menu with the specified label, indicating whether the menu can be torn off. MenuItem add(MenuItem mi) Adds the specified menu item to this menu. void add(String label) Adds an item with the specified label to this menu. void addNotify() Creates the menu's peer. void addSeparator() Adds a separator line, or a hypen, to the menu at the current position. int countItems() Deprecated. As of JDK version 1.1, replaced by getItemCount(). AccessibleContext getAccessibleContext() Gets the AccessibleContext associated with this Menu. MenuItem getItem(int index) Gets the item located at the specified index of this menu. int getItemCount() Get the number of items in this menu. void insert(MenuItem menuitem, int index) Inserts a menu item into this menu at the specified position. void insert(String label, int index) Inserts a menu item with the specified label into this menu at the specified position. void insertSeparator(int index) Inserts a separator at the specified position. boolean isTearOff() Indicates whether this menu is a tear-off menu. String paramString() Returns a string representing the state of this Menu. void remove(int index) Removes the menu item at the specified index from this menu. void remove(MenuComponent item) Removes the specified menu item from this menu. void removeAll() Removes all items from this menu. void removeNotify() Removes the menu's peer. 
This class inherits methods from the following classes: java.awt.MenuItem java.awt.MenuItem java.awt.MenuComponent java.awt.MenuComponent java.lang.Object java.lang.Object Create the following java program using any editor of your choice in say D:/ > AWT > com > tutorialspoint > gui > package com.tutorialspoint.gui; import java.awt.*; import java.awt.event.*; public class AWTMenuDemo { private Frame mainFrame; private Label headerLabel; private Label statusLabel; private Panel controlPanel; public AWTMenuDemo(){ prepareGUI(); } public static void main(String[] args){ AWTMenuDemo awtMenuDemo = new AWTMenuDemo(); awtMenuDemo.showMenuDemo(); } private void prepareGUI(){ mainFrame = new Frame("Java AWT Examples"); mainFrame.setSize(400,400); mainFrame.setLayout(new GridLayout(3, 1)); mainFrame.addWindowListener(new WindowAdapter() { public void windowClosing(WindowEvent windowEvent){ System.exit(0); } }); headerLabel = new Label(); headerLabel.setAlignment(Label.CENTER); statusLabel = new Label(); statusLabel.setAlignment(Label.CENTER); statusLabel.setSize(350,100); controlPanel = new Panel(); controlPanel.setLayout(new FlowLayout()); mainFrame.add(headerLabel); mainFrame.add(controlPanel); mainFrame.add(statusLabel); mainFrame.setVisible(true); } private void showMenuDemo(){ //create a menu bar final MenuBar menuBar = new MenuBar(); //create menus Menu fileMenu = new Menu("File"); Menu editMenu = new Menu("Edit"); final Menu aboutMenu = new Menu("About"); //create menu items MenuItem newMenuItem = new MenuItem("New",new MenuShortcut(KeyEvent.VK_N)); newMenuItem.setActionCommand("New"); MenuItem openMenuItem = new MenuItem("Open"); openMenuItem.setActionCommand("Open"); MenuItem saveMenuItem = new MenuItem("Save"); saveMenuItem.setActionCommand("Save"); MenuItem exitMenuItem = new MenuItem("Exit"); exitMenuItem.setActionCommand("Exit"); MenuItem cutMenuItem = new MenuItem("Cut"); cutMenuItem.setActionCommand("Cut"); MenuItem copyMenuItem = new MenuItem("Copy"); 
copyMenuItem.setActionCommand("Copy"); MenuItem pasteMenuItem = new MenuItem("Paste"); pasteMenuItem.setActionCommand("Paste"); MenuItemListener menuItemListener = new MenuItemListener(); newMenuItem.addActionListener(menuItemListener); openMenuItem.addActionListener(menuItemListener); saveMenuItem.addActionListener(menuItemListener); exitMenuItem.addActionListener(menuItemListener); cutMenuItem.addActionListener(menuItemListener); copyMenuItem.addActionListener(menuItemListener); pasteMenuItem.addActionListener(menuItemListener); final CheckboxMenuItem showWindowMenu = new CheckboxMenuItem("Show About", true); showWindowMenu.addItemListener(new ItemListener() { public void itemStateChanged(ItemEvent e) { if(showWindowMenu.getState()){ menuBar.add(aboutMenu); }else{ menuBar.remove(aboutMenu); } } }); //add menu items to menus fileMenu.add(newMenuItem); fileMenu.add(openMenuItem); fileMenu.add(saveMenuItem); fileMenu.addSeparator(); fileMenu.add(showWindowMenu); fileMenu.addSeparator(); fileMenu.add(exitMenuItem); editMenu.add(cutMenuItem); editMenu.add(copyMenuItem); editMenu.add(pasteMenuItem); //add menu to menubar menuBar.add(fileMenu); menuBar.add(editMenu); menuBar.add(aboutMenu); //add menubar to the frame mainFrame.setMenuBar(menuBar); mainFrame.setVisible(true); } class MenuItemListener implements ActionListener { public void actionPerformed(ActionEvent e) { statusLabel.setText(e.getActionCommand() + " MenuItem clicked."); } } } Compile the program using command prompt. Go to D:/ > AWT and type the following command. D:\AWT>javac com\tutorialspoint\gui\AWTMenuDemo.java If no error comes that means compilation is successful. Run the program using following command. D:\AWT>java com.tutorialspoint.gui.AWTMenuDemo Verify the following output. (Click on File Menu.) 13 Lectures 2 hours EduOLC Print Add Notes Bookmark this page
[ { "code": null, "e": 1833, "s": 1747, "text": "The Menu class represents pull-down menu component which is deployed from a menu bar." }, { "code": null, "e": 1887, "s": 1833, "text": "Following is the declaration for java.awt.Menu class:" }, { "code": null, "e": 1968, "s": 1887, "text": "public class Menu\n extends MenuItem\n implements MenuContainer, Accessible" }, { "code": null, "e": 1976, "s": 1968, "text": "Menu() " }, { "code": null, "e": 2019, "s": 1976, "text": "Constructs a new menu with an empty label." }, { "code": null, "e": 2039, "s": 2019, "text": "Menu(String label) " }, { "code": null, "e": 2087, "s": 2039, "text": "Constructs a new menu with the specified label." }, { "code": null, "e": 2124, "s": 2087, "text": "Menu(String label, boolean tearOff) " }, { "code": null, "e": 2217, "s": 2124, "text": "Constructs a new menu with the specified label, indicating whether the menu can be torn off." }, { "code": null, "e": 2244, "s": 2217, "text": "MenuItem add(MenuItem mi) " }, { "code": null, "e": 2287, "s": 2244, "text": "Adds the specified menu item to this menu." }, { "code": null, "e": 2311, "s": 2287, "text": "void add(String label) " }, { "code": null, "e": 2363, "s": 2311, "text": "Adds an item with the specified label to this menu." }, { "code": null, "e": 2381, "s": 2363, "text": "void addNotify() " }, { "code": null, "e": 2406, "s": 2381, "text": "Creates the menu's peer." }, { "code": null, "e": 2427, "s": 2406, "text": "void addSeparator() " }, { "code": null, "e": 2499, "s": 2427, "text": "Adds a separator line, or a hypen, to the menu at the current position." }, { "code": null, "e": 2517, "s": 2499, "text": "int\tcountItems() " }, { "code": null, "e": 2580, "s": 2517, "text": "Deprecated. As of JDK version 1.1, replaced by getItemCount()." }, { "code": null, "e": 2622, "s": 2580, "text": "AccessibleContext getAccessibleContext() " }, { "code": null, "e": 2676, "s": 2622, "text": "Gets the AccessibleContext associated with this Menu." 
}, { "code": null, "e": 2705, "s": 2676, "text": "MenuItem getItem(int index) " }, { "code": null, "e": 2764, "s": 2705, "text": "Gets the item located at the specified index of this menu." }, { "code": null, "e": 2784, "s": 2764, "text": "int getItemCount() " }, { "code": null, "e": 2822, "s": 2784, "text": "Get the number of items in this menu." }, { "code": null, "e": 2865, "s": 2822, "text": "void insert(MenuItem menuitem, int index) " }, { "code": null, "e": 2927, "s": 2865, "text": "Inserts a menu item into this menu at the specified position." }, { "code": null, "e": 2965, "s": 2927, "text": "void insert(String label, int index) " }, { "code": null, "e": 3052, "s": 2965, "text": "Inserts a menu item with the specified label into this menu at the specified position." }, { "code": null, "e": 3085, "s": 3052, "text": "void insertSeparator(int index) " }, { "code": null, "e": 3132, "s": 3085, "text": "Inserts a separator at the specified position." }, { "code": null, "e": 3153, "s": 3132, "text": "boolean\tisTearOff() " }, { "code": null, "e": 3201, "s": 3153, "text": "Indicates whether this menu is a tear-off menu." }, { "code": null, "e": 3223, "s": 3201, "text": "String paramString() " }, { "code": null, "e": 3277, "s": 3223, "text": "Returns a string representing the state of this Menu." }, { "code": null, "e": 3301, "s": 3277, "text": "void remove(int index) " }, { "code": null, "e": 3362, "s": 3301, "text": "Removes the menu item at the specified index from this menu." }, { "code": null, "e": 3395, "s": 3362, "text": "void remove(MenuComponent item) " }, { "code": null, "e": 3443, "s": 3395, "text": "Removes the specified menu item from this menu." }, { "code": null, "e": 3461, "s": 3443, "text": "void removeAll() " }, { "code": null, "e": 3495, "s": 3461, "text": "Removes all items from this menu." }, { "code": null, "e": 3516, "s": 3495, "text": "void removeNotify() " }, { "code": null, "e": 3541, "s": 3516, "text": "Removes the menu's peer." 
}, { "code": null, "e": 3597, "s": 3541, "text": "This class inherits methods from the following classes:" }, { "code": null, "e": 3615, "s": 3597, "text": "java.awt.MenuItem" }, { "code": null, "e": 3633, "s": 3615, "text": "java.awt.MenuItem" }, { "code": null, "e": 3656, "s": 3633, "text": "java.awt.MenuComponent" }, { "code": null, "e": 3679, "s": 3656, "text": "java.awt.MenuComponent" }, { "code": null, "e": 3696, "s": 3679, "text": "java.lang.Object" }, { "code": null, "e": 3713, "s": 3696, "text": "java.lang.Object" }, { "code": null, "e": 3827, "s": 3713, "text": "Create the following java program using any editor of your choice in say D:/ > AWT > com > tutorialspoint > gui >" }, { "code": null, "e": 7692, "s": 3827, "text": "package com.tutorialspoint.gui;\n\nimport java.awt.*;\nimport java.awt.event.*;\n\npublic class AWTMenuDemo {\n private Frame mainFrame;\n private Label headerLabel;\n private Label statusLabel;\n private Panel controlPanel;\n\n public AWTMenuDemo(){\n prepareGUI();\n }\n\n public static void main(String[] args){\n AWTMenuDemo awtMenuDemo = new AWTMenuDemo(); \n awtMenuDemo.showMenuDemo();\n }\n\n private void prepareGUI(){\n mainFrame = new Frame(\"Java AWT Examples\");\n mainFrame.setSize(400,400);\n mainFrame.setLayout(new GridLayout(3, 1));\n mainFrame.addWindowListener(new WindowAdapter() {\n public void windowClosing(WindowEvent windowEvent){\n System.exit(0);\n } \n }); \n headerLabel = new Label();\n headerLabel.setAlignment(Label.CENTER);\n statusLabel = new Label(); \n statusLabel.setAlignment(Label.CENTER);\n statusLabel.setSize(350,100);\n\n controlPanel = new Panel();\n controlPanel.setLayout(new FlowLayout());\n\n mainFrame.add(headerLabel);\n mainFrame.add(controlPanel);\n mainFrame.add(statusLabel);\n mainFrame.setVisible(true); \n }\n\n private void showMenuDemo(){\n //create a menu bar\n final MenuBar menuBar = new MenuBar();\n\n //create menus\n Menu fileMenu = new Menu(\"File\");\n Menu editMenu = new 
Menu(\"Edit\"); \n final Menu aboutMenu = new Menu(\"About\");\n\n //create menu items\n MenuItem newMenuItem = \n new MenuItem(\"New\",new MenuShortcut(KeyEvent.VK_N));\n newMenuItem.setActionCommand(\"New\");\n\n MenuItem openMenuItem = new MenuItem(\"Open\");\n openMenuItem.setActionCommand(\"Open\");\n\n MenuItem saveMenuItem = new MenuItem(\"Save\");\n saveMenuItem.setActionCommand(\"Save\");\n\n MenuItem exitMenuItem = new MenuItem(\"Exit\");\n exitMenuItem.setActionCommand(\"Exit\");\n\n MenuItem cutMenuItem = new MenuItem(\"Cut\");\n cutMenuItem.setActionCommand(\"Cut\");\n\n MenuItem copyMenuItem = new MenuItem(\"Copy\");\n copyMenuItem.setActionCommand(\"Copy\");\n\n MenuItem pasteMenuItem = new MenuItem(\"Paste\");\n pasteMenuItem.setActionCommand(\"Paste\");\n \n MenuItemListener menuItemListener = new MenuItemListener();\n\n newMenuItem.addActionListener(menuItemListener);\n openMenuItem.addActionListener(menuItemListener);\n saveMenuItem.addActionListener(menuItemListener);\n exitMenuItem.addActionListener(menuItemListener);\n cutMenuItem.addActionListener(menuItemListener);\n copyMenuItem.addActionListener(menuItemListener);\n pasteMenuItem.addActionListener(menuItemListener);\n\n final CheckboxMenuItem showWindowMenu = \n new CheckboxMenuItem(\"Show About\", true);\n showWindowMenu.addItemListener(new ItemListener() {\n public void itemStateChanged(ItemEvent e) {\n if(showWindowMenu.getState()){\n menuBar.add(aboutMenu);\n }else{\n menuBar.remove(aboutMenu);\n }\n }\n });\n\n //add menu items to menus\n fileMenu.add(newMenuItem);\n fileMenu.add(openMenuItem);\n fileMenu.add(saveMenuItem);\n fileMenu.addSeparator();\n fileMenu.add(showWindowMenu);\n fileMenu.addSeparator();\n fileMenu.add(exitMenuItem);\n\n editMenu.add(cutMenuItem);\n editMenu.add(copyMenuItem);\n editMenu.add(pasteMenuItem);\n\n //add menu to menubar\n menuBar.add(fileMenu);\n menuBar.add(editMenu);\n menuBar.add(aboutMenu);\n\n //add menubar to the frame\n 
mainFrame.setMenuBar(menuBar);\n mainFrame.setVisible(true); \n }\n\n class MenuItemListener implements ActionListener {\n public void actionPerformed(ActionEvent e) { \n statusLabel.setText(e.getActionCommand() \n + \" MenuItem clicked.\");\n } \n }\n}" }, { "code": null, "e": 7783, "s": 7692, "text": "Compile the program using command prompt. Go to D:/ > AWT and type the following command." }, { "code": null, "e": 7836, "s": 7783, "text": "D:\\AWT>javac com\\tutorialspoint\\gui\\AWTMenuDemo.java" }, { "code": null, "e": 7933, "s": 7836, "text": "If no error comes that means compilation is successful. Run the program using following command." }, { "code": null, "e": 7980, "s": 7933, "text": "D:\\AWT>java com.tutorialspoint.gui.AWTMenuDemo" }, { "code": null, "e": 8031, "s": 7980, "text": "Verify the following output. (Click on File Menu.)" }, { "code": null, "e": 8064, "s": 8031, "text": "\n 13 Lectures \n 2 hours \n" }, { "code": null, "e": 8072, "s": 8064, "text": " EduOLC" }, { "code": null, "e": 8079, "s": 8072, "text": " Print" }, { "code": null, "e": 8090, "s": 8079, "text": " Add Notes" } ]
Everything you need to know about Regular Expressions | by Slawomir Chodnicki | Towards Data Science
After reading this article you will have a solid understanding of what regular expressions are, what they can do, and what they can’t do. You’ll be able to judge when to use them and — more importantly — when not to.

Let’s start at the beginning.

On an abstract level a regular expression, regex for short, is a shorthand representation for a set. A set of strings. Say we have a list of all valid zip codes. Instead of keeping that long and unwieldy list around, it’s often more practical to have a short and precise pattern that completely describes that set. Whenever you want to check whether a string is a valid zip code, you can match it against the pattern. You’ll get a true or false result indicating whether the string belongs to the set of zip codes the regex pattern represents.

Let’s expand on the set of zip codes. A list of zip codes is finite, consists of rather short strings, and is not particularly challenging computationally. What about the set of strings that end in .csv? Can be quite useful when looking for data files. This set is infinite. You can’t make a list up front. And the only way to test for membership is to go to the end of the string and compare the last four characters. Regular expressions are a way of encoding such patterns in a standardized way. The following is a regular expression pattern that represents our set of strings ending in .csv

^.*\.csv$

Let’s leave the mechanics of this particular pattern aside, and look at practicalities: a regex engine can test a pattern against an input string to see if it matches. The above pattern matches foo.csv, but does not match bar.txt or my_csv_file.

Before you use regular expressions in your code, you can test them using an online regex evaluator, and experiment with a friendly UI. I like regex101.com: you can pick the flavor of the regex engine, and patterns are nicely decomposed for you, so you get a good understanding of what your pattern actually does. Regex patterns can be cryptic.
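As a quick sanity check, here is what testing that pattern can look like in one concrete engine, sketched with Python’s re module (the sample file names are just illustrations):

```python
import re

# ^ and $ anchor the pattern to the whole string.
csv_pattern = re.compile(r"^.*\.csv$")

def is_csv_name(name):
    # Returns True when the entire string belongs to the set the pattern describes.
    return csv_pattern.match(name) is not None

print(is_csv_name("foo.csv"))      # True
print(is_csv_name("bar.txt"))      # False
print(is_csv_name("my_csv_file"))  # False
```

The same pattern behaves identically on regex101.com; only the surrounding API differs between engines.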
I’d recommend you open regex101.com in another window or tab and experiment with the examples presented in this article interactively. You’ll get a much better feel for regex patterns this way, I promise.

Regular expressions are useful in any scenario that benefits from full or partial pattern matches on strings. These are some common use cases:

verify the structure of strings
extract substrings from structured strings
search / replace / rearrange parts of the string
split a string into tokens

All of these come up regularly when doing data preparation work.

A regular expression pattern is constructed from distinct building blocks. It may contain literals, character classes, boundary matchers, quantifiers, groups and the OR operator. Let’s dive in and look at some examples.

The most basic building block in a regular expression is a character, a.k.a. a literal. Most characters in a regex pattern do not have a special meaning, they simply match themselves. Consider the following pattern:

I am a harmless regex pattern

None of the characters in this pattern has special meaning. Thus each character of the pattern matches itself. Therefore there is only one string that matches this pattern, and it is identical to the pattern string itself.

What are the characters that do have special meaning? The following list shows characters that have special meaning in a regular expression. They must be escaped by a backslash if they are meant to represent themselves.
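Literal matching is easy to verify in code; a minimal sketch using Python’s re module (any engine behaves the same way here):

```python
import re

# A pattern made of plain literals matches exactly one string: itself.
pattern = re.compile(r"I am a harmless regex pattern")

print(bool(pattern.fullmatch("I am a harmless regex pattern")))  # True
print(bool(pattern.fullmatch("I am a harmful regex pattern")))   # False
```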
Consider the following pattern:

\+21\.5

The pattern consists of literals only — the + has special meaning and has been escaped, so has the . — and thus the pattern matches only one string: +21.5

Sometimes it’s necessary to refer to some non-printable character like the tab character ⇥ or a newline ↩ It’s best to use the proper escape sequences for them:

If you need to match a line break, they usually come in one of two flavors:

\n often referred to as the unix-style newline
\r\n often referred to as the windows-style newline

To catch both possibilities you can match on \r?\n which means: optional \r followed by \n

Sometimes you have to match characters that are best expressed by using their Unicode index. Sometimes a character simply cannot be typed — like control characters such as ASCII NUL, ESC, VT etc. Sometimes your programming language simply does not support putting certain characters into patterns. Characters outside the BMP, such as 𝄞 or emojis are often not supported verbatim. In many regex engines — such as Java, JavaScript, Python, and Ruby — you can use the \uHexIndex escape syntax to match any character by its Unicode index. Say we want to match the symbol for natural numbers: ℕ (U+2115). The pattern to match this character is:

\u2115

Other engines often provide an equivalent escape syntax. In Go, you would use \x{2115} to match ℕ.

Unicode support and escape syntax varies across engines. If you plan on matching technical symbols, musical symbols, or emojis — especially outside the BMP — check the documentation of the regex engine you use to be sure of adequate support for your use-case.

Sometimes a pattern requires consecutive characters to be escaped as literals. Say it’s supposed to match the following string: +???+

The pattern would look like this:

\+\?\?\?\+

The need to escape every character as literal makes it harder to read and to understand. Depending on your regex engine, there might be a way to start and end a literal section in your pattern.
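The escapes above can be exercised in a few lines; a sketch with Python’s re module (the sample strings are illustrative):

```python
import re

# Escaped specials act as plain literals: this matches exactly "+21.5".
print(bool(re.fullmatch(r"\+21\.5", "+21.5")))  # True

# \r?\n tolerates both unix- and windows-style line breaks.
print(re.split(r"\r?\n", "unix\nwindows\r\nend"))  # ['unix', 'windows', 'end']

# Matching a character by its Unicode code point (U+2115).
print(bool(re.search("\u2115", "the set \u2115 of naturals")))  # True
```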
Check your docs. In Java and Perl sequences of characters that should be interpreted literally can be enclosed by \Q and \E. The following pattern is equivalent to the above:

\Q+???+\E

Escaping parts of a pattern can also be useful if it is constructed from parts, some of which are to be interpreted literally, like user-supplied search words. If your regex engine does not have this feature, the ecosystem often provides a function to escape all characters with special meaning from a pattern string, such as lodash escapeRegExp.

The pipe character | is the selection operator. It matches alternatives. Suppose a pattern should match the strings 1 and 2. The following pattern does the trick:

1|2

The patterns left and right of the operator are the allowed alternatives. The following pattern matches William Turner and Bill Turner:

William Turner|Bill Turner

The second part of the alternatives is consistently Turner. Would be convenient to put the alternatives William and Bill up front, and mention Turner only once. The following pattern does that:

(William|Bill) Turner

It looks more readable. It also introduces a new concept: Groups. You can group sub-patterns in sections enclosed in round brackets. They group the contained expressions into a single unit. Grouping parts of a pattern has several uses:

simplify regex notation, making intent clearer
apply quantifiers to sub-expressions
extract sub-strings matching a group
replace sub-strings matching a group

Let’s look at a regex with a group: (William|Bill) Turner

Groups are sometimes referred to as “capturing groups” because in case of a match, each group’s matched sub-string is captured, and is available for extraction. How captured groups are made available depends on the API you use. In JavaScript, calling "my string".match(/pattern/) returns an array of matches. The first item is the entire matched string and subsequent items are the sub-strings matching pattern groups in order of appearance in the pattern.
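Python has no \Q…\E section markers, but its re.escape function plays the same role, and captured groups are exposed on the match object. A small sketch:

```python
import re

# re.escape stands in for \Q...\E: it escapes every special character.
print(re.escape("+???+"))  # \+\?\?\?\+

# Captured groups are available on the match object, in order of appearance.
m = re.fullmatch(r"(William|Bill) Turner", "Bill Turner")
print(m.group(0))  # Bill Turner  (the entire match)
print(m.group(1))  # Bill         (what the group captured)
```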
Consider a string identifying a chess board field. Fields on a chess board can be identified as A1-A8 for the first column, B1-B8 for the second column and so on until H1-H8 for the last column. Suppose a string containing this notation should be validated and the components (the letter and the digit) extracted using capture groups. The following regular expression would do that.

(A|B|C|D|E|F|G|H)(1|2|3|4|5|6|7|8)

While the above regular expression is valid and does the job, it is somewhat clunky. This one works just as well, and it is a bit more concise:

([A-H])([1-8])

This sure looks more concise. But it introduces a new concept: Character Classes. Character classes are used to define a set of allowed characters. The set of allowed characters is put in square brackets, and each allowed character is listed. The character class [abcdef] is equivalent to (a|b|c|d|e|f). Since the class contains alternatives, it matches exactly one character. The pattern [ab][cd] matches exactly 4 strings: ac, ad, bc, and bd. It does not match ab: the first character matches, but the second character must be either c or d.

Suppose a pattern should match a two digit code. A pattern to match this could look like this:

[0123456789][0123456789]

This pattern matches all 100 two digit strings in the range from 00 to 99. It is often tedious and error-prone to list all possible characters in a character class. Consecutive characters can be included in a character class as ranges using the dash operator:

[0-9][0-9]

Characters are ordered by a numeric index — in 2019 that is almost always the Unicode index. If you’re working with numbers, Latin characters and basic punctuation, you can instead look at the much smaller historical subset of Unicode: ASCII. The digits zero through nine are encoded sequentially through code-points: U+0030 for 0 to code point U+0039 for 9, so a character set of [0-9] is a valid range.
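The chess-field pattern and its capture groups can be tried out directly; a sketch in Python’s re module:

```python
import re

field = re.compile(r"([A-H])([1-8])")

m = field.fullmatch("F7")
print(m.groups())             # ('F', '7')
print(field.fullmatch("J9"))  # None, since J and 9 fall outside the ranges
```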
Lower case and upper case letters of the Latin alphabet are encoded consecutively as well, so character classes for alphabetic characters are often seen too. The following character set matches any lower case Latin character:

[a-z]

You can define multiple ranges within the same character class. The following character class matches all lower case and upper case Latin characters:

[A-Za-z]

You might get the impression that the above pattern could be abbreviated to:

[A-z]

That is a valid character class, but it matches not only A-Z and a-z, it also matches all characters defined between Z and a, such as [, \, and ^.

If you’re tearing your hair out cursing the stupidity of the people who defined ASCII and introduced this mind-boggling discontinuity, hold your horses for a bit. ASCII was defined at a time when computing capacity was much more precious than today. Look at

A hex: 0x41 bin: 0100 0001
a hex: 0x61 bin: 0110 0001

How do you convert between upper and lower case? You flip one bit. That is true for the entire alphabet. ASCII is optimized to simplify case conversion. The people defining ASCII were very thoughtful. Some desirable qualities had to be sacrificed for others. You’re welcome.

You might wonder how to put the - character into a character class. After all, it is used to define ranges. Most engines interpret the - character literally if placed as the first or last character in the class: [-+0-9] or [+0-9-]. A few engines require escaping with a backslash: [\-+0-9]

Sometimes it’s useful to define a character class that matches most characters, except for a few defined exceptions. If a character class definition begins with a ^, the set of listed characters is inverted. As an example, the following class allows any character as long as it’s neither a digit nor an underscore.
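Both points are easy to demonstrate in a few lines of Python: the single-bit case flip in ASCII, and a dash placed at the edge of a class acting as a literal:

```python
import re

# ASCII case conversion is a single bit flip (0x20):
print(hex(ord("A")), hex(ord("a")))  # 0x41 0x61
print(chr(ord("A") ^ 0x20))          # a

# A dash placed first (or last) in a class is a literal dash:
print(re.findall(r"[-+0-9]", "3 - 4 + 5"))  # ['3', '-', '4', '+', '5']
```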
[^0-9_]

Please note that the ^ character is interpreted as a literal if it is not the first character of the class, as in [f^o], and that it is a boundary matcher if used outside character classes.

Some character classes are used so frequently that there are shorthand notations defined for them. Consider the character class [0-9]. It matches any digit character and is used so often that there is a mnemonic notation for it: \d. The following list shows character classes with most common shorthand notations, likely to be supported by any regex engine you use.

Most engines come with a comprehensive list of predefined character classes matching certain blocks or categories of the Unicode standard, punctuation, specific alphabets, etc. These additional character classes are often specific to the engine at hand, and not very portable.

The most ubiquitous predefined character class is the dot, and it deserves a small section on its own. It matches any character except for line terminators like \r and \n. The following pattern matches any three character string ending with a lower case x:

..x

In practice the dot is often used to create “anything might go in here” sections in a pattern. It is frequently combined with a quantifier and .* is used to match “anything” or “don’t care” sections.

Please note that the . character loses its special meaning when used inside a character class. The character class [.,] simply matches two characters, the dot and the comma.

Depending on the regex engine you use you may be able to set the dotAll execution flag in which case . will match anything including line terminators.
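A quick sketch of the negated class, the \d shorthand, and the dotAll flag in Python’s re (where dotAll is spelled re.DOTALL):

```python
import re

print(re.findall(r"[^0-9_]", "a1_b2"))  # ['a', 'b'], neither digits nor underscores
print(re.findall(r"\d", "a1_b2"))       # ['1', '2']

# The dot skips line terminators unless the dotAll flag is set:
print(re.findall(r"a.c", "a\nc abc"))             # ['abc']
print(re.findall(r"a.c", "a\nc abc", re.DOTALL))  # ['a\nc', 'abc']
```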
Consider a search operation for digits on a multi-line text. The pattern [0-9] finds every digit in the text, no matter where it is located. The pattern ^[0-9] finds every digit that is the first character on a line. The same idea applies to line endings with $.

The \A and \Z or \z anchors are useful for matching multi-line strings. They anchor to the beginning and end of the entire input. The upper case \Z variant is tolerant of trailing newlines and matches just before that, effectively discarding any trailing newline in the match. The \A and \Z anchors are supported by most mainstream regex engines, with the notable exception of JavaScript.

Suppose the requirement is to check whether a text is a two-line record specifying a chess position. This is what the input string looks like:

Column: F
Row: 7

The following pattern matches the above structure:

\AColumn: [A-H]\r?\nRow: [1-8]\Z

The \b anchor matches the edge of any alphanumeric sequence. This is useful if you want to do “whole word” matches. The following pattern looks for a standalone upper case I.

\bI\b

The pattern does not match the first letter of Illinois because there is no word boundary to the right. The next letter is a word letter — defined by the character class \w as [a-zA-Z0-9_] — and not a non-word letter, which would constitute a boundary. Let’s replace Illinois with I!linois. The exclamation point is not a word character, and thus constitutes a boundary.

The somewhat esoteric non-word boundary \B is the negation of \b. It matches any position that is not matched by \b. It matches every position between characters within white space and alphanumeric sequences.

Some regex engines support the \G boundary matcher. It is useful when using regular expressions programmatically, and a pattern is applied repeatedly to a string, trying to find all pattern matches in a loop. It anchors to the position of the last match found.

Any literal or character group matches the occurrence of exactly one character.
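The anchors and word boundaries above can be sketched in Python’s re. Note that Python’s \Z matches the absolute end of the input (like \z in other engines), so fullmatch is used here to anchor the whole pattern to the whole input:

```python
import re

record = "Column: F\nRow: 7"
print(bool(re.fullmatch(r"Column: [A-H]\r?\nRow: [1-8]", record)))  # True

# \b needs a non-word character (or the string edge) next to the match:
print(re.search(r"\bI\b", "Illinois"))        # None, an 'l' follows the I
print(bool(re.search(r"\bI\b", "I!linois")))  # True, '!' is a boundary
```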
The pattern [0-9][0-9] matches exactly two digits. Quantifiers help specify the expected number of matches of a pattern. They are notated using curly braces. The following is equivalent to [0-9][0-9]

[0-9]{2}

The basic notation can be extended to provide upper and lower bounds. Say it’s necessary to match between two and six digits. The exact number varies, but it must be between two and six. The following notation does that:

[0-9]{2,6}

The upper bound is optional; if omitted, any number of occurrences equal to or greater than the lower bound is acceptable. The following sample matches two or more consecutive digits.

[0-9]{2,}

There are some predefined shorthands for common quantifiers that are very frequently used in practice.

The ? quantifier is equivalent to {0,1}, which means: optional single occurrence. The preceding pattern may not match, or match once. Let’s find integers, optionally prefixed with a plus or minus sign:

[-+]?\d{1,}

The + quantifier is equivalent to {1,}, which means: at least one occurrence. We can modify our integer matching pattern from above to be more idiomatic by replacing {1,} with + and we get:

[-+]?\d+

The * quantifier is equivalent to {0,}, which means: zero or more occurrences. You’ll see it very often in conjunction with the dot as .*, which means: any character, don’t care how often.

Let’s match a comma-separated list of integers. Whitespace between entries is not allowed, and at least one integer must be present:

\d+(,\d+)*

We’re matching an integer followed by any number of groups containing a comma followed by an integer.

Suppose the requirement is to match the domain part from an http URL in a capture group. The following seems like a good idea: match the protocol, then capture the domain, then an optional path. The idea translates roughly to this:

http://(.*)/?.*
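Before tracing how that URL pattern behaves, the quantifiers above can be checked quickly in Python’s re (the sample inputs are illustrative):

```python
import re

print(bool(re.fullmatch(r"[0-9]{2,6}", "1234")))  # True
print(bool(re.fullmatch(r"[0-9]{2,6}", "1")))     # False, below the lower bound

print(bool(re.fullmatch(r"[-+]?\d+", "-42")))     # True: optional sign, 1+ digits

# Comma-separated integers, no whitespace allowed:
list_pattern = re.compile(r"\d+(,\d+)*")
print(bool(list_pattern.fullmatch("1,22,333")))   # True
print(bool(list_pattern.fullmatch("1, 22")))      # False, whitespace not allowed
```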
If you’re using an engine that uses /regex/ notation like JavaScript, you have to escape the forward slashes: http:\/\/(.*)\/?.*

It matches the protocol, captures what comes after the protocol as domain, and it allows for an optional slash and some arbitrary text after that, which would be the resource path. Strangely enough, given some input strings, the results captured by the group are somewhat surprising: the pattern was designed to capture the domain part only, but it seems to be capturing everything till the end of the URL. This happens because each quantifier encountered in the pattern tries to match as much of the string as possible. The quantifiers are called greedy for this reason. Let’s check the matching behaviour of:

http://(.*)/?.*

The greedy * in the capturing group is the first encountered quantifier. The . character class it applies to matches any character, so the quantifier extends to the end of the string. Thus the capture group captures everything. But wait, you say, there’s the /?.* part at the end. Well, yes, and it matches what’s left of the string — nothing, a.k.a the empty string — perfectly. The slash is optional, and is followed by zero or more characters. The empty string fits. The entire pattern matches just fine.

Greedy is the default, but not the only flavor of quantifiers. Each quantifier has a reluctant version that matches the least possible amount of characters. The greedy versions of the quantifiers are converted to reluctant versions by appending a ? to them: ?? instead of ?, *? instead of *, +? instead of +, {n,}? instead of {n,}, and {n,m}? instead of {n,m}. The quantifier {n} is equivalent in both greedy and reluctant versions. For the others, the number of matched characters may vary.

Let’s revisit the example from above and change the capture group to match as little as possible, in the hopes of getting the domain name captured properly.

http://(.*?)/?.*

Using this pattern, nothing — more precisely the empty string — is captured by the group.
Why is that? The capture group now captures as little as possible: nothing. The (.*?) captures nothing, the /? matches nothing, and the .* matches the entirety of what’s left of the string. So again, this pattern does not work as intended. So far the capture group matches too little or too much. Let’s revert back to the greedy quantifier, but disallow the slash character in the domain name, and also require that the domain name be at least one character long.

http://([^/]+)/?.*

This pattern greedily captures one or more non-slash characters after the protocol as the domain; if an optional slash follows, it may be followed by any number of characters in the path.

Both greedy and reluctant quantifiers imply some runtime overhead. If only a few such quantifiers are present, there are no issues. But if multiple nested groups are each quantified as greedy or reluctant, determining the longest or shortest possible matches is a nontrivial operation that implies running back and forth on the input string, adjusting the length of each quantifier’s match to determine whether the expression as a whole matches. Pathological cases of catastrophic backtracking may occur. If performance or malicious input is a concern, it’s best to prefer reluctant quantifiers and also have a look at a third kind of quantifiers: possessive quantifiers.

Possessive quantifiers, if supported by your engine, act much like greedy quantifiers, with the distinction that they do not support backtracking. They try to match as many characters as possible, and once they do, they never yield any matched characters to accommodate possible matches for any other parts of the pattern. They are notated by appending a + to the base greedy quantifier. They are a fast performing version of “greedy-like” quantifiers, which makes them a good choice for performance sensitive operations. Let’s look at them in the PHP engine. First, let’s look at simple greedy matches.
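Before moving on to possessive quantifiers, the three domain-capturing attempts can be compared side by side. A JavaScript sketch with a sample URL of my own choosing:

```javascript
const url = "http://example.com/path";

const greedy = url.match(/http:\/\/(.*)\/?.*/)[1];     // captures too much
const reluctant = url.match(/http:\/\/(.*?)\/?.*/)[1]; // captures nothing
const fixed = url.match(/http:\/\/([^/]+)\/?.*/)[1];   // captures the domain

console.log(greedy);    // "example.com/path"
console.log(reluctant); // ""
console.log(fixed);     // "example.com"
```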
Let’s match some digits, followed by a nine:

([0-9]+)9

Matched against the input string:

123456789

the greedy quantifier will first match the entire input, then give back the 9, so the rest of the pattern has a chance to match. Now, when we replace the greedy with the possessive quantifier, it will match the entire input, then refuse to give back the 9 to avoid backtracking, and that will cause the entire pattern to not match at all.

When would you want possessive behaviour? When you know that you always want the longest conceivable match. Let’s say you want to extract the filename part of filesystem paths. Let’s assume / as the path separator. Then what we effectively want is the last bit of the string after the last occurrence of a /. A possessive pattern works well here, because we always want to consume all folder names before capturing the file name. There is no need for the part of the pattern consuming folder names to ever give characters back. A corresponding pattern might look like this:

\/?(?:[^\/]+\/)++(.*)

Note: using PHP /regex/ notation here, so the forward slashes are escaped. We want to allow absolute paths, so we allow the input to start with an optional forward slash. We then possessively consume folder names consisting of a series of non-slash characters followed by a slash. I’ve used a non-capturing group for that — so it’s notated as (?:pattern) instead of just (pattern). Anything that is left over after the last slash is what we capture into a group for extraction.

Non-capturing groups match exactly the way normal groups do. However, they do not make their matched content available. If there’s no need to capture the content, they can be used to improve matching performance. Non-capturing groups are written as: (?:pattern)

Suppose we want to verify that a hex-string is valid. It needs to consist of an even number of hexadecimal digits, each between 0-9 or a-f.
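JavaScript has no possessive quantifiers, so a runnable sketch has to fall back to the plain greedy +. On well-formed paths it extracts the same file name, just without the no-backtracking guarantee. The helper name is my own:

```javascript
// Greedy stand-in for the possessive path pattern \/?(?:[^\/]+\/)++(.*)
const fileNameOf = (path) => {
  const m = path.match(/\/?(?:[^/]+\/)+(.*)/);
  return m ? m[1] : null;
};

console.log(fileNameOf("/foo/bar/file.txt")); // "file.txt"
console.log(fileNameOf("docs/readme.md"));    // "readme.md"
```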
The following expression does the job using a group:

([0-9a-f][0-9a-f])+

Since the point of the group in the pattern is to make sure that the digits come in pairs, and the digits actually matched are not of any relevance, the group may just as well be replaced with the faster performing non-capturing group:

(?:[0-9a-f][0-9a-f])+

There is also a fast-performing version of a non-capturing group that does not support backtracking. It is called the “independent non-capturing group” or “atomic group”. It is written as (?>pattern)

An atomic group is a non-capturing group that can be used to optimize pattern matching for speed. Typically it is supported by regex engines that also support possessive quantifiers. Its behavior is also similar to possessive quantifiers: once an atomic group has matched a part of the string, that first match is permanent. The group will never try to re-match in another way to accommodate other parts of the pattern.

a(?>bc|b)c matches abcc but it does not match abc. The atomic group’s first successful match is on bc and it stays that way. A normal group would re-match during backtracking to accommodate the c at the end of the pattern for a successful match. But an atomic group’s first match is permanent, it won’t change.

This is useful if you want to match as fast as possible, and don’t want any backtracking to take place anyway. Say we’re matching the file name part of a path. We can match an atomic group of any characters followed by a slash. Then capture the rest:

(?>.*\/)(.*)

Note: using PHP /regex/ notation here, so the forward slashes are escaped. A normal group would have done the job just as well, but eliminating the possibility of backtracking improves performance. If you’re matching millions of inputs against non-trivial regex patterns, you’ll start noticing the difference. It also improves resilience against malicious input designed to DoS-attack a service by triggering catastrophic backtracking scenarios.
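The hex check can be sketched in JavaScript, which supports non-capturing groups (though not atomic groups); isHex is a hypothetical helper name:

```javascript
// The non-capturing group ensures digits come in pairs; nothing is captured.
const isHex = (s) => /^(?:[0-9a-f][0-9a-f])+$/.test(s);

console.log(isHex("deadbeef")); // true
console.log(isHex("abc"));      // false, odd number of digits
```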
Sometimes it’s useful to refer to something that matched earlier in the string. Suppose a string value is only valid if it starts and ends with the same letter. The words “alpha”, “radar”, “kick”, “level” and “stars” are examples. It is possible to capture part of a string in a group and refer to that group later in the pattern: a back reference.

Back references in a regex pattern are notated using \n syntax, where n is the number of the capture group. The numbering is left to right, starting with 1. If groups are nested, they are numbered in the order their opening parenthesis is encountered. Group 0 always means the entire expression.

The following pattern matches inputs that have at least 3 characters and start and end with the same letter:

([a-zA-Z]).+\1

In words: a lower or upper case letter — that letter is captured into a group — followed by any non-empty string, followed by the letter we captured at the beginning of the match.

Let’s expand a bit. An input string is matched if it contains any alphanumeric sequence — think: word — more than once. Word boundaries are used to ensure that whole words are matched.

\b(\w+)\b.*\b\1\b

Regular expressions are useful in search and replace operations. The typical use case is to look for a sub-string that matches a pattern and replace it with something else. Most APIs using regular expressions allow you to reference capture groups from the search pattern in the replacement string. These back references effectively allow to rearrange parts of the input string.

Consider the following scenario: the input string contains an A-Z character prefix followed by an optional space followed by a 3-6 digit number. Strings like A321, B86562, F 8753, and L 287. The task is to convert it to another string consisting of the number, followed by a dash, followed by the character prefix.
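Both back reference patterns can be exercised in JavaScript. The sample inputs are mine, and I’ve anchored the first pattern with ^ and $ to force a full-string match:

```javascript
// Starts and ends with the same letter, at least 3 characters long.
const sameEnds = /^([a-zA-Z]).+\1$/;
// Contains some whole word more than once.
const repeatsWord = /\b(\w+)\b.*\b\1\b/;

console.log(sameEnds.test("radar"));                  // true
console.log(sameEnds.test("stars"));                  // true
console.log(repeatsWord.test("the cat saw the dog")); // true
console.log(repeatsWord.test("one two three"));       // false
```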
Input    Output
A321     321-A
B86562   86562-B
F 8753   8753-F
L 287    287-L

The first step to transform one string to the other is to capture each part of the string in a capture group. The search pattern looks like this:

([A-Z])\s?([0-9]{3,6})

It captures the prefix into a group, allows for an optional space character, then captures the digits into a second group. Back references in a replacement string are notated using $n syntax, where n is the number of the capture group. The replacement string for this operation should first reference the group containing the numbers, then a literal dash, then the first group containing the letter prefix. This gives the following replacement string:

$2-$1

Thus A321 is matched by the search pattern, putting A into $1 and 321 into $2. The replacement string is arranged to yield the desired result: the number comes first, then a dash, then the letter prefix. Please note that, since the $ character carries special meaning in a replacement string, it must be escaped as $$ if it should be inserted as a character.

This kind of regex-enabled search and replace is often offered by text editors. Suppose you have a list of paths in your editor, and the task at hand is to prefix the file name of each file with an underscore. The path /foo/bar/file.txt should become /foo/bar/_file.txt. With all we learned so far, we can do it by searching for ([^/]+)$ and replacing with _$1, with multi-line mode enabled so $ anchors at each line end.

It is sometimes useful to assert that a string has a certain structure, without actually matching it. How is that useful? Let’s write a pattern that matches all words that are followed by a word beginning with an a. Let’s try:

\b(\w+)\s+a

It anchors to a word boundary, and matches word characters until it sees some space followed by an a. In the above example, we match love, swat, fly, and to, but fail to capture the an before ant. This is because the a starting an has been consumed as part of the match of to. We’ve scanned past that a, and the word an has no chance of matching.
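The rearranging replacement can be reproduced in JavaScript; the helper name is my own:

```javascript
// $2-$1 swaps the captured number and letter prefix.
const rearrange = (s) => s.replace(/([A-Z])\s?([0-9]{3,6})/, "$2-$1");

console.log(rearrange("A321"));   // "321-A"
console.log(rearrange("F 8753")); // "8753-F"
```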
Would be great if there was a way to assert properties of the first character of the next word without actually consuming it. Constructs asserting existence, but not consuming the input, are called “lookahead” and “lookbehind”.

Lookaheads are used to assert that a pattern matches ahead. They are written as (?=pattern)

Let’s use it to fix our pattern:

\b(\w+)(?=\s+a)

We’ve put the space and initial a of the next word into a lookahead, so when scanning a string for matches, they are checked but not consumed.

A negative lookahead asserts that its pattern does not match ahead. It is notated as (?!pattern)

Let’s find all words not followed by a word that starts with an a.

\b(\w+)\b(?!\s+a)

We match whole words which are not followed by some space and an a.

The lookbehind serves the same purpose as the lookahead, but it applies to the left of the current position, not to the right. Many regex engines limit the kind of pattern you can use in a lookbehind, because applying a pattern backwards is something that they are not optimized for. Check your docs! A lookbehind is written as (?<=pattern)

It asserts the existence of something before the current position. Let’s find all words that come after a word ending with an r or t.

(?<=[rt]\s)(\w+)

We assert that there is an r or t followed by a space, then we capture the sequence of word characters that follows.

There’s also a negative lookbehind asserting the non-existence of a pattern to the left. It is written as (?<!pattern)

Let’s invert the words found: we want to match all words that come after words not ending with r or t.

(?<![rt]\s)\b(\w+)

We match all words by \b(\w+), and by prepending (?<![rt]\s) we ensure that any words we match are not preceded by a word ending in r or t.

If you’re working with an API that allows you to split a string by pattern, it is often useful to keep lookaheads and lookbehinds in mind. A regex split typically uses the pattern as a delimiter, and removes the delimiter from the parts.
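A JavaScript sketch of lookahead and lookbehind; the sample sentence is my own, and lookbehind requires a reasonably recent engine:

```javascript
const text = "I love to swat a fly";

// Words followed by a word starting with "a".
const beforeA = text.match(/\b(\w+)(?=\s+a)/g);
// Words that come after a word ending in "r" or "t".
const afterRT = text.match(/(?<=[rt]\s)(\w+)/g);

console.log(beforeA); // ["swat"]
console.log(afterRT); // ["a"]
```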
Putting lookahead or lookbehind sections in a delimiter makes it match without removing the parts that were merely looked at. Suppose you have a string delimited by :, in which some of the parts are labels consisting of alphabetic characters, and some are time stamps in the format HH:mm. Let’s look at the input string:

time_a:9:32:time_b:10:11

If we just split on :, we get the parts: [time_a, 9, 32, time_b, 10, 11]

Let’s say we want to improve by splitting only if the : has a letter on either side. The delimiter is now [a-z]:|:[a-z] and we get the parts: [time_, 9:32, ime_, 10:11]

We’ve lost the adjacent characters, since they were part of the delimiter. If we refine the delimiter to use lookahead and lookbehind for the adjacent characters, their existence will be verified, but they won’t match as part of the delimiter:

(?<=[a-z]):|:(?=[a-z])

Finally we get the parts we want: [time_a, 9:32, time_b, 10:11]

Most regex engines allow setting flags or modifiers to tune aspects of the pattern matching process. Be sure to familiarise yourself with the way your engine of choice handles such modifiers. They often make the difference between an impractically complex pattern and a trivial one. You can expect to find case (in-)sensitivity modifiers, anchoring options, full match vs. partial match mode, and a dotAll mode which lets the . character class match anything including line terminators. The details vary across engines such as JavaScript, Python, Java, Ruby, and .NET.

Let’s look at JavaScript, for example. If you want case insensitive mode and only the first match found, you can use the i modifier, and make sure to omit the g modifier.

Arriving at the end of this article, you may feel that all possible string parsing problems can be dealt with, once you get regular expressions under your belt. Well, no. This article introduces regular expressions as a shorthand notation for sets of strings.
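The split behaviour can be verified in JavaScript, which supports lookbehind in recent engines:

```javascript
const input = "time_a:9:32:time_b:10:11";

const naive = input.split(/:/);
const lookaround = input.split(/(?<=[a-z]):|:(?=[a-z])/);

console.log(naive);      // ["time_a", "9", "32", "time_b", "10", "11"]
console.log(lookaround); // ["time_a", "9:32", "time_b", "10:11"]
```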
If you happen to have the exact regular expression for zip codes, you have a shorthand notation for the set of all strings representing valid zip codes. You can easily test an input string to check if it is an element of that set.

There is a problem however. There are many meaningful sets of strings for which there is no regular expression! The set of valid JavaScript programs has no regex representation, for example. There will never be a regex pattern that can check if a JavaScript source is syntactically correct. This is mostly due to regex’ inherent inability to deal with nested structures of arbitrary depth. Regular expressions are inherently non-recursive. XML and JSON are nested structures, so is the source code of many programming languages. Palindromes — words that read the same forwards and backwards, like racecar — are another example: a very simple form of nested structure. Each character is opening or closing a nesting level. You can construct patterns that will match nested structures up to a certain depth, but you can’t write a pattern that matches arbitrary depth nesting. Nested structures often turn out to be not regular. If you’re interested in computation theory and classifications of languages — that is, sets of strings — have a glimpse at the Chomsky Hierarchy, Formal Grammars and Formal Languages.

Let me conclude with a word of caution. I sometimes see attempts trying to use regular expressions not only for lexical analysis — the identification and extraction of tokens from a string — but also for semantic analysis, trying to interpret and validate each token’s meaning as well. While lexical analysis is a perfectly valid use case for regular expressions, attempting semantic validation more often than not leads towards creating another problem.

The plural of “regex” is “regrets”

Let me illustrate with an example. Suppose a string shall be an IPv4 address in decimal notation with dots separating the numbers.
A regular expression should validate that an input string is indeed an IPv4 address. The first attempt may look something like this:

([0-9]{1,3})\.([0-9]{1,3})\.([0-9]{1,3})\.([0-9]{1,3})

It matches four groups of one to three digits separated by a dot. Some readers may feel that this pattern falls short. It matches 111.222.333.444 for example, which is not a valid IP address.

If you now feel the urge to change the pattern so it tests for each group of digits that the encoded number be between 0 and 255 — with possible leading zeros — then you’re on your way to creating the second problem, and regrets. Trying to do that leads away from lexical analysis — identifying four groups of digits — to a semantic analysis verifying that the groups of digits translate to admissible numbers. This yields a dramatically more complex regular expression, examples of which can be found here. I’d recommend solving a problem like this by capturing each group of digits using a regex pattern, then converting the captured items to integers and validating their range in a separate logical step.

When working with regular expressions, the trade-off between complexity, maintainability, performance, and correctness should always be a conscious decision. After all, a regex pattern is as “write-only” as computing syntax can get. It is difficult to read regular expression patterns correctly, let alone debug and extend them. My advice is to embrace them as a powerful string processing tool, but to neither overestimate their possibilities, nor the ability of human beings to handle them. When in doubt, consider reaching for another hammer in the box.
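The recommended two-step approach, sketched in JavaScript (isIPv4 is a hypothetical helper name):

```javascript
// Step 1: lexical — capture four groups of one to three digits.
// Step 2: semantic — check each number's range in plain code.
const isIPv4 = (s) => {
  const m = s.match(/^([0-9]{1,3})\.([0-9]{1,3})\.([0-9]{1,3})\.([0-9]{1,3})$/);
  if (!m) return false;
  return m.slice(1).every((part) => Number(part) <= 255);
};

console.log(isIPv4("192.168.0.1"));     // true
console.log(isIPv4("111.222.333.444")); // false
```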
[ { "code": null, "e": 310, "s": 172, "text": "After reading this article you will have a solid understanding of what regular expressions are, what they can do, and what they can’t do." }, { "code": null, "e": 389, "s": 310, "text": "You’ll be able to judge when to use them and — more importantly — when not to." }, { "code": null, "e": 419, "s": 389, "text": "Let’s start at the beginning." }, { "code": null, "e": 538, "s": 419, "text": "On an abstract level a regular expression, regex for short, is a shorthand representation for a set. A set of strings." }, { "code": null, "e": 963, "s": 538, "text": "Say we have a list of all valid zip codes. Instead of keeping that long and unwieldy list around, it’s often more practical to have a short and precise pattern that completely describes that set. Whenever you want to check whether a string is a valid zip code, you can match it against the pattern. You’ll get a true or false result indicating whether the string belongs to the set of zip codes the regex pattern represents." }, { "code": null, "e": 1119, "s": 963, "text": "Let’s expand on the set of zip codes. A list of zip codes is finite, consists of rather short strings, and is not particularly challenging computationally." }, { "code": null, "e": 1461, "s": 1119, "text": "What about the set of strings that end in .csv? Can be quite useful when looking for data files. This set is infinite. You can’t make a list up front. And the only way to test for membership is to go to the end of the string and compare the last four characters. Regular expressions are a way of encoding such patterns in a standardized way." 
}, { "code": null, "e": 1557, "s": 1461, "text": "The following is a regular expression pattern that represents our set of strings ending in .csv" }, { "code": null, "e": 1567, "s": 1557, "text": "^.*\\.csv$" }, { "code": null, "e": 1813, "s": 1567, "text": "Let’s leave the mechanics of this particular pattern aside, and look at practicalities: a regex engine can test a pattern against an input string to see if it matches. The above pattern matches foo.csv, but does not match bar.txt or my_csv_file." }, { "code": null, "e": 1948, "s": 1813, "text": "Before you use regular expressions in your code, you can test them using an online regex evaluator, and experiment with a friendly UI." }, { "code": null, "e": 2157, "s": 1948, "text": "I like regex101.com: you can pick the flavor of the regex engine, and patterns are nicely decomposed for you, so you get a good understanding of what your pattern actually does. Regex patterns can be cryptic." }, { "code": null, "e": 2362, "s": 2157, "text": "I’d recommend you open regex101.com in another window or tab and experiment with the examples presented in this article interactively. You’ll get a much better feel for regex patterns this way, I promise." }, { "code": null, "e": 2505, "s": 2362, "text": "Regular expressions are useful in any scenario that benefits from full or partial pattern matches on strings. These are some common use cases:" }, { "code": null, "e": 2537, "s": 2505, "text": "verify the structure of strings" }, { "code": null, "e": 2580, "s": 2537, "text": "extract substrings form structured strings" }, { "code": null, "e": 2629, "s": 2580, "text": "search / replace / rearrange parts of the string" }, { "code": null, "e": 2656, "s": 2629, "text": "split a string into tokens" }, { "code": null, "e": 2721, "s": 2656, "text": "All of these come up regularly when doing data preparation work." }, { "code": null, "e": 2900, "s": 2721, "text": "A regular expression pattern is constructed from distinct building blocks. 
It may contain literals, character classes, boundary matchers, quantifiers, groups and the OR operator." }, { "code": null, "e": 2941, "s": 2900, "text": "Let’s dive in and look at some examples." }, { "code": null, "e": 3154, "s": 2941, "text": "The most basic building block in a regular expression is a character a.k.a. literal. Most characters in a regex pattern do not have a special meaning, they simply match themselves. Consider the following pattern:" }, { "code": null, "e": 3184, "s": 3154, "text": "I am a harmless regex pattern" }, { "code": null, "e": 3407, "s": 3184, "text": "None of the characters in this pattern has special meaning. Thus each character of the pattern matches itself. Therefore there is only one string that matches this pattern, and it is identical to the pattern string itself." }, { "code": null, "e": 3627, "s": 3407, "text": "What are the characters that do have special meaning? The following list shows characters that have special meaning in a regular expression. They must be escaped by a backslash if they are meant to represent themselves." 
}, { "code": null, "e": 3659, "s": 3627, "text": "Consider the following pattern:" }, { "code": null, "e": 3667, "s": 3659, "text": "\\+21\\.5" }, { "code": null, "e": 3821, "s": 3667, "text": "The pattern consists of literals only — the + has special meaning and has been escaped, so has the .— and thus the pattern matches only one string: +21.5" }, { "code": null, "e": 3927, "s": 3821, "text": "Sometimes it’s necessary to refer to some non-printable character like the tab character ⇥ or a newline ↩" }, { "code": null, "e": 3982, "s": 3927, "text": "It’s best to use the proper escape sequences for them:" }, { "code": null, "e": 4058, "s": 3982, "text": "If you need to match a line break, they usually come in one of two flavors:" }, { "code": null, "e": 4105, "s": 4058, "text": "\\n often referred to as the unix-style newline" }, { "code": null, "e": 4157, "s": 4105, "text": "\\r\\n often referred to as the windows-style newline" }, { "code": null, "e": 4248, "s": 4157, "text": "To catch both possibilities you can match on \\r?\\n which means: optional \\r followed by \\n" }, { "code": null, "e": 4443, "s": 4248, "text": "Sometimes you have to match characters that are best expressed by using their Unicode index. Sometimes a character simply cannot be typed— like control characters such as ASCII NUL, ESC, VT etc." }, { "code": null, "e": 4627, "s": 4443, "text": "Sometimes your programming language simply does not support putting certain characters into patterns. Characters outside the BMP, such as 𝄞 or emojis are often not supported verbatim." }, { "code": null, "e": 4846, "s": 4627, "text": "In many regex engines — such as Java, JavaScript, Python, and Ruby — you can use the \\uHexIndex escape syntax to match any character by its Unicode index. 
Say we want to match the symbol for natural numbers: N - U+2115" }, { "code": null, "e": 4893, "s": 4846, "text": "The pattern to match this character is: \\u2115" }, { "code": null, "e": 4991, "s": 4893, "text": "Other engines often provide an equivalent escape syntax. In Go, you would use \\x{2115} to match N" }, { "code": null, "e": 5251, "s": 4991, "text": "Unicode support and escape syntax varies across engines. If you plan on matching technical symbols, musical symbols, or emojis — especially outside the BMP — check the documentation of the regex engine you use to be sure of adequate support for your use-case." }, { "code": null, "e": 5385, "s": 5251, "text": "Sometimes a pattern requires consecutive characters to be escaped as literals. Say it’s supposed to match the following string: +???+" }, { "code": null, "e": 5419, "s": 5385, "text": "The pattern would look like this:" }, { "code": null, "e": 5430, "s": 5419, "text": "\\+\\?\\?\\?\\+" }, { "code": null, "e": 5519, "s": 5430, "text": "The need to escape every character as literal makes it harder to read and to understand." }, { "code": null, "e": 5799, "s": 5519, "text": "Depending on your regex engine, there might be a way to start and end a literal section in your pattern. Check your docs. In Java and Perl sequences of characters that should be interpreted literally can be enclosed by \\Q and \\E. The following pattern is equivalent to the above:" }, { "code": null, "e": 5809, "s": 5799, "text": "\\Q+???+\\E" }, { "code": null, "e": 5969, "s": 5809, "text": "Escaping parts of a pattern can also be useful if it is constructed from parts, some of which are to be interpreted literally, like user-supplied search words." }, { "code": null, "e": 6156, "s": 5969, "text": "If your regex engine does not have this feature, the ecosystem often provides a function to escape all characters with special meaning from a pattern string, such as lodash escapeRegExp." 
}, { "code": null, "e": 6280, "s": 6156, "text": "The pipe character | is the selection operator. It matches alternatives. Suppose a pattern should match the strings 1 and 2" }, { "code": null, "e": 6318, "s": 6280, "text": "The following pattern does the trick:" }, { "code": null, "e": 6322, "s": 6318, "text": "1|2" }, { "code": null, "e": 6396, "s": 6322, "text": "The patterns left and right of the operator are the allowed alternatives." }, { "code": null, "e": 6457, "s": 6396, "text": "The following pattern matches William Turner and Bill Turner" }, { "code": null, "e": 6484, "s": 6457, "text": "William Turner|Bill Turner" }, { "code": null, "e": 6678, "s": 6484, "text": "The second part of the alternatives is consistently Turner. Would be convenient to put the alternatives William and Bill up front, and mention Turner only once. The following pattern does that:" }, { "code": null, "e": 6700, "s": 6678, "text": "(William|Bill) Turner" }, { "code": null, "e": 6766, "s": 6700, "text": "It looks more readable. It also introduces a new concept: Groups." }, { "code": null, "e": 6936, "s": 6766, "text": "You can group sub-patterns in sections enclosed in round brackets. They group the contained expressions into a single unit. Grouping parts of a pattern has several uses:" }, { "code": null, "e": 6982, "s": 6936, "text": "simplify regex notation, making intent clerer" }, { "code": null, "e": 7019, "s": 6982, "text": "apply quantifiers to sub-expressions" }, { "code": null, "e": 7056, "s": 7019, "text": "extract sub-strings matching a group" }, { "code": null, "e": 7093, "s": 7056, "text": "replace sub-strings matching a group" }, { "code": null, "e": 7150, "s": 7093, "text": "Let’s look at a regex with a group:(William|Bill) Turner" }, { "code": null, "e": 7311, "s": 7150, "text": "Groups are sometimes referred to as “capturing groups” because in case of a match, each group’s matched sub-string is captured, and is available for extraction." 
}, { "code": null, "e": 7607, "s": 7311, "text": "How captured groups are made available depends on the API you use. In JavaScript, calling \"my string\".match(/pattern/) returns an array of matches. The first item is the entire matched string and subsequent items are the sub-strings matching pattern groups in order of appearance in the pattern." }, { "code": null, "e": 7990, "s": 7607, "text": "Consider a string identifying a chess board field. Fields on a chess board can be identified as A1-A8 for the first column, B1-B8 for the second column and so on until H1-H8 for the last column. Suppose a string containing this notation should be validated and the components (the letter and the digit) extracted using capture groups. The following regular expression would do that." }, { "code": null, "e": 8025, "s": 7990, "text": "(A|B|C|D|E|F|G|H)(1|2|3|4|5|6|7|8)" }, { "code": null, "e": 8169, "s": 8025, "text": "While the above regular expression is valid and does the job, it is somewhat clunky. This one works just as well, and it is a bit more concise:" }, { "code": null, "e": 8184, "s": 8169, "text": "([A-H])([1-8])" }, { "code": null, "e": 8266, "s": 8184, "text": "This sure looks more concise. But it introduces a new concept: Character Classes." }, { "code": null, "e": 8561, "s": 8266, "text": "Character classes are used to define a set of allowed characters. The set of allowed characters is put in square brackets, and each allowed character is listed. The character class [abcdef] is equivalent to (a|b|c|d|e|f). Since the class contains alternatives, it matches exactly one character." }, { "code": null, "e": 8728, "s": 8561, "text": "The pattern [ab][cd] matches exactly 4 strings ac, ad, bc, and bd. It does not match ab, the first character matches, but the second character must be either c or d ." }, { "code": null, "e": 8823, "s": 8728, "text": "Suppose a pattern should match a two digit code. 
A pattern to match this could look like this:" }, { "code": null, "e": 8848, "s": 8823, "text": "[0123456789][0123456789]" }, { "code": null, "e": 8923, "s": 8848, "text": "This pattern matches all 100 two digit strings in the range from 00 to 99." }, { "code": null, "e": 9119, "s": 8923, "text": "It is often tedious and error-prone to list all possible characters in a character class. Consecutive characters can be included in a character class as ranges using the dash operator: [0-9][0-9]" }, { "code": null, "e": 9361, "s": 9119, "text": "Characters are ordered by a numeric index— in 2019 that is almost always the Unicode index. If you’re working with numbers, Latin characters and basic punctuation, you can instead look at the much smaller historical subset of Unicode: ASCII." }, { "code": null, "e": 9523, "s": 9361, "text": "The digits zero through nine are encoded sequentially through code-points: U+0030 for 0 to code point U+0039 for 9, so a character set of [0–9] is a valid range." }, { "code": null, "e": 9749, "s": 9523, "text": "Lower case and upper case letters of the Latin alphabet are encoded consecutively as well, so character classes for alphabetic characters are often seen too. The following character set matches any lower case Latin character:" }, { "code": null, "e": 9755, "s": 9749, "text": "[a-z]" }, { "code": null, "e": 9905, "s": 9755, "text": "You can define multiple ranges within the same character class. The following character class matches all lower case and upper case Latin characters:" }, { "code": null, "e": 9914, "s": 9905, "text": "[A-Za-z]" }, { "code": null, "e": 9991, "s": 9914, "text": "You might get the impression that the above pattern could be abbreviated to:" }, { "code": null, "e": 9997, "s": 9991, "text": "[A-z]" }, { "code": null, "e": 10144, "s": 9997, "text": "That is a valid character class, but it matches not only A-Z and a-z, it also matches all characters defined between Z and a, such as [, \\, and ^." 
}, { "code": null, "e": 10394, "s": 10144, "text": "If you’re tearing your hair out cursing the stupidity of the people who defined ASCII and introduced this mind-boggling discontinuity, hold your horses for a bit. ASCII was defined at a time when computing capacity was much more precious than today." }, { "code": null, "e": 10460, "s": 10394, "text": "Look at A (hex: 0x41, bin: 0100 0001) and a (hex: 0x61, bin: 0110 0001)" }, { "code": null, "e": 10735, "s": 10460, "text": "How do you convert between upper and lower case? You flip one bit. That is true for the entire alphabet. ASCII is optimized to simplify case conversion. The people defining ASCII were very thoughtful. Some desirable qualities had to be sacrificed for others. You’re welcome." }, { "code": null, "e": 11028, "s": 10735, "text": "You might wonder how to put the - character into a character class. After all, it is used to define ranges. Most engines interpret the - character literally if placed as the first or last character in the class: [-+0-9] or [+0-9-]. A few engines require escaping with a backslash: [\-+0-9]" }, { "code": null, "e": 11343, "s": 11028, "text": "Sometimes it’s useful to define a character class that matches most characters, except for a few defined exceptions. If a character class definition begins with a ^, the set of listed characters is inverted. As an example, the following class allows any character as long as it’s neither a digit nor an underscore." }, { "code": null, "e": 11351, "s": 11343, "text": "[^0-9_]" }, { "code": null, "e": 11541, "s": 11351, "text": "Please note that the ^ character is interpreted as a literal if it is not the first character of the class, as in [f^o], and that it is a boundary matcher if used outside character classes." }, { "code": null, "e": 11774, "s": 11541, "text": "Some character classes are used so frequently that there are shorthand notations defined for them. Consider the character class [0-9]. 
It matches any digit character and is used so often that there is a mnemonic notation for it: \\d." }, { "code": null, "e": 11907, "s": 11774, "text": "The following list shows character classes with the most common shorthand notations, likely to be supported by any regex engine you use." }, { "code": null, "e": 12184, "s": 11907, "text": "Most engines come with a comprehensive list of predefined character classes matching certain blocks or categories of the Unicode standard, punctuation, specific alphabets, etc. These additional character classes are often specific to the engine at hand, and not very portable." }, { "code": null, "e": 12356, "s": 12184, "text": "The most ubiquitous predefined character class is the dot, and it deserves a small section on its own. It matches any character except for line terminators like \\r and \\n." }, { "code": null, "e": 12441, "s": 12356, "text": "The following pattern matches any three character string ending with a lower case x:" }, { "code": null, "e": 12445, "s": 12441, "text": "..x" }, { "code": null, "e": 12645, "s": 12445, "text": "In practice the dot is often used to create “anything might go in here” sections in a pattern. It is frequently combined with a quantifier and .* is used to match “anything” or “don’t care” sections." }, { "code": null, "e": 12820, "s": 12645, "text": "Please note that the . character loses its special meaning when used inside a character class. The character class [.,] simply matches two characters, the dot and the comma." }, { "code": null, "e": 12971, "s": 12820, "text": "Depending on the regex engine you use, you may be able to set the dotAll execution flag, in which case . will match anything including line terminators." }, { "code": null, "e": 13289, "s": 12971, "text": "Boundary matchers — also known as “anchors” — do not match a character as such; they match a boundary. They match the positions between characters, if you will. The most common anchors are ^ and $. 
They match the beginning and end of a line respectively. The following table shows the most commonly supported anchors." }, { "code": null, "e": 13506, "s": 13289, "text": "Consider a search operation for digits on a multi-line text. The pattern [0-9] finds every digit in the text, no matter where it is located. The pattern ^[0-9] finds every digit that is the first character on a line." }, { "code": null, "e": 13552, "s": 13506, "text": "The same idea applies to line endings with $." }, { "code": null, "e": 13825, "s": 13552, "text": "The \\A and \\Z or \\z anchors are useful when matching multi-line strings. They anchor to the beginning and end of the entire input. The upper case \\Z variant is tolerant of trailing newlines and matches just before that, effectively discarding any trailing newline in the match." }, { "code": null, "e": 13937, "s": 13825, "text": "The \\A and \\Z anchors are supported by most mainstream regex engines, with the notable exception of JavaScript." }, { "code": null, "e": 14081, "s": 13937, "text": "Suppose the requirement is to check whether a text is a two-line record specifying a chess position. This is what the input string looks like:" }, { "code": null, "e": 14097, "s": 14081, "text": "Column: F\nRow: 7" }, { "code": null, "e": 14148, "s": 14097, "text": "The following pattern matches the above structure:" }, { "code": null, "e": 14181, "s": 14148, "text": "\\AColumn: [A-H]\\r?\\nRow: [1-8]\\Z" }, { "code": null, "e": 14356, "s": 14181, "text": "The \\b anchor matches the edge of any alphanumeric sequence. This is useful if you want to do “whole word” matches. The following pattern looks for a standalone upper case I." }, { "code": null, "e": 14362, "s": 14356, "text": "\\bI\\b" }, { "code": null, "e": 14614, "s": 14362, "text": "The pattern does not match the first letter of Illinois because there is no word boundary to the right. 
The next letter is a word letter — defined by the character class \\w as [a-zA-Z0-9_] — and not a non-word letter, which would constitute a boundary." }, { "code": null, "e": 14732, "s": 14614, "text": "Let’s replace Illinois with I!linois. The exclamation point is not a word character, and thus constitutes a boundary." }, { "code": null, "e": 14941, "s": 14732, "text": "The somewhat esoteric non-word boundary \\B is the negation of \\b. It matches any position that is not matched by \\b. It matches every position between characters within white space and alphanumeric sequences." }, { "code": null, "e": 15202, "s": 14941, "text": "Some regex engines support the \\G boundary matcher. It is useful when using regular expressions programmatically, and a pattern is applied repeatedly to a string, trying to find all matches of the pattern in a loop. It anchors to the position of the last match found." }, { "code": null, "e": 15485, "s": 15202, "text": "Any literal or character group matches the occurrence of exactly one character. The pattern [0-9][0-9] matches exactly two digits. Quantifiers help specify the expected number of matches of a pattern. They are notated using curly braces. The following is equivalent to [0-9][0-9]" }, { "code": null, "e": 15494, "s": 15485, "text": "[0-9]{2}" }, { "code": null, "e": 15715, "s": 15494, "text": "The basic notation can be extended to provide upper and lower bounds. Say it’s necessary to match between two and six digits. The exact number varies, but it must be between two and six. The following notation does that:" }, { "code": null, "e": 15726, "s": 15715, "text": "[0-9]{2,6}" }, { "code": null, "e": 15909, "s": 15726, "text": "The upper bound is optional; if omitted, any number of occurrences equal to or greater than the lower bound is acceptable. The following sample matches two or more consecutive digits." 
}, { "code": null, "e": 15919, "s": 15909, "text": "[0-9]{2,}" }, { "code": null, "e": 16022, "s": 15919, "text": "There are some predefined shorthands for common quantifiers that are very frequently used in practice." }, { "code": null, "e": 16157, "s": 16022, "text": "The ? quantifier is equivalent to {0, 1}, which means: optional single occurrence. The preceding pattern may not match, or match once." }, { "code": null, "e": 16237, "s": 16157, "text": "Let’s find integers, optionally prefixed with a plus or minus sign: [-+]?\\d{1,}" }, { "code": null, "e": 16315, "s": 16237, "text": "The + quantifier is equivalent to {1,}, which means: at least one occurrence." }, { "code": null, "e": 16434, "s": 16315, "text": "We can modify our integer matching pattern from above to be more idiomatic by replacing {1,} with + and we get: [-+]?\\d+" }, { "code": null, "e": 16622, "s": 16434, "text": "The * quantifier is equivalent to {0,}, which means: zero or more occurrences. You’ll see it very often in conjunction with the dot as .*, which means: any character, don’t care how often." }, { "code": null, "e": 16766, "s": 16622, "text": "Let’s match a comma-separated list of integers. Whitespace between entries is not allowed, and at least one integer must be present: \\d+(,\\d+)*" }, { "code": null, "e": 16868, "s": 16766, "text": "We’re matching an integer followed by any number of groups containing a comma followed by an integer." }, { "code": null, "e": 17099, "s": 16868, "text": "Suppose the requirement is to match the domain part from an http URL in a capture group. The following seems like a good idea: match the protocol, then capture the domain, then an optional path. The idea translates roughly to this:" }, { "code": null, "e": 17113, "s": 17099, "text": "http://(.*)/?" 
}, { "code": null, "e": 17242, "s": 17113, "text": "If you’re using an engine that uses /regex/ notation like JavaScript, you have to escape the forward slashes: http:\\/\\/(.*)\\/?.*" }, { "code": null, "e": 17422, "s": 17242, "text": "It matches the protocol, captures what comes after the protocol as domain and it allows for an optional slash and some arbitrary text after that, which would be the resource path." }, { "code": null, "e": 17505, "s": 17422, "text": "Strangely enough, the following is captured by the group given some input strings:" }, { "code": null, "e": 17668, "s": 17505, "text": "The results are somewhat surprising, as the pattern was designed to capture the domain part only, but it seems to be capturing everything till the end of the URL." }, { "code": null, "e": 17833, "s": 17668, "text": "This happens because each quantifier encountered in the pattern tries to match as much of the string as possible. The quantifiers are called greedy for this reason." }, { "code": null, "e": 17891, "s": 17833, "text": "Let’s check the matching behaviour of: http://(.*)/?.*" }, { "code": null, "e": 18399, "s": 17891, "text": "The greedy * in the capturing group is the first encountered quantifier. The . character class it applies to matches any character, so the quantifier extends to the end of the string. Thus the capture group captures everything. But wait, you say, there’s the /?.* part at the end. Well, yes, and it matches what’s left of the string — nothing, a.k.a. the empty string — perfectly. The slash is optional, and is followed by zero or more characters. The empty string fits. The entire pattern matches just fine." }, { "code": null, "e": 18658, "s": 18399, "text": "Greedy is the default, but not the only flavor of quantifiers. Each quantifier has a reluctant version that matches the least possible amount of characters. The greedy versions of the quantifiers are converted to reluctant versions by appending a ? to them." 
}, { "code": null, "e": 18719, "s": 18658, "text": "The following table gives the notations for all quantifiers." }, { "code": null, "e": 19002, "s": 18719, "text": "The quantifier {n} is equivalent in both greedy and reluctant versions. For the others the number of matched characters may vary. Let’s revisit the example from above and change the capture group to match as little as possible, in the hopes of getting the domain name captured properly." }, { "code": null, "e": 19019, "s": 19002, "text": "http://(.*?)/?.*" }, { "code": null, "e": 19349, "s": 19019, "text": "Using this pattern, nothing — more precisely the empty string — is captured by the group. Why is that? The capture group now captures as little as possible: nothing. The (.*?) captures nothing, the /? matches nothing, and the .* matches the entirety of what’s left of the string. So again, this pattern does not work as intended." }, { "code": null, "e": 19573, "s": 19349, "text": "So far the capture group matches too little or too much. Let’s revert to the greedy quantifier, but disallow the slash character in the domain name, and also require that the domain name be at least one character long." }, { "code": null, "e": 19592, "s": 19573, "text": "http://([^/]+)/?.*" }, { "code": null, "e": 19791, "s": 19592, "text": "This pattern greedily captures one or more non-slash characters after the protocol as the domain, and if an optional slash finally occurs, it may be followed by any number of characters in the path." }, { "code": null, "e": 20236, "s": 19791, "text": "Both greedy and reluctant quantifiers imply some runtime overhead. If only a few such quantifiers are present, there are no issues. 
But if multiple nested groups are each quantified as greedy or reluctant, determining the longest or shortest possible matches is a nontrivial operation that implies running back and forth on the input string, adjusting the length of each quantifier’s match to determine whether the expression as a whole matches." }, { "code": null, "e": 20462, "s": 20236, "text": "Pathological cases of catastrophic backtracking may occur. If performance or malicious input is a concern, it’s best to prefer reluctant quantifiers and also have a look at a third kind of quantifiers: possessive quantifiers." }, { "code": null, "e": 20785, "s": 20462, "text": "Possessive quantifiers, if supported by your engine, act much like greedy quantifiers, with the distinction that they do not support backtracking. They try to match as many characters as possible, and once they do, they never yield any matched characters to accommodate possible matches for any other parts of the pattern." }, { "code": null, "e": 20850, "s": 20785, "text": "They are notated by appending a + to the base greedy quantifier." }, { "code": null, "e": 20984, "s": 20850, "text": "They are a fast-performing version of “greedy-like” quantifiers, which makes them a good choice for performance-sensitive operations." }, { "code": null, "e": 21121, "s": 20984, "text": "Let’s look at them in the PHP engine. First, let’s look at simple greedy matches. Let’s match some digits, followed by a nine: ([0-9]+)9" }, { "code": null, "e": 21294, "s": 21121, "text": "Matched against the input string 123456789, the greedy quantifier will first match the entire input, then give back the 9, so the rest of the pattern has a chance to match." }, { "code": null, "e": 21504, "s": 21294, "text": "Now, when we replace the greedy with the possessive quantifier, it will match the entire input, then refuse to give back the 9 to avoid backtracking, and that will cause the entire pattern to not match at all." 
}, { "code": null, "e": 21612, "s": 21504, "text": "When would you want possessive behaviour? When you know that you always want the longest conceivable match." }, { "code": null, "e": 21813, "s": 21612, "text": "Let’s say you want to extract the filename part of filesystem paths. Let’s assume / as the path separator. Then what we effectively want is the last bit of the string after the last occurrence of a /." }, { "code": null, "e": 22078, "s": 21813, "text": "A possessive pattern works well here, because we always want to consume all folder names before capturing the file name. There is no need for the part of the pattern consuming folder names to ever give characters back. A corresponding pattern might look like this:" }, { "code": null, "e": 22100, "s": 22078, "text": "\\/?(?:[^\\/]+\\/)++(.*)" }, { "code": null, "e": 22175, "s": 22100, "text": "Note: using PHP /regex/ notation here, so the forward slashes are escaped." }, { "code": null, "e": 22578, "s": 22175, "text": "We want to allow absolute paths, so we allow the input to start with an optional forward slash. We then possessively consume folder names consisting of a series of non-slash characters followed by a slash. I’ve used a non-capturing group for that — so it’s notated as (?:pattern) instead of just (pattern). Anything that is left over after the last slash is what we capture into a group for extraction." }, { "code": null, "e": 22840, "s": 22578, "text": "Non-capturing groups match exactly the way normal groups do. However, they do not make their matched content available. If there’s no need to capture the content, they can be used to improve matching performance. Non-capturing groups are written as: (?:pattern)" }, { "code": null, "e": 23032, "s": 22840, "text": "Suppose we want to verify that a hex-string is valid. It needs to consist of an even number of hexadecimal digits each between 0–9 or a-f. 
The following expression does the job using a group:" }, { "code": null, "e": 23052, "s": 23032, "text": "([0-9a-f][0-9a-f])+" }, { "code": null, "e": 23288, "s": 23052, "text": "Since the point of the group in the pattern is to make sure that the digits come in pairs, and the digits actually matched are not of any relevance, the group may just as well be replaced with the faster performing non-capturing group:" }, { "code": null, "e": 23310, "s": 23288, "text": "(?:[0-9a-f][0-9a-f])+" }, { "code": null, "e": 23482, "s": 23310, "text": "There is also a fast-performing version of a non-capturing group, that does not support backtracking. It is called the “independent non-capturing group” or “atomic group”." }, { "code": null, "e": 23511, "s": 23482, "text": "It is written as (?>pattern)" }, { "code": null, "e": 23694, "s": 23511, "text": "An atomic group is a non-capturing group that can be used to optimize pattern matching for speed. Typically it is supported by regex engines that also support possessive quantifiers." }, { "code": null, "e": 23931, "s": 23694, "text": "Its behavior is also similar to possessive quantifiers: once an atomic group has matched a part of the string, that first match is permanent. The group will never try to re-match in another way to accommodate other parts of the pattern." }, { "code": null, "e": 23982, "s": 23931, "text": "a(?>bc|b)c matches abcc but it does not match abc." }, { "code": null, "e": 24242, "s": 23982, "text": "The atomic group’s first successful match is on bc and it stays that way. A normal group would re-match during backtracking to accommodate the c at the end of the pattern for a successful match. But an atomic group’s first match is permanent, it won’t change." }, { "code": null, "e": 24353, "s": 24242, "text": "This is useful if you want to match as fast as possible, and don’t want any backtracking to take place anyway." 
}, { "code": null, "e": 24493, "s": 24353, "text": "Say we’re matching the file name part of a path. We can match an atomic group of any characters followed by a slash. Then capture the rest:" }, { "code": null, "e": 24506, "s": 24493, "text": "(?>.*\\/)(.*)" }, { "code": null, "e": 24581, "s": 24506, "text": "Note: using PHP /regex/ notation here, so the forward slashes are escaped." }, { "code": null, "e": 24816, "s": 24581, "text": "A normal group would have done the job just as well, but eliminating the possibility of backtracking improves performance. If you’re matching millions of inputs against non-trivial regex patterns, you’ll start noticing the difference." }, { "code": null, "e": 24952, "s": 24816, "text": "It also improves resilience against malicious input designed to DoS-attack a service by triggering catastrophic backtracking scenarios." }, { "code": null, "e": 25309, "s": 24952, "text": "Sometimes it’s useful to refer to something that matched earlier in the string. Suppose a string value is only valid if it starts and ends with the same letter. The words “alpha”, “radar”, “kick”, “level” and “stars” are examples. It is possible to capture part of a string in a group and refer to that group later in the pattern: a back reference." }, { "code": null, "e": 25604, "s": 25309, "text": "Back references in a regex pattern are notated using \\n syntax, where n is the number of the capture group. The numbering is left to right starting with 1. If groups are nested, they are numbered in the order their opening parenthesis is encountered. Group 0 always means the entire expression." 
}, { "code": null, "e": 25713, "s": 25604, "text": "The following pattern matches inputs that have at least 3 characters and start and end with the same letter:" }, { "code": null, "e": 25728, "s": 25713, "text": "([a-zA-Z]).+\\1" }, { "code": null, "e": 25909, "s": 25728, "text": "In words: a lower or upper case letter — that letter is captured into a group — followed by any non-empty string, followed by the letter we captured at the beginning of the match." }, { "code": null, "e": 26094, "s": 25909, "text": "Let’s expand a bit. An input string is matched if it contains any alphanumeric sequence — think: word — more than once. Word boundaries are used to ensure that whole words are matched." }, { "code": null, "e": 26112, "s": 26094, "text": "\\b(\\w+)\\b.*\\b\\1\\b" }, { "code": null, "e": 26410, "s": 26112, "text": "Regular expressions are useful in search and replace operations. The typical use case is to look for a sub-string that matches a pattern and replace it with something else. Most APIs using regular expressions allow you to reference capture groups from the search pattern in the replacement string." }, { "code": null, "e": 26490, "s": 26410, "text": "These back references effectively allow you to rearrange parts of the input string." }, { "code": null, "e": 26681, "s": 26490, "text": "Consider the following scenario: the input string contains an A-Z character prefix followed by an optional space followed by a 3–6 digit number. Strings like A321, B86562, F 8753, and L 287." }, { "code": null, "e": 26805, "s": 26681, "text": "The task is to convert it to another string consisting of the number, followed by a dash, followed by the character prefix." }, { "code": null, "e": 26875, "s": 26805, "text": "Input Output\nA321 321-A\nB86562 86562-B\nF 8753 8753-F\nL 287 287-L" }, { "code": null, "e": 27021, "s": 26875, "text": "The first step to transform one string to the other is to capture each part of the string in a capture group. 
The search pattern looks like this:" }, { "code": null, "e": 27044, "s": 27021, "text": "([A-Z])\\s?([0-9]{3,6})" }, { "code": null, "e": 27496, "s": 27044, "text": "It captures the prefix into a group, allows for an optional space character, then captures the digits into a second group. Back references in a replacement string are notated using $n syntax, where n is the number of the capture group. The replacement string for this operation should first reference the group containing the numbers, then a literal dash, then the first group containing the letter prefix. This gives the following replacement string:" }, { "code": null, "e": 27502, "s": 27496, "text": "$2-$1" }, { "code": null, "e": 27706, "s": 27502, "text": "Thus A321 is matched by the search pattern, putting A into $1 and 321 into $2. The replacement string is arranged to yield the desired result: the number comes first, then a dash, then the letter prefix." }, { "code": null, "e": 27861, "s": 27706, "text": "Please note that, since the $ character carries special meaning in a replacement string, it must be escaped as $$ if it should be inserted as a character." }, { "code": null, "e": 28131, "s": 27861, "text": "This kind of regex-enabled search and replace is often offered by text editors. Suppose you have a list of paths in your editor, and the task at hand is to prefix the file name of each file with an underscore. The path /foo/bar/file.txt should become /foo/bar/_file.txt" }, { "code": null, "e": 28183, "s": 28131, "text": "With all we learned so far, we can do it like this:" }, { "code": null, "e": 28305, "s": 28183, "text": "It is sometimes useful to assert that a string has a certain structure, without actually matching it. How is that useful?" 
}, { "code": null, "e": 28398, "s": 28305, "text": "Let’s write a pattern that matches all words that are followed by a word beginning with an a." }, { "code": null, "e": 28522, "s": 28398, "text": "Let’s try \\b(\\w+)\\s+a: it anchors to a word boundary, and matches word characters until it sees some space followed by an a." }, { "code": null, "e": 28767, "s": 28522, "text": "In the above example, we match love, swat, fly, and to, but fail to capture the an before ant. This is because the a starting an has been consumed as part of the match of to. We’ve scanned past that a, and the word an has no chance of matching." }, { "code": null, "e": 28893, "s": 28767, "text": "It would be great if there were a way to assert properties of the first character of the next word without actually consuming it." }, { "code": null, "e": 28994, "s": 28893, "text": "Constructs asserting existence, but not consuming the input, are called “lookahead” and “lookbehind”." }, { "code": null, "e": 29086, "s": 28994, "text": "Lookaheads are used to assert that a pattern matches ahead. They are written as (?=pattern)" }, { "code": null, "e": 29119, "s": 29086, "text": "Let’s use it to fix our pattern:" }, { "code": null, "e": 29135, "s": 29119, "text": "\\b(\\w+)(?=\\s+a)" }, { "code": null, "e": 29278, "s": 29135, "text": "We’ve put the space and initial a of the next word into a lookahead, so when scanning a string for matches, they are checked but not consumed." }, { "code": null, "e": 29375, "s": 29278, "text": "A negative lookahead asserts that its pattern does not match ahead. It is notated as (?!pattern)" }, { "code": null, "e": 29442, "s": 29375, "text": "Let’s find all words not followed by a word that starts with an a." }, { "code": null, "e": 29460, "s": 29442, "text": "\\b(\\w+)\\b(?!\\s+a)" }, { "code": null, "e": 29528, "s": 29460, "text": "We match whole words which are not followed by some space and an a." 
}, { "code": null, "e": 29829, "s": 29528, "text": "The lookbehind serves the same purpose as the lookahead, but it applies to the left of the current position, not to the right. Many regex engines limit the kind of pattern you can use in a lookbehind, because applying a pattern backwards is something that they are not optimized for. Check your docs!" }, { "code": null, "e": 29869, "s": 29829, "text": "A lookbehind is written as (?<=pattern)" }, { "code": null, "e": 30003, "s": 29869, "text": "It asserts the existence of something before the current position. Let’s find all words that come after a word ending with an r or t." }, { "code": null, "e": 30020, "s": 30003, "text": "(?<=[rt]\\s)(\\w+)" }, { "code": null, "e": 30137, "s": 30020, "text": "We assert that there is an r or t followed by a space, then we capture the sequence of word characters that follows." }, { "code": null, "e": 30256, "s": 30137, "text": "There’s also a negative lookbehind asserting the non-existence of a pattern to the left. It is written as (?<!pattern)" }, { "code": null, "e": 30359, "s": 30256, "text": "Let’s invert the words found: We want to match all words that come after words not ending with r or t." }, { "code": null, "e": 30378, "s": 30359, "text": "(?<![rt]\\s)\\b(\\w+)" }, { "code": null, "e": 30518, "s": 30378, "text": "We match all words by \\b(\\w+), and by prepending (?<![rt]\\s) we ensure that any words we match are not preceded by a word ending in r or t." }, { "code": null, "e": 30657, "s": 30518, "text": "If you’re working with an API that allows you to split a string by pattern, it is often useful to keep lookaheads and lookbehinds in mind." }, { "code": null, "e": 30882, "s": 30657, "text": "A regex split typically uses the pattern as a delimiter, and removes the delimiter from the parts. Putting lookahead or lookbehind sections in a delimiter makes it match without removing the parts that were merely looked at." 
}, { "code": null, "e": 31045, "s": 30882, "text": "Suppose you have a string delimited by :, in which some of the parts are labels consisting of alphabetic characters, and some are time stamps in the format HH:mm." }, { "code": null, "e": 31097, "s": 31045, "text": "Let’s look at the input string time_a:9:32:time_b:10:11" }, { "code": null, "e": 31170, "s": 31097, "text": "If we just split on :, we get the parts: [time_a, 9, 32, time_b, 10, 11]" }, { "code": null, "e": 31290, "s": 31170, "text": "Let’s say we want to improve by splitting only if the : has a letter on either side. The delimiter is now [a-z]:|:[a-z]" }, { "code": null, "e": 31410, "s": 31290, "text": "We get the parts: [time_, 9:32, ime_, 10:11] We’ve lost the adjacent characters, since they were part of the delimiter." }, { "code": null, "e": 31601, "s": 31410, "text": "If we refine the delimiter to use lookahead and lookbehind for the adjacent characters, their existence will be verified, but they won’t match as part of the delimiter: (?<=[a-z]):|:(?=[a-z])" }, { "code": null, "e": 31665, "s": 31601, "text": "Finally we get the parts we want: [time_a, 9:32, time_b, 10:11]" }, { "code": null, "e": 31857, "s": 31665, "text": "Most regex engines allow setting flags or modifiers to tune aspects of the pattern matching process. Be sure to familiarise yourself with the way your engine of choice handles such modifiers." }, { "code": null, "e": 31946, "s": 31857, "text": "They often make the difference between an impractically complex pattern and a trivial one." }, { "code": null, "e": 32148, "s": 31946, "text": "You can expect to find case (in-)sensitivity modifiers, anchoring options, full match vs. partial match mode, and a dotAll mode which lets the . character class match anything including line terminators." }, { "code": null, "e": 32185, "s": 32148, "text": "JavaScript, Python, Java, Ruby, .NET" }, { "code": null, "e": 32356, "s": 32185, "text": "Let’s look at JavaScript, for example. 
If you want case insensitive mode and only the first match found, you can use the i modifier, and make sure to omit the g modifier." }, { "code": null, "e": 32516, "s": 32356, "text": "Arriving at the end of this article, you may feel that all possible string parsing problems can be dealt with, once you get regular expressions under your belt." }, { "code": null, "e": 32526, "s": 32516, "text": "Well, no." }, { "code": null, "e": 32874, "s": 32526, "text": "This article introduces regular expressions as a shorthand notation for sets of strings. If you happen to have the exact regular expression for zip codes, you have a shorthand notation for the set of all strings representing valid zip codes. You can easily test an input string to check if it is an element of that set. There is a problem, however." }, { "code": null, "e": 32958, "s": 32874, "text": "There are many meaningful sets of strings for which there is no regular expression!" }, { "code": null, "e": 33137, "s": 32958, "text": "The set of valid JavaScript programs has no regex representation, for example. There will never be a regex pattern that can check if a JavaScript source is syntactically correct." }, { "code": null, "e": 33569, "s": 33137, "text": "This is mostly due to regex’ inherent inability to deal with nested structures of arbitrary depth. Regular expressions are inherently non-recursive. XML and JSON are nested structures, and so is the source code of many programming languages. Palindromes — words that read the same forwards and backwards, like racecar — are another, very simple form of nested structure. Each character is opening or closing a nesting level." }, { "code": null, "e": 33720, "s": 33569, "text": "You can construct patterns that will match nested structures up to a certain depth, but you can’t write a pattern that matches arbitrary depth nesting." }, { "code": null, "e": 33956, "s": 33720, "text": "Nested structures often turn out to be not regular. 
If you’re interested in computation theory and classifications of languages — that is, sets of strings — have a glimpse at the Chomsky Hierarchy, Formal Grammars and Formal Languages." }, { "code": null, "e": 34241, "s": 33956, "text": "Let me conclude with a word of caution. I sometimes see attempts trying to use regular expressions not only for lexical analysis — the identification and extraction of tokens from a string — but also for semantic analysis trying to interpret and validate each token’s meaning as well." }, { "code": null, "e": 34410, "s": 34241, "text": "While lexical analysis is a perfectly valid use case for regular expressions, attempting semantic validation more often than not leads towards creating another problem." }, { "code": null, "e": 34445, "s": 34410, "text": "The plural of “regex” is “regrets”" }, { "code": null, "e": 34480, "s": 34445, "text": "Let me illustrate with an example." }, { "code": null, "e": 34709, "s": 34480, "text": "Suppose a string shall be an IPv4 address in decimal notation with dots separating the numbers. A regular expression should validate that an input string is indeed an IPv4 address. The first attempt may look something like this:" }, { "code": null, "e": 34764, "s": 34709, "text": "([0-9]{1,3})\\.([0-9]{1,3})\\.([0-9]{1,3})\\.([0-9]{1,3})" }, { "code": null, "e": 34956, "s": 34764, "text": "It matches four groups of one to three digits separated by a dot. Some readers may feel that this pattern falls short. It matches 111.222.333.444 for example, which is not a valid IP address." }, { "code": null, "e": 35186, "s": 34956, "text": "If you now feel the urge to change the pattern so it tests for each group of digits that the encoded number be between 0 and 255 — with possible leading zeros — then you’re on your way to creating the second problem, and regrets." 
}, { "code": null, "e": 35367, "s": 35186, "text": "Trying to do that leads away from lexical analysis — identifying four groups of digits — to a semantic analysis verifying that the groups of digits translate to admissible numbers." }, { "code": null, "e": 35657, "s": 35367, "text": "This yields a dramatically more complex regular expression, examples of which is found here. I’d recommend solving a problem like this by capturing each group of digits using a regex pattern, then converting captured items to integers and validating their range in a separate logical step." }, { "code": null, "e": 35986, "s": 35657, "text": "When working with regular expressions, the trade-off between complexity, maintainability, performance, and correctness should always be a conscious decision. After all, a regex pattern is as “write-only” as computing syntax can get. It is difficult to read regular expression patterns correctly, let alone debug and extend them." }, { "code": null, "e": 36150, "s": 35986, "text": "My advice is to embrace them as a powerful string processing tool, but to neither overestimate their possibilities, nor the ability of human beings to handle them." } ]
Java Getting Started
Some PCs might have Java already installed.

To check if you have Java installed on a Windows PC, search in the start bar for Java or type the following in Command Prompt (cmd.exe):

If Java is installed, you will see something like this (depending on version):

If you do not have Java installed on your computer, you can download it for free at oracle.com.

Note: In this tutorial, we will write Java code in a text editor. However, it is possible to write Java in an Integrated Development Environment, such as IntelliJ IDEA, Netbeans or Eclipse, which are particularly useful when managing larger collections of Java files.

To install Java on Windows:

Go to "System Properties" (Can be found on Control Panel > System and Security > System > Advanced System Settings)
Click on the "Environment variables" button under the "Advanced" tab
Then, select the "Path" variable in System variables and click on the "Edit" button
Click on the "New" button and add the path where Java is installed, followed by \bin. By default, Java is installed in C:\Program Files\Java\jdk-11.0.1 (If nothing else was specified when you installed it). In that case, you will have to add a new path with: C:\Program Files\Java\jdk-11.0.1\bin Then, click "OK", and save the settings
At last, open Command Prompt (cmd.exe) and type java -version to see if Java is running on your machine

Write the following in the command line (cmd.exe):

If Java was successfully installed, you will see something like this (depending on version):

In Java, every application begins with a class name, and that class must match the filename.

Let's create our first Java file, called Main.java, which can be done in any text editor (like Notepad). The file should contain a "Hello World" message, which is written with the following code:

Main.java

public class Main {
   public static void main(String[] args) {
      System.out.println("Hello World");
   }
}

Don't worry if you don't understand the code above - we will discuss it in detail in later chapters. For now, focus on how to run the code above.

Save the code in Notepad as "Main.java". Open Command Prompt (cmd.exe), navigate to the directory where you saved your file, and type "javac Main.java":

This will compile your code. If there are no errors in the code, the command prompt will take you to the next line. Now, type "java Main" to run the file:

The output should read:

Congratulations! You have written and executed your first Java program.
[ { "code": null, "e": 44, "s": 0, "text": "Some PCs might have Java already installed." }, { "code": null, "e": 181, "s": 44, "text": "To check if you have Java installed on a Windows PC, search in the start bar for Java or type the following in Command Prompt (cmd.exe):" }, { "code": null, "e": 260, "s": 181, "text": "If Java is installed, you will see something like this (depending on version):" }, { "code": null, "e": 356, "s": 260, "text": "If you do not have Java installed on your computer, you can download it for free at oracle.com." }, { "code": null, "e": 624, "s": 356, "text": "Note: In this tutorial, we will write Java code in a text editor. However, it is possible to write Java in an Integrated Development Environment, such as IntelliJ IDEA, Netbeans or Eclipse, which are particularly useful when managing larger collections of Java files." }, { "code": null, "e": 652, "s": 624, "text": "To install Java on Windows:" }, { "code": null, "e": 1383, "s": 652, "text": "\nGo to \"System Properties\" (Can be found on Control Panel > \nSystem and Security > System > Advanced System Settings)\nClick on the \"Environment variables\" button under the \"Advanced\" tab\nThen, select the \"Path\" variable in System variables and click on the \"Edit\" \nbutton\nClick on the \"New\" button and add the path where Java is installed, \n followed by \\bin. By default, Java is installed in C:\\Program \n Files\\Java\\jdk-11.0.1 (If nothing else was specified when you installed it). 
\n In that case, You will have to add a new path with: C:\\Program \n Files\\Java\\jdk-11.0.1\\bin \n Then, click \"OK\", and save the settings\nAt last, open Command Prompt (cmd.exe) and type java -version to see if Java is \n running on your machine\n" }, { "code": null, "e": 1500, "s": 1383, "text": "Go to \"System Properties\" (Can be found on Control Panel > \nSystem and Security > System > Advanced System Settings)" }, { "code": null, "e": 1569, "s": 1500, "text": "Click on the \"Environment variables\" button under the \"Advanced\" tab" }, { "code": null, "e": 1654, "s": 1569, "text": "Then, select the \"Path\" variable in System variables and click on the \"Edit\" \nbutton" }, { "code": null, "e": 2005, "s": 1654, "text": "Click on the \"New\" button and add the path where Java is installed, \n followed by \\bin. By default, Java is installed in C:\\Program \n Files\\Java\\jdk-11.0.1 (If nothing else was specified when you installed it). \n In that case, You will have to add a new path with: C:\\Program \n Files\\Java\\jdk-11.0.1\\bin \n Then, click \"OK\", and save the settings" }, { "code": null, "e": 2112, "s": 2005, "text": "At last, open Command Prompt (cmd.exe) and type java -version to see if Java is \n running on your machine" }, { "code": null, "e": 2163, "s": 2112, "text": "Write the following in the command line (cmd.exe):" }, { "code": null, "e": 2256, "s": 2163, "text": "If Java was successfully installed, you will see something like this (depending on version):" }, { "code": null, "e": 2349, "s": 2256, "text": "In Java, every application begins with a class name, and that class must match the filename." }, { "code": null, "e": 2455, "s": 2349, "text": "Let's create our first Java file, called Main.java, which can be done in any text editor \n(like Notepad)." 
}, { "code": null, "e": 2547, "s": 2455, "text": "The file should contain a \"Hello World\" message, which is written with the \nfollowing code:" }, { "code": null, "e": 2557, "s": 2547, "text": "Main.java" }, { "code": null, "e": 2666, "s": 2557, "text": "public class Main {\n public static void main(String[] args) {\n System.out.println(\"Hello World\");\n }\n}\n" }, { "code": null, "e": 2686, "s": 2666, "text": "\nTry it Yourself »\n" }, { "code": null, "e": 2833, "s": 2686, "text": "Don't worry if you don't understand the code above - we will discuss it in detail in later chapters. \nFor now, focus on how to run the code above." }, { "code": null, "e": 2987, "s": 2833, "text": "Save the code in Notepad as \"Main.java\". Open Command Prompt (cmd.exe), navigate to the directory where you saved your file, and type \"javac \nMain.java\":" }, { "code": null, "e": 3143, "s": 2987, "text": "This will compile your code. If there are no errors in the code, the command prompt will take you to the next line. \nNow, type \"java Main\" to run the file:" }, { "code": null, "e": 3167, "s": 3143, "text": "The output should read:" }, { "code": null, "e": 3239, "s": 3167, "text": "Congratulations! You have written and executed your first Java program." }, { "code": null, "e": 3272, "s": 3239, "text": "We just launchedW3Schools videos" }, { "code": null, "e": 3314, "s": 3272, "text": "Get certifiedby completinga course today!" }, { "code": null, "e": 3421, "s": 3314, "text": "If you want to report an error, or if you want to make a suggestion, do not hesitate to send us an e-mail:" }, { "code": null, "e": 3440, "s": 3421, "text": "help@w3schools.com" } ]
Ages - Solved Examples
Q 1 - The ratio of the ages of two persons, Ram and Sham, is 5:4. After three years, the ratio of their ages becomes 11:9. Find the present age of Mr. Sham.

A - 23 years
B - 24 years
C - 25 years
D - 26 years

Answer - B

Explanation
Let the age of Mr. Ram be 5x and the age of Mr. Sham be 4x.
Then, (5x+3)/(4x+3) = 11/9 ⇒ 9(5x+3) = 11(4x+3) ⇒ 45x+27 = 44x+33 ⇒ x = 6
So the present age of Mr. Sham = 6*4 = 24 years.

Q 2 - A mother is 30 times as old as her daughter. After 18 years, the mother will be three times as old as her daughter. Find the present age of the mother.

A - 40 years
B - 41 years
C - 42 years
D - 43 years

Answer - A

Explanation
Let the daughter's present age be x years; then the mother's present age is 30x years.
30x+18 = 3(x+18) ⇒ 27x = 36 ⇒ x = 4/3
∴ The present age of the mother = 30 * 4/3 = 40 years.

Q 3 - The present ages of three persons Ajay, Vijay and Sanjay are in the proportion 4:7:9. Eight years ago, the sum of their ages was 56. What are their present ages?

A - 28 and 36 years
B - 28 and 38 years
C - 30 and 36 years
D - 36 and 28 years

Answer - A

Explanation
Let the present ages of Ajay, Vijay and Sanjay be 4x, 7x and 9x years.
The sum of their ages 8 years ago = (4x-8)+(7x-8)+(9x-8) = (20x-24) years.
∴ 20x-24 = 56 ⇒ 20x = 80 ⇒ x = 4
Hence the age of Ajay is 4*4 = 16 years, Vijay (7*4) = 28 years and Sanjay (9*4) = 36 years.

Q 4 - A daughter's present age is 2/5 of her mother's age. 8 years later, the daughter's age will be 1/2 of her mother's age. Find the mother's present age.

A - 39 years
B - 40 years
C - 41 years
D - 42 years

Answer - B

Explanation
Let the present age of the mother be x years; then the daughter's present age is 2x/5 years.
2x/5 + 8 = 1/2 (x+8) ⇒ 4x+80 = 5x+40 ⇒ x = 40
The mother's present age is 40 years.
Q 5 - Three years ago, Ajay's age was double that of Bhuvan. Seven years hence, the sum of their ages will be 83 years. What is Ajay's present age?

A - 43 years
B - 44 years
C - 45 years
D - 46 years

Answer - C

Explanation
Let Bhuvan's age 3 years ago be x years; then Ajay's age 3 years ago was 2x years.
Now Bhuvan's age = (x+3) years and Ajay's age = (2x+3) years.
(x+3)+7+(2x+3)+7 = 83 ⇒ 3x+20 = 83 ⇒ 3x = 63 ⇒ x = 21
Bhuvan's present age = (21+3) = 24 years
Ajay's present age = (2*21+3) = 45 years

Q 6 - I am 4 years older than my sister, but my brother, who is the youngest among us, is 7 years younger than me. My father's age is three times my brother's age. My sister's present age is 18 years, and my father is 3 years older than my mother. What is the present age of my mother?

A - 42 years
B - 43 years
C - 44 years
D - 45 years

Answer - A

Explanation
Let my sister's age be x years. Then,
Sister - x
I - x+4
Brother - (x+4-7) = x-3
Father - 3(x-3)
Given x = 18
∴ Father's age = 3(18-3) = 45 years.
Mother's age = (45-3) = 42 years.

Q 7 - Ajay is as much younger than Vijay as he is older than Vinay. If the sum of the ages of Vijay and Vinay is 48 years, find the present age of Mr. Ajay.

A - 21 years
B - 22 years
C - 23 years
D - 24 years

Answer - D

Explanation
Let V, A and B be the ages of Vijay, Ajay and Vinay.
V - A = A - B ⇒ V + B = 2A = 48 ⇒ A = 24
So the present age of Mr. Ajay is 24 years.

Q 8 - The sum of the ages of a father and his son is 100 years. Five years ago, the ratio of their ages was 2:1. Find the ratio of their ages after 10 years.

A - 3:4
B - 3:5
C - 4:3
D - 5:3

Answer - D

Explanation
Let the father's present age be x years.
Then the son's present age = (100-x) years.
(x-5) / (100-x-5) = 2/1 ⇒ (x-5) = 2(95-x) ⇒ 3x = 195 ⇒ x = 65
Ratio of the ages of father and son after 10 years = (65+10)/(35+10) = 75/45 = 5/3 = 5:3
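The solutions above are simple linear equations, so they are easy to sanity-check mechanically. The following Python sketch (an illustration added here, not part of the original tutorial) re-checks the arithmetic of Q1 and Q8 with exact fractions:

```python
from fractions import Fraction

# Q1: present ages 5x and 4x; after 3 years the ratio becomes 11:9.
# 9(5x + 3) = 11(4x + 3)  =>  45x + 27 = 44x + 33  =>  x = 6
x = Fraction(33 - 27, 45 - 44)
assert Fraction(5 * x + 3, 4 * x + 3) == Fraction(11, 9)
sham_age = 4 * x
print(sham_age)  # 24

# Q8: father + son = 100; five years ago the father was twice the son's age.
# (x - 5) = 2(100 - x - 5)  =>  3x = 195  =>  x = 65
father = Fraction(195, 3)
son = 100 - father
assert father - 5 == 2 * (son - 5)
ratio = Fraction(father + 10, son + 10)
print(ratio)  # 5/3, i.e. the ratio 5:3
```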
[ { "code": null, "e": 4077, "s": 3892, "text": "Q 1 - If the ratio of ages of two persons Ram and sham is 5:4 . After Three years their age ratio changes and becomes 11:9. In that case tell about the present age of mr. sham.\n" }, { "code": null, "e": 4090, "s": 4077, "text": "A - 23 years" }, { "code": null, "e": 4103, "s": 4090, "text": "B - 24 years" }, { "code": null, "e": 4116, "s": 4103, "text": "C - 25 years" }, { "code": null, "e": 4129, "s": 4116, "text": "D - 26 years" }, { "code": null, "e": 4140, "s": 4129, "text": "Answer - B" }, { "code": null, "e": 4152, "s": 4140, "text": "Explanation" }, { "code": null, "e": 4329, "s": 4152, "text": "If the age of Mr. Ram 5x and 4x is the age of Mr. sham.\nThen, ( 5x+3) /( 4x+3) = 11/9 ⇒ 9 (5x+3)= 11 (4x+3) ⇒ x= (33-27) = 6\nSo the present age of Mr. sham = 6*4 = 24 years.\n" }, { "code": null, "e": 4549, "s": 4329, "text": "Q 2 - A mother is 30 time older in the comparison of her daughter. After the period of 18 year , the mother age would be thrice in the comparison of his daughter . In that case tell about the present age of mother." }, { "code": null, "e": 4562, "s": 4549, "text": "A - 40 years" }, { "code": null, "e": 4575, "s": 4562, "text": "B - 41 years" }, { "code": null, "e": 4588, "s": 4575, "text": "C - 42 years" }, { "code": null, "e": 4601, "s": 4588, "text": "D - 43 years" }, { "code": null, "e": 4612, "s": 4601, "text": "Answer - A" }, { "code": null, "e": 4624, "s": 4612, "text": "Explanation" }, { "code": null, "e": 4821, "s": 4624, "text": "Let daughter present age be x year .\nin that case mother present age would be = 30x years \n30x+ 18 = 3 (x+ 18) ⇒ 27x = 36 ⇒ x = 4/3 \n∴ so the present age of mother = (30* 4/3) = 40 years.\n" }, { "code": null, "e": 5012, "s": 4821, "text": "Q 3 - The ratio of present ages of three persons ajay , vijay and sanjay are in the proportion of 4: 7: 9. Before 8 year total sum of their age is 56. What should be the present ages?" 
}, { "code": null, "e": 5032, "s": 5012, "text": "A - 28 and 36 years" }, { "code": null, "e": 5052, "s": 5032, "text": "B - 28 and 38 years" }, { "code": null, "e": 5072, "s": 5052, "text": "C - 30 and 36 years" }, { "code": null, "e": 5092, "s": 5072, "text": "D - 36 and 28 years" }, { "code": null, "e": 5103, "s": 5092, "text": "Answer - A" }, { "code": null, "e": 5115, "s": 5103, "text": "Explanation" }, { "code": null, "e": 5438, "s": 5115, "text": "If the present age of ajay , vijay and sanjay is 4x , 7x and 9x years.\nTotal sum of ages of ajay, vijay and sanjay before 8 years ago = (4x-8)+(7x-8)+(9x-8)\n=(20x-24) years.\n∴ 20x-24 = 56 ⇒ 20x = 80 ⇒ x= 4 \nHence, it proves that age of ajay is 4*4 = 16 years, \nvijay (7*4) = 28 years and sanjay (9* 4) = 36 years. \n" }, { "code": null, "e": 5620, "s": 5438, "text": "Q 4 - Daughter's present age is 2/5 in the comparison of her mother .8 year later , age of her daughter will be 1/2 in the comparison of her mother. Find out mother present age?" }, { "code": null, "e": 5634, "s": 5620, "text": "A - 39 years." }, { "code": null, "e": 5647, "s": 5634, "text": "B - 40 years" }, { "code": null, "e": 5660, "s": 5647, "text": "C - 41 years" }, { "code": null, "e": 5673, "s": 5660, "text": "D - 42 years" }, { "code": null, "e": 5684, "s": 5673, "text": "Answer - B" }, { "code": null, "e": 5696, "s": 5684, "text": "Explanation" }, { "code": null, "e": 5915, "s": 5696, "text": "If the present age of mother is equal to x year. \nIn that situation the daughter present age would be = 2x/5 years.\n2x/5 + 8 = 1/2 (x+8) ⇒ 4x+ 80 = 5x+40 ⇒ x = 40.\nThe mother age at the present time is = 40 years.\n" }, { "code": null, "e": 6100, "s": 5915, "text": "Q 5 - Ajay age was double in the comparison of bhuvan before 3 years. Seven years hence, the sum of both ages would be 83 years. What should be the age at the present time of both?" 
}, { "code": null, "e": 6113, "s": 6100, "text": "A - 43 years" }, { "code": null, "e": 6126, "s": 6113, "text": "B - 44 years" }, { "code": null, "e": 6139, "s": 6126, "text": "C - 45 years" }, { "code": null, "e": 6152, "s": 6139, "text": "D - 46 years" }, { "code": null, "e": 6163, "s": 6152, "text": "Answer - C" }, { "code": null, "e": 6175, "s": 6163, "text": "Explanation" }, { "code": null, "e": 6487, "s": 6175, "text": "Before 3 year let bhuvan age be x years. \n3 years before , ajay age will be 2x years. \nNow Bhuwan's age =(x+3) years and ajay age = (2x+3) years.\n(x+3)+7+(2x+3)+7 = 83 ⇒ 3x+20 = 83 ⇒ 3x = 63 ⇒ x = 21 \nNow the bhuwan present age = (21+3) = 24 years \nNow the ajay present age = (2 *21+3) years = 45 years.\n" }, { "code": null, "e": 6842, "s": 6487, "text": "Q 6 - I am 4 year older in the comparison of my sister, but my brother who is the youngest among us is 7 year younger to myself. My father is three times in the comparison of my brother. The present age of my sister 18 year and my father is 3 year older in the comparison of my mother. In that situation what should be the present age of my mother?." }, { "code": null, "e": 6855, "s": 6842, "text": "A - 42 years" }, { "code": null, "e": 6868, "s": 6855, "text": "B - 43 years" }, { "code": null, "e": 6881, "s": 6868, "text": "C - 44 years" }, { "code": null, "e": 6894, "s": 6881, "text": "D - 45 years" }, { "code": null, "e": 6905, "s": 6894, "text": "Answer - A" }, { "code": null, "e": 6917, "s": 6905, "text": "Explanation" }, { "code": null, "e": 7112, "s": 6917, "text": "If my sister age is x years. Then,\nSister - x \nI - x+4\nBrother - (x+4-7) = x-3\nFather - 3 (x-3) \nGiven x = 18\n∴ Father's age = 3(18-3) = 45 years.\nMother age = (45-3) = 42 years.\n" }, { "code": null, "e": 7279, "s": 7112, "text": "Q 7 - Ajay is as much younger to vijay as he is older to vinay. If 48 years is the sum of the ages of vijay and buwan . Then find out the present age of Mr. ajay ?" 
}, { "code": null, "e": 7292, "s": 7279, "text": "A - 21 years" }, { "code": null, "e": 7305, "s": 7292, "text": "B - 22 years" }, { "code": null, "e": 7318, "s": 7305, "text": "C - 23 years" }, { "code": null, "e": 7331, "s": 7318, "text": "D - 24 years" }, { "code": null, "e": 7342, "s": 7331, "text": "Answer - D" }, { "code": null, "e": 7354, "s": 7342, "text": "Explanation" }, { "code": null, "e": 7451, "s": 7354, "text": "V-A = A- B ⇒ V+B = 2A =48 ⇒ 24\nNow, We can say that the present age of Mr. Ajay is 24 years.\n" }, { "code": null, "e": 7659, "s": 7451, "text": "Q 8 - If 100 year is equal to the sum of the ages of father and son. 2:1 was the ratio of father and son before the period of 5 years. Find out the ratio of ages which would be after the period 10 year." }, { "code": null, "e": 7667, "s": 7659, "text": "A - 3:4" }, { "code": null, "e": 7675, "s": 7667, "text": "B - 3:5" }, { "code": null, "e": 7683, "s": 7675, "text": "C - 4:3" }, { "code": null, "e": 7691, "s": 7683, "text": "D - 5:3" }, { "code": null, "e": 7702, "s": 7691, "text": "Answer - D" }, { "code": null, "e": 7714, "s": 7702, "text": "Explanation" }, { "code": null, "e": 7965, "s": 7714, "text": "If the age of father at the present time = x years \nHis son age at the present time = (100-x) years.\nx-5 / (100-x-5) = 2/1 ⇒ (x-5) = 2(95-x) ⇒ 3x = 195 ⇒ x = 65\nRatio of the ages of man and son after 10 years = (65+10)/(35+10)= 75/45 = 5/3 = 5:3 \n" }, { "code": null, "e": 8001, "s": 7965, "text": "\n 87 Lectures \n 22.5 hours \n" }, { "code": null, "e": 8019, "s": 8001, "text": " Programming Line" }, { "code": null, "e": 8026, "s": 8019, "text": " Print" }, { "code": null, "e": 8037, "s": 8026, "text": " Add Notes" } ]
MongoDB - $push Operator - GeeksforGeeks
10 May, 2020

MongoDB provides different types of array update operators to update the values of the array fields in documents, and the $push operator is one of them. This operator is used to append a specified value to an array.

Syntax:

{ $push: { <field1>: <value1>, ... } }

Here, <field> can be specified with dot notation in embedded/nested documents or an array.

If the specified field in the $push operator is not present in the document, then this operator will add the array field with the value as its items.
The $push operator inserts items at the end of the array.
If the specified field in the $push operator is not an array, then this operation will fail.
If the value of the $push operator is an array, then this operator will append the whole array as a single element. And if you want to add each item of the value separately, then you can use the $each modifier with the $push operator.
You can use this operator with methods like update(), findAndModify(), etc., according to your requirement.

We can also use the following modifiers with the $push operator. Syntax:

{ $push: { <field1>: { <modifier1>: <value1>, ... }, ... } }

The processing of the push operation with modifiers works in the following order:

First, update the array to add items in the correct position.
Second, apply sort if specified.
Third, slice the array if specified.
Fourth, store the array.

Note: Here the order in which the modifiers appear in the $push operator does not matter.

In the following examples, we are working with:

Database: GeeksforGeeks
Collection: contributor
Document: two documents that contain the details of the contributors in the form of field-value pairs.

In this example, we are appending a single value, i.e., “C++” to an array field, i.e., the language field, in the document that satisfies the condition (name: “Rohit”).
db.contributor.update({name: "Rohit"}, {$push: {language: "C++"}})

In this example, we are appending multiple values, i.e., [“C”, “Ruby”, “Go”] to an array field, i.e., the language field, in the document that satisfies the condition (name: “Sumit”).

db.contributor.update({name: "Sumit"}, {$push: {language: {$each: ["C", "Ruby", "Go"]}}})

In this example, we are appending multiple values, i.e., [89, 76.4] to an array field, i.e., the personal.semesterMarks field of a nested/embedded document.

db.contributor.update({name: "Sumit"}, {$push: {"personal.semesterMarks": {$each: [89, 76.4]}}})

In this example, we are using multiple modifiers like $each, $sort, and $slice with the $push operator.

db.contributor.update({name: "Rohit"}, {$push: { language: { $each: ["C", "Go"], $sort: 1, $slice: 4}}})

Here,

The $each modifier is used to add multiple items to the language array.
The $sort modifier is used to sort all the items of the modified language array in ascending order.
The $slice modifier is used to keep only the first four sorted items of the language array.
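The modifier processing order described earlier (append with $each, then sort, then slice, then store) can be modeled in plain Python. This is a hypothetical simulation of that documented order, not a MongoDB client call, and the starting array below is made up for the example:

```python
def push_with_modifiers(array, each, sort=None, slice_n=None):
    """Model MongoDB's $push with $each/$sort/$slice processing order."""
    result = list(array) + list(each)      # step 1: append items at the end
    if sort is not None:
        result.sort(reverse=(sort == -1))  # step 2: 1 = ascending, -1 = descending
    if slice_n is not None:
        result = result[:slice_n]          # step 3: keep the first N items
    return result                          # step 4: the array that gets stored

# Mirrors the last example: push ["C", "Go"], sort ascending, keep 4 items.
languages = ["C++", "Java", "Python"]  # hypothetical current value of the field
print(push_with_modifiers(languages, ["C", "Go"], sort=1, slice_n=4))
# ['C', 'C++', 'Go', 'Java']
```

Because sorting happens before slicing, the kept items are the four smallest, regardless of the order in which the modifiers were written — matching the note above that modifier order does not matter.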
[ { "code": null, "e": 23879, "s": 23851, "text": "\n10 May, 2020" }, { "code": null, "e": 24094, "s": 23879, "text": "MongoDB provides different types of array update operators to update the values of the array fields in the documents and $push operator is one of them. This operator is used to append a specified value to an array." }, { "code": null, "e": 24102, "s": 24094, "text": "Syntax:" }, { "code": null, "e": 24141, "s": 24102, "text": "{ $push: { <field1>: <value1>, ... } }" }, { "code": null, "e": 24227, "s": 24141, "text": "Here, <field> can specify with dot notation in embedded/nested documents or an array." }, { "code": null, "e": 24377, "s": 24227, "text": "If the specified field in the $push operator is not present in the document, then this operator will add the array field with the value as its items." }, { "code": null, "e": 24434, "s": 24377, "text": "The $push operator insert items at the end of the array." }, { "code": null, "e": 24528, "s": 24434, "text": "If the specified field in the $push operator is not an array, then this operation will fails." }, { "code": null, "e": 24755, "s": 24528, "text": "If the value of the $push operator is an array, then this operator will append the whole array as a single element. And if you want to add each item of the value separately, then you can use $each modifier with $push operator." }, { "code": null, "e": 24863, "s": 24755, "text": "You can use this operator with methods like update(), findAndModify(), etc., according to your requirement." }, { "code": null, "e": 24936, "s": 24863, "text": "We can also use the following modifiers with the $push operator :Syntax:" }, { "code": null, "e": 24997, "s": 24936, "text": "{ $push: { <field1>: { <modifier1>: <value1>, ... }, ... 
} }" }, { "code": null, "e": 25079, "s": 24997, "text": "The processing of the push operation with modifiers works in the following order:" }, { "code": null, "e": 25140, "s": 25079, "text": "First update the array to add items in the correct position." }, { "code": null, "e": 25173, "s": 25140, "text": "Second, apply sort if specified." }, { "code": null, "e": 25209, "s": 25173, "text": "Third slice the array if specified." }, { "code": null, "e": 25233, "s": 25209, "text": "Fourth store the array." }, { "code": null, "e": 25323, "s": 25233, "text": "Note: Here the order in which the modifiers appear in the $push operator does not matter." }, { "code": null, "e": 25371, "s": 25323, "text": "In the following examples, we are working with:" }, { "code": null, "e": 25519, "s": 25371, "text": "Database: GeeksforGeeksCollection: contributorDocument: two documents that contain the details of the contributor in the form of field-value pairs." }, { "code": null, "e": 25680, "s": 25519, "text": "In this example, we are appending a single value, i.e., “C++” to an array field, i.e., language field in the document that satisfy the condition(name: “Rohit”)." }, { "code": "db.contributor.update({name: \"Rohit\"}, {$push: {language: \"C++\"}})", "e": 25747, "s": 25680, "text": null }, { "code": null, "e": 25923, "s": 25747, "text": "In this example, we are appending multiple values, i.e., [“C”, “Ruby”, “Go”] to an array field, i.e., language field in the document that satisfy the condition(name: “Sumit”)." }, { "code": "db.contributor.update({name: \"Sumit\"}, {$push: {language: {$each: [\"C\", \"Ruby\", \"Go\"]}}})", "e": 26013, "s": 25923, "text": null }, { "code": null, "e": 26166, "s": 26013, "text": "In this example, we are appending multiple values, i.e., [89, 76.4] to an array field, i.e., personal.semesterMarks field of a nested/embedded document." 
}, { "code": "db.contributor.update({name: \"Sumit\"}, {$push: {\"personal.semesterMarks\": {$each: [89, 76.4]}}})", "e": 26285, "s": 26166, "text": null }, { "code": null, "e": 26385, "s": 26285, "text": "In this example, we are using multiple modifiers like $each, $sort, and $slice with $push operator." }, { "code": "db.contributor.update({name: \"Rohit\"}, {$push: { language: { $each: [\"C\", \"Go\"], $sort: 1, $slice: 4}}})", "e": 26542, "s": 26385, "text": null }, { "code": null, "e": 26548, "s": 26542, "text": "Here," }, { "code": null, "e": 26624, "s": 26548, "text": "The $each modifier is used to add multiple documents to the language array." }, { "code": null, "e": 26718, "s": 26624, "text": "The $sort modifier is used to sort all the items of the modified language array in ascending." }, { "code": null, "e": 26810, "s": 26718, "text": "The $slice modifier is used to keep only the first four sorted items of the language array." }, { "code": null, "e": 26818, "s": 26810, "text": "MongoDB" }, { "code": null, "e": 26836, "s": 26818, "text": "MongoDB-operators" }, { "code": null, "e": 26844, "s": 26836, "text": "MongoDB" }, { "code": null, "e": 26942, "s": 26844, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 26951, "s": 26942, "text": "Comments" }, { "code": null, "e": 26964, "s": 26951, "text": "Old Comments" }, { "code": null, "e": 26992, "s": 26964, "text": "MongoDB - Distinct() Method" }, { "code": null, "e": 27030, "s": 26992, "text": "How to connect MongoDB with ReactJS ?" 
}, { "code": null, "e": 27055, "s": 27030, "text": "MongoDB - limit() Method" }, { "code": null, "e": 27082, "s": 27055, "text": "MongoDB - FindOne() Method" }, { "code": null, "e": 27139, "s": 27082, "text": "MongoDB insertMany() Method - db.Collection.insertMany()" }, { "code": null, "e": 27194, "s": 27139, "text": "MongoDB updateOne() Method - db.Collection.updateOne()" }, { "code": null, "e": 27220, "s": 27194, "text": "MongoDB - Update() Method" }, { "code": null, "e": 27256, "s": 27220, "text": "Create user and add role in MongoDB" } ]
Write a C program for electing a candidate in Elections by calling functions using Switch case
How to cast vote, count, and display the votes for each candidate that participates in elections using C language?

Let’s consider three persons who participated in elections. Here we need to write a code for the following −

Cast vote − Selecting a candidate by pressing the cast vote.

Find vote count − Finding the total number of votes each candidate gains and declaring the winner.

All these operations are performed by calling each function using Switch case −

#include <stdio.h>
#define CANDIDATE_COUNT 3
#define CANDIDATE1 "ABC"
#define CANDIDATE2 "XYZ"
#define CANDIDATE3 "PQR"
int votescount1 = 0, votescount2 = 0, votescount3 = 0;
void castvote(){
   int choice;
   printf("\n\n ### Please choose your Candidate ####\n\n");
   printf("\n 1. %s", CANDIDATE1);
   printf("\n 2. %s", CANDIDATE2);
   printf("\n 3. %s", CANDIDATE3);
   printf("\n 4. %s", "None of These");
   printf("\nInput your choice (1 - 4) : ");
   scanf("%d", &choice);
   switch(choice){
      case 1: votescount1++; break;
      case 2: votescount2++; break;
      case 3: votescount3++; break;
      default:
         printf("\n Error: Wrong Choice !! Please retry");
         //hold the screen
         getchar();
   }
   printf("\n thanks for vote !!");
}
void votesCount(){
   printf("\n\n ##### Voting Statistics ####");
   printf("\n %s - %d ", CANDIDATE1, votescount1);
   printf("\n %s - %d ", CANDIDATE2, votescount2);
   printf("\n %s - %d ", CANDIDATE3, votescount3);
}
int main(){
   int choice;
   do{
      printf("\n\n ###### Welcome to Election/Voting 2019 #####");
      printf("\n\n 1. Cast the Vote");
      printf("\n 2. Find Vote Count");
      printf("\n 0. Exit");
      printf("\n Please enter your choice : ");
      scanf("%d", &choice);
      switch(choice){
         case 1: castvote(); break;
         case 2: votesCount(); break;
         case 0: break;
         default: printf("\n Error: Invalid Choice");
      }
   } while(choice != 0);
   //hold the screen
   getchar();
   return 0;
}

###### Welcome to Election/Voting 2019 #####
1. Cast the Vote
2. Find Vote Count
0. Exit
Please enter your choice : 1
### Please choose your Candidate ####
1. ABC
2. XYZ
3. PQR
4. None of These
Input your choice (1 - 4) : 1
thanks for vote !!
###### Welcome to Election/Voting 2019 #####
1. Cast the Vote
2. Find Vote Count
0. Exit
Please enter your choice : 1
### Please choose your Candidate ####
1. ABC
2. XYZ
3. PQR
4. None of These
Input your choice (1 - 4) : 1
thanks for vote !!
###### Welcome to Election/Voting 2019 #####
1. Cast the Vote
2. Find Vote Count
0. Exit
Please enter your choice : 2
##### Voting Statistics ####
ABC - 2
XYZ - 0
PQR - 0
###### Welcome to Election/Voting 2019 #####
1. Cast the Vote
2. Find Vote Count
0. Exit
Please enter your choice :
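The same tallying idea can be sketched in a few lines of Python. This is not the C program above, just a minimal illustration of counting 1-based menu choices into per-candidate totals; the names are made up for the example.

```python
# A minimal plain-Python sketch of the vote-tallying logic:
# count 1-based ballot choices into per-candidate totals.
candidates = ["ABC", "XYZ", "PQR"]
votes = {name: 0 for name in candidates}

def cast_vote(choice):              # choice is 1-based, as in the C menu
    if 1 <= choice <= len(candidates):
        votes[candidates[choice - 1]] += 1
        return True
    return False                    # "None of These" / invalid choice

for ballot in [1, 1, 2]:            # two votes for ABC, one for XYZ
    cast_vote(ballot)

print(votes)  # → {'ABC': 2, 'XYZ': 1, 'PQR': 0}
```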
[ { "code": null, "e": 1177, "s": 1062, "text": "How to cast vote, count, and display the votes for each candidate that participates in elections using C language?" }, { "code": null, "e": 1286, "s": 1177, "text": "Let’s consider three persons who participated in elections. Here we need to write a code for the following −" }, { "code": null, "e": 1346, "s": 1286, "text": "Cast vote − Selecting a candidate by pressing the cast vote" }, { "code": null, "e": 1406, "s": 1346, "text": "Cast vote − Selecting a candidate by pressing the cast vote" }, { "code": null, "e": 1501, "s": 1406, "text": "Find vote count − Finding the total number of votes each candidate gains declaring the winner." }, { "code": null, "e": 1596, "s": 1501, "text": "Find vote count − Finding the total number of votes each candidate gains declaring the winner." }, { "code": null, "e": 1676, "s": 1596, "text": "All these operations are performed by calling each function using Switch case −" }, { "code": null, "e": 3164, "s": 1676, "text": "#include<stdio.h>\n#define CANDIDATE_COUNT\n#define CANDIDATE1 \"ABC\"\n#define CANDIDATE2 \"XYZ\"\n#define CANDIDATE3 \"PQR\"\nint votescount1=0, votescount2=0, votescount3=0;\nvoid castvote(){\n int choice;\n printf(\"\\n\\n ### Please choose your Candidate ####\\n\\n\");\n printf(\"\\n 1. %s\", CANDIDATE1);\n printf(\"\\n 2. %s\", CANDIDATE2);\n printf(\"\\n 3. %s\", CANDIDATE3);\n printf(\"\\n4. %s\", “None of These\");\n printf(\"\\nInput your choice (1 - 4) : “);\n scanf(\"%d\",&choice);\n switch(choice){\n case 1: votescount1++; break;\n case 2: votescount2++; break;\n case 3: votescount3++; break;\n default: printf(\"\\n Error: Wrong Choice !! 
Please retry\");\n //hold the screen\n getchar();\n }\n printf(“\\n thanks for vote !!\");\n}\nvoid votesCount(){\n printf(\"\\n\\n ##### Voting Statics ####\");\n printf(\"\\n %s - %d \", CANDIDATE1, votescount1);\n printf(\"\\n %s - %d \", CANDIDATE2, votescount2);\n printf(\"\\n %s - %d \", CANDIDATE3, votescount3);\n}\nint main(){\n int i;\n int choice;\n do{\n printf(\"\\n\\n ###### Welcome to Election/Voting 2019 #####\");\n printf(\"\\n\\n 1. Cast the Vote\");\n printf(\"\\n 2. Find Vote Count\");\n printf(\"\\n 0. Exit\");\n printf(\"\\n Please enter your choice : \");\n scanf(\"%d\", &choice);\n switch(choice){\n case 1: castvote();break;\n case 2: votesCount();break;\n default: printf(\"\\n Error: Invalid Choice\");\n }\n }while(choice!=0);\n //hold the screen\n getchar();\n return 0;\n}" }, { "code": null, "e": 3934, "s": 3164, "text": "###### Welcome to Election/Voting 2019 #####\n1. Cast the Vote\n2. Find Vote Count\n0. Exit\nPlease enter your choice : 1\n### Please choose your Candidate ####\n1. ABC\n2. XYZ\n3. PQR\n4. None of These\nInput your choice (1 - 4) : 1\nthanks for vote !!\n###### Welcome to Election/Voting 2019 #####\n1. Cast the Vote\n2. Find Vote Count\n0. Exit\nPlease enter your choice : 1\n### Please choose your Candidate ####\n1. ABC\n2. XYZ\n3. PQR\n4. None of These\nInput your choice (1 - 4) : 1\nthanks for vote !!\n###### Welcome to Election/Voting 2019 #####\n1. Cast the Vote\n2. Find Vote Count\n0. Exit\nPlease enter your choice : 2\n##### Voting Statics ####\nABC - 2\nXYZ - 0\nPQR - 0\n###### Welcome to Election/Voting 2019 #####\n1. Cast the Vote\n2. Find Vote Count\n0. Exit\nPlease enter your choice :" } ]
How to compare JSON objects regardless of order in Python? - GeeksforGeeks
24 Jan, 2021

JSON is JavaScript Object Notation. It is a language-independent format used for data exchange and is generally lightweight in nature. It acts as an alternative to XML. JSON is plain text that can be read and written easily by humans, and it is also easy for machines to parse JSON and generate results. JSON is used primarily for data transmission between servers and web applications.

In this article, we will be learning about how we can compare JSON objects regardless of the order in which they exist in Python.

Approach:

Import module
Create JSON strings
Convert strings to Python dictionaries
Sort dictionaries
Compare
Print result

Various implementations to do the same are given below.

Example 1: Using sorted()

Python3

import json

# JSON strings
json_1 = '{"Name":"GFG", "Class": "Website", "Domain":"CS/IT", "CEO":"Sandeep Jain"}'
json_2 = '{"CEO":"Sandeep Jain", "Domain":"CS/IT","Name": "GFG","Class": "Website"}'

# Converting strings into Python dictionaries
json_dict1 = json.loads(json_1)
json_dict2 = json.loads(json_2)

print(sorted(json_dict1.items()) == sorted(json_dict2.items()))

Output:

True

Example 2: More complex comparison

Python3

import json

# JSON strings
json_1 = '{"Name":"GFG", "Class": "Website", "Domain":"CS/IT", "CEO":"Sandeep Jain","Subjects":["DSA","Python","C++","Java"]}'
json_2 = '{"CEO":"Sandeep Jain","Subjects":["C++","Python","DSA","Java"], "Domain":"CS/IT","Name": "GFG","Class": "Website"}'

# Convert strings into Python dictionaries
json1_dict = json.loads(json_1)
json2_dict = json.loads(json_2)

print(sorted(json1_dict.items()) == sorted(json2_dict.items()))

print(sorted(json1_dict.items()))
print(sorted(json2_dict.items()))

Output:

False
[('CEO', 'Sandeep Jain'), ('Class', 'Website'), ('Domain', 'CS/IT'), ('Name', 'GFG'), ('Subjects', ['DSA', 'Python', 'C++', 'Java'])]
[('CEO', 'Sandeep Jain'), ('Class', 'Website'), ('Domain', 'CS/IT'), ('Name', 'GFG'), ('Subjects', ['C++', 'Python', 'DSA', 'Java'])]

In this case we get our result as False because the sorted() method only works on the top level of a dictionary, i.e., on the keys and not on their values, as can be verified by the above output. So, in such cases we can define a custom function that recursively sorts any list or dictionary (by converting dictionaries into lists of key-value pairs), making them fit for comparison. An implementation using this alternative is given below.

Example:

Python3

import json

# JSON strings
json_1 = '{"Name":"GFG", "Class": "Website", "Domain":"CS/IT", "CEO":"Sandeep Jain","Subjects":["DSA","Python","C++","Java"]}'
json_2 = '{"CEO":"Sandeep Jain","Subjects":["C++","Python","DSA","Java"], "Domain":"CS/IT","Name": "GFG","Class": "Website"}'

# Convert strings into Python dictionaries
json1_dict = json.loads(json_1)
json2_dict = json.loads(json_2)

def sorting(item):
    if isinstance(item, dict):
        return sorted((key, sorting(values)) for key, values in item.items())
    if isinstance(item, list):
        return sorted(sorting(x) for x in item)
    else:
        return item

print(sorting(json1_dict) == sorting(json2_dict))

Output:

True
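As a quick extra check (not from the article above), the same recursive sorting function also handles dictionaries nested inside lists, a case where comparing top-level sorted(d.items()) results would still report False. The sample data here is invented for the illustration.

```python
import json

# The recursive normalizer from the article: dicts become sorted lists
# of (key, normalized value) pairs, lists are sorted element-wise.
def sorting(item):
    if isinstance(item, dict):
        return sorted((key, sorting(values)) for key, values in item.items())
    if isinstance(item, list):
        return sorted(sorting(x) for x in item)
    return item

# Two JSON documents with the same content: the list elements and the
# keys inside each nested object appear in different orders.
a = json.loads('{"team": [{"name": "A", "id": 2}, {"id": 1, "name": "B"}]}')
b = json.loads('{"team": [{"name": "B", "id": 1}, {"id": 2, "name": "A"}]}')

print(sorting(a) == sorting(b))  # → True
```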
[ { "code": null, "e": 23901, "s": 23873, "text": "\n24 Jan, 2021" }, { "code": null, "e": 24316, "s": 23901, "text": "JSON is Java Script Object Notation. These are language independent source codes used for data exchange and are generally lightweight in nature. It acts as an alternative to XML. These are generally texts which can be read and written easily by humans and it is also easier for machines to parse JSON and generate results. JSON is being used primarily for data transmission between server and web applications. . " }, { "code": null, "e": 24446, "s": 24316, "text": "In this article, we will be learning about how can we compare JSON objects regardless of the order in which they exist in Python." }, { "code": null, "e": 24456, "s": 24446, "text": "Approach:" }, { "code": null, "e": 24470, "s": 24456, "text": "Import module" }, { "code": null, "e": 24490, "s": 24470, "text": "Create JSON strings" }, { "code": null, "e": 24529, "s": 24490, "text": "Convert strings to python dictionaries" }, { "code": null, "e": 24547, "s": 24529, "text": "Sort dictionaries" }, { "code": null, "e": 24555, "s": 24547, "text": "Compare" }, { "code": null, "e": 24568, "s": 24555, "text": "Print result" }, { "code": null, "e": 24622, "s": 24568, "text": "Various implementation to do the same is given below," }, { "code": null, "e": 24648, "s": 24622, "text": "Example 1: Using sorted()" }, { "code": null, "e": 24656, "s": 24648, "text": "Python3" }, { "code": "import json # JSON stringjson_1 = '{\"Name\":\"GFG\", \"Class\": \"Website\", \"Domain\":\"CS/IT\", \"CEO\":\"Sandeep Jain\"}' json_2 = '{\"CEO\":\"Sandeep Jain\", \"Domain\":\"CS/IT\",\"Name\": \"GFG\",\"Class\": \"Website\"}' # Converting string into Python dictionariesjson_dict1 = json.loads(json_1)json_dict2 = json.loads(json_2) print(sorted(json_dict1.items()) == sorted(json_dict2.items()))", "e": 25027, "s": 24656, "text": null }, { "code": null, "e": 25035, "s": 25027, "text": "Output:" }, { "code": null, "e": 
25040, "s": 25035, "text": "True" }, { "code": null, "e": 25075, "s": 25040, "text": "Example 2: More complex comparison" }, { "code": null, "e": 25083, "s": 25075, "text": "Python3" }, { "code": "import json # JSON stringjson_1 = '{\"Name\":\"GFG\", \"Class\": \"Website\", \"Domain\":\"CS/IT\", \"CEO\":\"Sandeep Jain\",\"Subjects\":[\"DSA\",\"Python\",\"C++\",\"Java\"]}' json_2 = '{\"CEO\":\"Sandeep Jain\",\"Subjects\":[\"C++\",\"Python\",\"DSA\",\"Java\"], \"Domain\":\"CS/IT\",\"Name\": \"GFG\",\"Class\": \"Website\"}' # Convert string into Python dictionaryjson1_dict = json.loads(json_1)json2_dict = json.loads(json_2) print(sorted(json1_dict.items()) == sorted(json2_dict.items())) print(sorted(json1_dict.items()))print(sorted(json2_dict.items()))", "e": 25599, "s": 25083, "text": null }, { "code": null, "e": 25607, "s": 25599, "text": "Output:" }, { "code": null, "e": 25613, "s": 25607, "text": "False" }, { "code": null, "e": 25747, "s": 25613, "text": "[(‘CEO’, ‘Sandeep Jain’), (‘Class’, ‘Website’), (‘Domain’, ‘CS/IT’), (‘Name’, ‘GFG’), (‘Subjects’, [‘DSA’, ‘Python’, ‘C++’, ‘Java’])]" }, { "code": null, "e": 25881, "s": 25747, "text": "[(‘CEO’, ‘Sandeep Jain’), (‘Class’, ‘Website’), (‘Domain’, ‘CS/IT’), (‘Name’, ‘GFG’), (‘Subjects’, [‘C++’, ‘Python’, ‘DSA’, ‘Java’])]" }, { "code": null, "e": 26358, "s": 25881, "text": "In this case we get our result as False because the problem with sorted() method is it only works on the top-level of a dictionary i.e., onto the keys and not on their values as can be verified by above code. So, in such cases we can define a custom function ourselves that can recursively sort any list or dictionary (by converting dictionaries into a list of key-value pair) and thus they can be made fit for comparison. Implementation using this alternative is given below." 
}, { "code": null, "e": 26367, "s": 26358, "text": "Example:" }, { "code": null, "e": 26375, "s": 26367, "text": "Python3" }, { "code": "import json # JSON stringjson_1 = '{\"Name\":\"GFG\", \"Class\": \"Website\", \"Domain\":\"CS/IT\", \"CEO\":\"Sandeep Jain\",\"Subjects\":[\"DSA\",\"Python\",\"C++\",\"Java\"]}' json_2 = '{\"CEO\":\"Sandeep Jain\",\"Subjects\":[\"C++\",\"Python\",\"DSA\",\"Java\"], \"Domain\":\"CS/IT\",\"Name\": \"GFG\",\"Class\": \"Website\"}' # Convert string into Python dictionaryjson1_dict = json.loads(json_1)json2_dict = json.loads(json_2) def sorting(item): if isinstance(item, dict): return sorted((key, sorting(values)) for key, values in item.items()) if isinstance(item, list): return sorted(sorting(x) for x in item) else: return item print(sorting(json1_dict) == sorting(json2_dict))", "e": 27045, "s": 26375, "text": null }, { "code": null, "e": 27053, "s": 27045, "text": "Output:" }, { "code": null, "e": 27058, "s": 27053, "text": "True" }, { "code": null, "e": 27065, "s": 27058, "text": "Picked" }, { "code": null, "e": 27077, "s": 27065, "text": "Python-json" }, { "code": null, "e": 27084, "s": 27077, "text": "Python" }, { "code": null, "e": 27182, "s": 27084, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 27191, "s": 27182, "text": "Comments" }, { "code": null, "e": 27204, "s": 27191, "text": "Old Comments" }, { "code": null, "e": 27236, "s": 27204, "text": "How to Install PIP on Windows ?" }, { "code": null, "e": 27292, "s": 27236, "text": "How to drop one or multiple columns in Pandas Dataframe" }, { "code": null, "e": 27334, "s": 27292, "text": "How To Convert Python Dictionary To JSON?" 
}, { "code": null, "e": 27376, "s": 27334, "text": "Check if element exists in list in Python" }, { "code": null, "e": 27412, "s": 27376, "text": "Python | Pandas dataframe.groupby()" }, { "code": null, "e": 27434, "s": 27412, "text": "Defaultdict in Python" }, { "code": null, "e": 27473, "s": 27434, "text": "Python | Get unique values from a list" }, { "code": null, "e": 27500, "s": 27473, "text": "Python Classes and Objects" }, { "code": null, "e": 27531, "s": 27500, "text": "Python | os.path.join() method" } ]
What is Content Negotiation in Asp.Net webAPI C#?
Content negotiation is the process of selecting the best representation for a given response when there are multiple representations available. That means, depending on the Accept header value in the request, the server sends the response. The primary mechanisms for content negotiation in HTTP are these request headers −

Accept − Which media types are acceptable for the response, such as "application/json", "application/xml", or a custom media type such as "application/vnd.example+xml"

Accept-Charset − Which character sets are acceptable, such as UTF-8 or ISO 8859-1.

Accept-Encoding − Which content encodings are acceptable, such as gzip.

Accept-Language − The preferred natural language, such as "en-us".

The server can also look at other portions of the HTTP request. For example, if the request contains an X-Requested-With header, indicating an AJAX request, the server might default to JSON if there is no Accept header.

In content negotiation, the pipeline gets the IContentNegotiator service from the HttpConfiguration object. It also gets the list of media formatters from the HttpConfiguration.Formatters collection.

Next, the pipeline calls IContentNegotiator.Negotiate, passing in −

The type of object to serialize
The collection of media formatters
The HTTP request

The Negotiate method returns two pieces of information −

Which formatter to use
The media type for the response

If no formatter is found, the Negotiate method returns null, and the client receives HTTP error 406 (Not Acceptable).

Let us consider a StudentController like the one below.
using DemoWebApplication.Models;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web.Http;

namespace DemoWebApplication.Controllers{
   public class StudentController : ApiController{
      List<Student> students = new List<Student>{
         new Student{
            Id = 1,
            Name = "Mark"
         },
         new Student{
            Id = 2,
            Name = "John"
         }
      };
   }
}

One of the standards of a RESTful service is that the client should have the ability to decide the format in which they want the response − XML, JSON, etc. A request that is sent to the server includes an Accept header. Using the Accept header the client can specify the format for the response. For example,

Accept: application/xml returns XML
Accept: application/json returns JSON

The below output shows the response is XML when we pass the Accept header as application/xml.

The below output shows the response is JSON when we pass the Accept header as application/json.

When the response is being sent to the client in the requested format, notice that the Content-Type header of the response is set to the appropriate value. For example, if the client has requested application/xml, the server sends the data in XML format and also sets Content-Type=application/xml.

We can also specify a quality factor. In the example below, xml has a higher quality factor than json, so the server uses the XML formatter and formats the data in XML.

application/xml;q=0.8,application/json;q=0.5
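How quality factors steer formatter selection can be sketched in plain Python. This is not the ASP.NET Web API pipeline − the parser below is deliberately naive (no wildcards like */* and no whitespace edge cases), and the function name is made up for the illustration.

```python
# Simplified sketch of q-value-based media type selection for an
# Accept header. Each entry defaults to q=1.0; the supported media
# type with the highest q wins; no match means HTTP 406.
def pick_media_type(accept_header, supported):
    candidates = []
    for part in accept_header.split(","):
        pieces = part.strip().split(";")
        media_type = pieces[0].strip()
        q = 1.0                            # default quality factor
        for param in pieces[1:]:
            name, _, value = param.strip().partition("=")
            if name == "q":
                q = float(value)
        if media_type in supported:
            candidates.append((q, media_type))
    if not candidates:
        return None                        # server would answer 406 Not Acceptable
    return max(candidates)[1]              # highest q wins

header = "application/xml;q=0.8,application/json;q=0.5"
print(pick_media_type(header, {"application/json", "application/xml"}))
# → application/xml
```

Because xml carries q=0.8 against json's q=0.5, the XML media type is selected, matching the behavior described above.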
[ { "code": null, "e": 1379, "s": 1062, "text": "Content negotiation is the process of selecting the best representation for a given\nresponse when there are multiple representations available. Means, depending on the\nAccept header value in the request, the server sends the response. The primary\nmechanism for content negotiation in HTTP are these request headers −" }, { "code": null, "e": 1547, "s": 1379, "text": "Accept − Which media types are acceptable for the response, such as \"application/json,\" \"application/xml,\" or a custom media type such as \"application/vnd.example+xml\"" }, { "code": null, "e": 1630, "s": 1547, "text": "Accept-Charset − Which character sets are acceptable, such as UTF-8 or ISO 8859-1." }, { "code": null, "e": 1702, "s": 1630, "text": "Accept-Encoding − Which content encodings are acceptable, such as gzip." }, { "code": null, "e": 1769, "s": 1702, "text": "Accept-Language − The preferred natural language, such as \"en-us\"." }, { "code": null, "e": 1989, "s": 1769, "text": "The server can also look at other portions of the HTTP request. For example, if the\nrequest contains an X-Requested-With header, indicating an AJAX request, the server\nmight default to JSON if there is no Accept header." }, { "code": null, "e": 2189, "s": 1989, "text": "In content negotiation, the pipeline gets the IContentNegotiator service from the\nHttpConfiguration object. It also gets the list of media formatters from the\nHttpConfiguration.Formatters collection." 
}, { "code": null, "e": 2257, "s": 2189, "text": "Next, the pipeline calls IContentNegotiator.Negotiate, passing in −" }, { "code": null, "e": 2289, "s": 2257, "text": "The type of object to serialize" }, { "code": null, "e": 2324, "s": 2289, "text": "The collection of media formatters" }, { "code": null, "e": 2341, "s": 2324, "text": "The HTTP request" }, { "code": null, "e": 2398, "s": 2341, "text": "The Negotiate method returns two pieces of information −" }, { "code": null, "e": 2421, "s": 2398, "text": "Which formatter to use" }, { "code": null, "e": 2453, "s": 2421, "text": "The media type for the response" }, { "code": null, "e": 2571, "s": 2453, "text": "If no formatter is found, the Negotiate method returns null, and the client receives\nHTTP error 406 (Not Acceptable)." }, { "code": null, "e": 2617, "s": 2571, "text": "Let us consider StudentController like below." }, { "code": null, "e": 3058, "s": 2617, "text": "using DemoWebApplication.Models;\nusing System;\nusing System.Collections.Generic;\nusing System.Linq;\nusing System.Web.Http;\nnamespace DemoWebApplication.Controllers{\n public class StudentController : ApiController{\n List<Student> students = new List<Student>{\n new Student{\n Id = 1,\n Name = \"Mark\"\n },\n new Student{\n Id = 2,\n Name = \"John\"\n }\n };\n }\n}" }, { "code": null, "e": 3364, "s": 3058, "text": "One of the standards of the RESTful service is that, the client should have the ability\nto decide in which format they want the response - XML, JSON etc. A request that is\nsent to the server includes an Accept header. Using the Accept header the client can\nspecify the format for the response. For example" }, { "code": null, "e": 3438, "s": 3364, "text": "Accept: application/xml returns XML\nAccept: application/json returns JSON" }, { "code": null, "e": 3535, "s": 3438, "text": "The below output shows the response is of XML when we pass the Accept Header as\napplication/XML." 
}, { "code": null, "e": 3634, "s": 3535, "text": "The below output shows the response is of JSON when we pass the Accept Header as\napplication/JSON." }, { "code": null, "e": 3935, "s": 3634, "text": "When the response is being sent to the client in the requested format, notice that the\nContent-Type header of the response is set to the appropriate value. For example, if\nthe client has requested application/xml, the server send the data in XML format and\nalso sets the Content-Type=application/xml." }, { "code": null, "e": 4141, "s": 3935, "text": "We can also specify quality factor. In the example below, xml has higher quality\nfactor than json, so the server uses XML formatter and formats the data in XML.\napplication/xml;q=0.8,application/json;q=0.5" } ]
What are multicasting delegates in C#?
A delegate that holds a reference to more than one method is called a multicasting delegate.

Let us see an example −

using System;
delegate void myDelegate(int val1, int val2);
public class Demo {
   public static void CalAdd(int val1, int val2) {
      Console.WriteLine("{0} + {1} = {2}", val1, val2, val1 + val2);
   }

   public static void CalSub(int val1, int val2) {
      Console.WriteLine("{0} - {1} = {2}", val1, val2, val1 - val2);
   }
}

public class Program {
   static void Main() {
      myDelegate d = new myDelegate(Demo.CalAdd);
      d += new myDelegate(Demo.CalSub);
      d(45, 70);
      d -= new myDelegate(Demo.CalAdd);
      d(95, 70);
      d += new myDelegate(Demo.CalSub);
      d(88, 6);
      d -= new myDelegate(Demo.CalAdd);
      d(40, 20);
      Console.Read();
   }
}

In the above example, our delegate is −

delegate void myDelegate(int val1, int val2);

Using the following, we have set a reference to more than one method in the delegate, i.e., CalAdd() and CalSub() −

myDelegate d = new myDelegate(Demo.CalAdd);
d += new myDelegate(Demo.CalSub);
d(45, 70);
d -= new myDelegate(Demo.CalAdd);
d(95, 70);
d += new myDelegate(Demo.CalSub);
d(88, 6);
d -= new myDelegate(Demo.CalAdd);
d(40, 20);
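The multicasting behavior − one invocation fanning out to every attached method − can be sketched in plain Python. This is not C#'s delegate machinery; it is a small illustration where a "delegate" is just a list of callables, and the class and function names are invented for the example.

```python
# A plain-Python sketch of multicasting: invoking the "delegate" calls
# every attached handler in order, mimicking C#'s += and -= operators.
class MulticastDelegate:
    def __init__(self):
        self.handlers = []

    def add(self, fn):                # like d += new myDelegate(fn)
        self.handlers.append(fn)

    def remove(self, fn):             # like d -= new myDelegate(fn)
        if fn in self.handlers:
            self.handlers.remove(fn)

    def __call__(self, a, b):
        return [fn(a, b) for fn in self.handlers]

def cal_add(a, b): return a + b
def cal_sub(a, b): return a - b

d = MulticastDelegate()
d.add(cal_add)
d.add(cal_sub)
print(d(45, 70))  # → [115, -25]: both attached methods ran
```

As in the C# example, removing a handler that is not attached is simply a no-op.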
[ { "code": null, "e": 1153, "s": 1062, "text": "A delegate that holds a reference to more than one method is called multicasting delegate." }, { "code": null, "e": 1177, "s": 1153, "text": "Let us see an example −" }, { "code": null, "e": 1864, "s": 1177, "text": "using System;\ndelegate void myDelegate(int val1, int val2);\npublic class Demo {\n public static void CalAdd(int val1, int val2) {\n Console.WriteLine(\"{0} + {1} = {2}\", val1, val2, val1 + val2);\n }\n\n public static void CalSub(int val1, int val2) {\n Console.WriteLine(\"{0} - {1} = {2}\", val1, val2, val1 - val2);\n }\n}\n\npublic class Program {\n static void Main() {\n myDelegate d = new myDelegate(Demo.CalAdd);\n d += new myDelegate(Demo.CalSub);\n d(45, 70);\n d -= new myDelegate(Demo.CalAdd);\n d(95, 70);\n d += new myDelegate(Demo.CalSub);\n d(88, 6);\n d -= new myDelegate(Demo.CalAdd);\n d(40, 20);\n Console.Read();\n }\n}" }, { "code": null, "e": 1904, "s": 1864, "text": "In the above example, our delegate is −" }, { "code": null, "e": 1950, "s": 1904, "text": "delegate void myDelegate(int val1, int val2);" }, { "code": null, "e": 2061, "s": 1950, "text": "Using the following, we have set a reference to more than one method in delegates i.e. CalAdd() and CalSub() −" }, { "code": null, "e": 2284, "s": 2061, "text": "myDelegate d = new myDelegate(Demo.CalAdd);\nd += new myDelegate(Demo.CalSub);\nd(45, 70);\nd -= new myDelegate(Demo.CalAdd);\nd(95, 70);\nd += new myDelegate(Demo.CalSub);\nd(88, 6);\nd -= new myDelegate(Demo.CalAdd);\nd(40, 20);" } ]
PyQt - QToolBar Widget
A QToolBar widget is a movable panel consisting of text buttons, buttons with icons or other widgets.

It is usually situated in a horizontal bar below the menu bar, although it can be floating. Some useful methods of the QToolBar class are as follows −

addAction() − Adds tool buttons having text or an icon
addSeparator() − Shows tool buttons in groups
addWidget() − Adds controls other than buttons in the toolbar
addToolBar() − QMainWindow class method that adds a new toolbar
setMovable() − Toolbar becomes movable
setOrientation() − Toolbar’s orientation is set to Qt.Horizontal or Qt.Vertical

Whenever a button on the toolbar is clicked, the actionTriggered() signal is emitted. Additionally, it sends a reference to the QAction object associated with the event to the connected function.

A File toolbar is added in the toolbar area by calling the addToolBar() method.

tb = self.addToolBar("File")

Although tool buttons with text captions can be added, a toolbar usually contains graphic buttons. A QAction object with an icon and a name is added to the toolbar.

new = QAction(QIcon("new.bmp"),"new",self)
tb.addAction(new)

Similarly, open and save buttons are added.
Finally, the actionTriggered() signal is connected to a slot function toolbtnpressed()

tb.actionTriggered[QAction].connect(self.toolbtnpressed)

The complete code to execute the example is as follows −

import sys
from PyQt4.QtCore import *
from PyQt4.QtGui import *

class tooldemo(QMainWindow):
   def __init__(self, parent = None):
      super(tooldemo, self).__init__(parent)
      layout = QVBoxLayout()
      tb = self.addToolBar("File")

      new = QAction(QIcon("new.bmp"),"new",self)
      tb.addAction(new)

      open = QAction(QIcon("open.bmp"),"open",self)
      tb.addAction(open)
      save = QAction(QIcon("save.bmp"),"save",self)
      tb.addAction(save)
      tb.actionTriggered[QAction].connect(self.toolbtnpressed)
      self.setLayout(layout)
      self.setWindowTitle("toolbar demo")

   def toolbtnpressed(self, a):
      print "pressed tool button is", a.text()

def main():
   app = QApplication(sys.argv)
   ex = tooldemo()
   ex.show()
   sys.exit(app.exec_())

if __name__ == '__main__':
   main()

The above code produces the following output −
[ { "code": null, "e": 2028, "s": 1926, "text": "A QToolBar widget is a movable panel consisting of text buttons, buttons with icons or other widgets." }, { "code": null, "e": 2171, "s": 2028, "text": "It is usually situated in a horizontal bar below menu bar, although it can be floating. Some useful methods of QToolBar class are as follows −" }, { "code": null, "e": 2183, "s": 2171, "text": "addAction()" }, { "code": null, "e": 2221, "s": 2183, "text": "Adds tool buttons having text or icon" }, { "code": null, "e": 2236, "s": 2221, "text": "addSeperator()" }, { "code": null, "e": 2265, "s": 2236, "text": "Shows tool buttons in groups" }, { "code": null, "e": 2277, "s": 2265, "text": "addWidget()" }, { "code": null, "e": 2324, "s": 2277, "text": "Adds controls other than button in the toolbar" }, { "code": null, "e": 2337, "s": 2324, "text": "addToolBar()" }, { "code": null, "e": 2381, "s": 2337, "text": "QMainWindow class method adds a new toolbar" }, { "code": null, "e": 2394, "s": 2381, "text": "setMovable()" }, { "code": null, "e": 2418, "s": 2394, "text": "Toolbar becomes movable" }, { "code": null, "e": 2435, "s": 2418, "text": "setOrientation()" }, { "code": null, "e": 2494, "s": 2435, "text": "Toolbar’s orientation sets to Qt.Horizontal or Qt.vertical" }, { "code": null, "e": 2680, "s": 2494, "text": "Whenever a button on the toolbar is clicked, ActionTriggered() signal is emitted. Additionally, it sends reference to QAction object associated with the event to the connected function." }, { "code": null, "e": 2756, "s": 2680, "text": "A File toolbar is added in the toolbar area by calling addToolBar() method." }, { "code": null, "e": 2786, "s": 2756, "text": "tb = self.addToolBar(\"File\")\n" }, { "code": null, "e": 2949, "s": 2786, "text": "Although tool buttons with text captions can be added, a toolbar usually contains graphic buttons. A QAction object with an icon and name is added to the toolbar." 
}, { "code": null, "e": 3011, "s": 2949, "text": "new = QAction(QIcon(\"new.bmp\"),\"new\",self)\ntb.addAction(new)\n" }, { "code": null, "e": 3055, "s": 3011, "text": "Similarly, open and save buttons are added." }, { "code": null, "e": 3138, "s": 3055, "text": "Finally, actionTriggered() signal is connected to a slot function toolbtnpressed()" }, { "code": null, "e": 3196, "s": 3138, "text": "tb.actionTriggered[QAction].connect(self.toolbtnpressed)\n" }, { "code": null, "e": 3253, "s": 3196, "text": "The complete code to execute the example is as follows −" }, { "code": null, "e": 4084, "s": 3253, "text": "import sys\nfrom PyQt4.QtCore import *\nfrom PyQt4.QtGui import *\n\nclass tooldemo(QMainWindow):\n def __init__(self, parent = None):\n super(tooldemo, self).__init__(parent)\n layout = QVBoxLayout()\n tb = self.addToolBar(\"File\")\n\t\t\n new = QAction(QIcon(\"new.bmp\"),\"new\",self)\n tb.addAction(new)\n\t\t\n open = QAction(QIcon(\"open.bmp\"),\"open\",self)\n tb.addAction(open)\n save = QAction(QIcon(\"save.bmp\"),\"save\",self)\n tb.addAction(save)\n tb.actionTriggered[QAction].connect(self.toolbtnpressed)\n self.setLayout(layout)\n self.setWindowTitle(\"toolbar demo\")\n\t\t\n def toolbtnpressed(self,a):\n print \"pressed tool button is\",a.text()\n\t\t\ndef main():\n app = QApplication(sys.argv)\n ex = tooldemo()\n ex.show()\n sys.exit(app.exec_())\n\t\nif __name__ == '__main__':\n main()" }, { "code": null, "e": 4131, "s": 4084, "text": "The above code produces the following output −" }, { "code": null, "e": 4168, "s": 4131, "text": "\n 146 Lectures \n 22.5 hours \n" }, { "code": null, "e": 4178, "s": 4168, "text": " ALAA EID" }, { "code": null, "e": 4185, "s": 4178, "text": " Print" }, { "code": null, "e": 4196, "s": 4185, "text": " Add Notes" } ]
How to rotate array elements by using JavaScript ? - GeeksforGeeks
27 Apr, 2020

Given an array containing some array elements and the task is to perform the rotation of the array with the help of JavaScript. There are two approaches that are discussed below:

Approach 1: We can use the Array unshift() method and Array pop() method to first pop the last element of the array and then insert it at the beginning of the array.

Example: This example rotates the array elements.

<!DOCTYPE HTML>
<html>
<head>
    <title>
        Rotate the elements in an array
        by using JavaScript Methods
    </title>
</head>
<body style="text-align:center;">
    <h1 style="color:green;">
        GeeksForGeeks
    </h1>
    <p id="GFG_UP"></p>
    <button onclick="myGFG()">
        Click Here
    </button>
    <p id="GFG_DOWN"></p>
    <script>
        var array = ['GFG_1', 'GFG_2', 'GFG_3', 'GFG_4'];
        var up = document.getElementById("GFG_UP");
        up.innerHTML = "Click on the button to perform"
            + " the operation<br>Array - [" + array + "]";
        var down = document.getElementById("GFG_DOWN");
        function arrayRotate(arr) {
            arr.unshift(arr.pop());
            return arr;
        }
        function myGFG() {
            array = arrayRotate(array);
            down.innerHTML = "elements of array = ["
                + array + "]";
        }
    </script>
</body>
</html>

Output:

Approach 2: We can use the Array push() method and Array shift() method to shift the first element and then insert it at the end.
Example: This example rotates the array elements to the left by one position.

<!DOCTYPE HTML>
<html>
<head>
    <title>
        Rotate the elements in an array
        by using JavaScript Methods
    </title>
</head>
<body style="text-align:center;">
    <h1 style="color:green;">
        GeeksForGeeks
    </h1>
    <p id="GFG_UP"></p>
    <button onclick="myGFG()">
        Click Here
    </button>
    <p id="GFG_DOWN"></p>
    <script>
        var array = ['GFG_1', 'GFG_2', 'GFG_3', 'GFG_4'];
        var up = document.getElementById("GFG_UP");
        up.innerHTML = "Click on the button to perform"
            + " the operation<br>Array - [" + array + "]";
        var down = document.getElementById("GFG_DOWN");

        function arrayRotate(arr) {
            arr.push(arr.shift());
            return arr;
        }

        function myGFG() {
            array = arrayRotate(array);
            down.innerHTML = "elements of array = ["
                + array + "]";
        }
    </script>
</body>
</html>
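Both approaches move the array by a single position; repeating the same step rotates by any number of positions. A minimal sketch of this idea (the helper names rotateRightBy and rotateLeftBy are our own, not part of the article):

```javascript
// Rotating by n positions just repeats the single-step trick n times.
// pop() + unshift() rotates to the right; shift() + push() rotates to the left.
function rotateRightBy(arr, n) {
    for (var i = 0; i < n; i++) {
        arr.unshift(arr.pop());
    }
    return arr;
}

function rotateLeftBy(arr, n) {
    for (var i = 0; i < n; i++) {
        arr.push(arr.shift());
    }
    return arr;
}

console.log(rotateRightBy(['GFG_1', 'GFG_2', 'GFG_3', 'GFG_4'], 2));
// -> ['GFG_3', 'GFG_4', 'GFG_1', 'GFG_2']
```

Note that both helpers mutate the array in place, just like the article's arrayRotate().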
JavaScript function in href vs. onClick
Both onclick & href have different behaviors when calling JavaScript directly. Also the script in href won’t get executed if the time difference is short. This is for the time between two clicks. Here’s an example showing the usage of href vs onClick in JavaScript. Live Demo <html> <head> <title>JavaScript href vs onClick()</title> <script> function myFunc() { var v = 0; for (var j=0; j<1000; j++) { v+=j; } alert(v); } </script> <a href="javascript:myFunc()">href</a> <a href="#" onclick="javascript:myFunc()">onclick</a> </head> <body> </body> </html>
How to return current URL for a share button using JavaScript ? - GeeksforGeeks
15 Oct, 2020

In this post, we are going to learn the logic behind the share buttons displayed on websites, which can be used to share a post or the complete URL of the site on other social media platforms. We will go through this in several steps for understanding.

Step 1: First, we need to create an HTML file that will display everything in the browser.

<!DOCTYPE html>
<html>
<head>
    <style type="text/css" media="all">
        /* This CSS is optional */
        #share-button {
            padding: 10px;
            font-size: 24px;
        }
    </style>
</head>
<body>
    <!-- The share button -->
    <button id="share-button">Share</button>

    <!-- These line breaks are optional -->
    <br/>
    <br/>
    <br/>

    <!-- The anchor tag for the sharing link -->
    <a href="#"></a>

    <script type="text/javascript" charset="utf-8">
        // We will write the javascript code here
    </script>
</body>
</html>

Output: Result for HTML code.

Above is the HTML code, and we will be writing only the JavaScript part from this point.

Step 2: In this step, we will add the JavaScript code, which will display the current web page's URL as a link.

Here, 'window' is a global object that has various properties, of which location is one; its href property provides the complete URL with the path and the protocol used.

But we don't need the protocol prefix, so we remove it using the slice method of JavaScript (slice(7) strips the leading 'http://', i.e. the first seven characters).
<!DOCTYPE html>
<html>
<head>
    <style type="text/css" media="all">
        /* This CSS is optional */
        #share-button {
            padding: 10px;
            font-size: 24px;
        }
    </style>
</head>
<body>
    <!-- The share button -->
    <button id="share-button">Share</button>
    <br/>
    <br/>
    <br/>
    <!-- The anchor tag for the sharing link -->
    <a href="#"></a>

    <script type="text/javascript" charset="utf-8">
        // Storing the URL of the current webpage
        const URL = window.location.href.slice(7);
        // We used the slice method to remove
        // the 'http://' from the prefix

        // Displaying the current webpage link
        // on the browser window
        const link = document.querySelector('a');
        link.textContent = URL;
        link.href = URL;

        // Displaying in the console
        console.log(URL);
    </script>
</body>
</html>

Output: Result for Step 2.

Step 3: We don't want the URL to be displayed as soon as the browser window loads; instead, we want it to be displayed when we click the button.
So we will add an event listener to the button and then display the URL when the button is clicked.

<!DOCTYPE html>
<html>
<head>
    <style type="text/css" media="all">
        /* This CSS is optional */
        #share-button {
            padding: 10px;
            font-size: 24px;
        }
    </style>
</head>
<body>
    <!-- The share button -->
    <button id="share-button">Share</button>
    <br/>
    <br/>
    <br/>
    <!-- The anchor tag for the sharing link -->
    <a href="#"></a>

    <script type="text/javascript" charset="utf-8">
        // Storing the URL of the current webpage
        const URL = window.location.href.slice(7);
        // We used the slice method to remove
        // the 'http://' from the prefix

        const link = document.querySelector('a');
        const button = document.querySelector('#share-button');

        // Adding a mouse click event listener to the button
        button.addEventListener('click', () => {
            // Displaying the current webpage link
            // on the browser window
            link.textContent = URL;
            link.href = URL;

            // Displaying in the console
            console.log(URL);
        });
    </script>
</body>
</html>

Output:

Step 4: Now the link is displayed when the button is clicked, but we don't need a link to the current page but rather a link that shares the current webpage on a social media platform. So we update the code as follows.
<!DOCTYPE html>
<html>
<head>
    <style type="text/css" media="all">
        /* This CSS is optional */
        #share-button {
            padding: 10px;
            font-size: 24px;
        }
    </style>
</head>
<body>
    <!-- The share button -->
    <button id="share-button">Share</button>
    <br/>
    <br/>
    <br/>
    <!-- The anchor tag for the sharing link -->
    <a href="#"></a>

    <script type="text/javascript" charset="utf-8">
        // Facebook share url
        const fbShare = "https://www.facebook.com/sharer/sharer.php?u=";

        // Storing the URL of the current webpage
        const URL = window.location.href.slice(7);
        // We used the slice method to remove
        // the 'http://' from the prefix

        const link = document.querySelector('a');
        const button = document.querySelector('#share-button');

        // Adding a mouse click event listener to the button
        button.addEventListener('click', () => {
            // Displaying the sharing link
            // on the browser window
            link.textContent = fbShare + URL;
            link.href = fbShare + URL;

            // Displaying in the console
            console.log(fbShare + URL);
        });
    </script>
</body>
</html>

Output:

This is how we can get a sharing URL. Instead of displaying the sharing URL, we can actually use it within the button to directly redirect the user to the particular social media platform. Also, I have used the Facebook share URL here; similar URLs are available for almost all social media platforms, so feel free to experiment with them and come up with your own share button. This post has covered all the essentials of sharing the current web page's URL using JavaScript.
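The share link itself can be built as a pure function, which also makes it easy to URL-encode the page address before appending it as a query parameter (the article's code concatenates it unencoded). A minimal sketch, with buildShareUrl being our own helper name:

```javascript
// Build a Facebook sharer link for a given page URL.
// encodeURIComponent is used because the page URL is passed
// as the value of the "u" query parameter.
function buildShareUrl(pageUrl) {
    var fbShare = "https://www.facebook.com/sharer/sharer.php?u=";
    return fbShare + encodeURIComponent(pageUrl);
}

console.log(buildShareUrl("www.geeksforgeeks.org/some-post"));
// -> https://www.facebook.com/sharer/sharer.php?u=www.geeksforgeeks.org%2Fsome-post
```

Inside the click handler, the result of buildShareUrl(...) could then be assigned to link.href, or passed to window.open() to redirect the user immediately.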
How can Keras be used to save model using hdf5 format in Python?
Tensorflow is a machine learning framework that is provided by Google. It is an open-source framework used in conjunction with Python to implement algorithms, deep learning applications and much more. It is used in research and for production purposes.

The 'tensorflow' package can be installed on Windows using the below line of code:

pip install tensorflow

Tensor is a data structure used in TensorFlow. It helps connect edges in a flow diagram. This flow diagram is known as the 'Data flow graph'. Tensors are nothing but multidimensional arrays or lists.

Keras was developed as a part of research for the project ONEIROS (Open-ended Neuro-Electronic Intelligent Robot Operating System). Keras is a deep learning API, which is written in Python. It is a high-level API that has a productive interface that helps solve machine learning problems.

It runs on top of the Tensorflow framework. It was built to help experiment in a quick manner. It provides essential abstractions and building blocks that are essential in developing and encapsulating machine learning solutions. It is highly scalable, and comes with cross-platform abilities. This means Keras can be run on TPUs or clusters of GPUs. Keras models can also be exported to run in a web browser or on a mobile phone as well.

Keras is already present within the Tensorflow package. It can be accessed using the below line of code:

import tensorflow
from tensorflow import keras

We are using the Google Colaboratory to run the below code. Google Colab or Colaboratory helps run Python code over the browser, requires zero configuration, and gives free access to GPUs (Graphical Processing Units). Colaboratory has been built on top of Jupyter Notebook.
Following is the code (note that 'model' is assumed to be an already built and compiled Keras model):

import tensorflow as tf

print("The model is saved to HDF5 format")
model.save('my_model.h5')
print("The same model is recreated with same weights and optimizer")
new_model = tf.keras.models.load_model('my_model.h5')
print("The architecture of the model is observed")
new_model.summary()

Code credit - https://www.tensorflow.org/tutorials/keras/save_and_load

The newly created model can be saved using the 'save' function. It can be specifically saved to hdf5 format using the extension 'h5'. The model is then loaded back with the same weights and optimizer. The details of the loaded model are displayed on the console using the 'summary' method.
JavaScript Assignment
Assignment operators assign values to JavaScript variables.

The **= operator is a part of ECMAScript 2016.

The = assignment operator assigns a value to a variable.
The += assignment operator adds a value to a variable.
The -= assignment operator subtracts a value from a variable.
The *= assignment operator multiplies a variable.
The /= assignment operator divides a variable.
The %= assignment operator assigns a remainder to a variable.

Exercise: Use the correct assignment operator that will result in x being 15 (same as x = x + y).

x = 10;
y = 5;
x y;
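The operators listed above can be seen in action in a short script (the intermediate values in the comments are our own working, not from the original page):

```javascript
// Each assignment operator updates the variable in place.
var x = 10;
x += 5;   // x = x + 5   -> 15
x -= 3;   // x = x - 3   -> 12
x *= 2;   // x = x * 2   -> 24
x /= 4;   // x = x / 4   -> 6
x %= 4;   // x = x % 4   -> 2
x **= 3;  // x = x ** 3  -> 8 (the **= operator is ECMAScript 2016)
console.log(x); // 8
```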
Complete guide on How to build a Video Player in Android - GeeksforGeeks
16 Jul, 2021

This article explains the step-by-step process of building a video player using Android Studio. For viewing videos in Android, there is a special class called "MediaPlayer". In this article, we will have 2 videos which are connected by a dialog box, i.e. a dialog box appears after the first video finishes and asks the user whether to replay it or play the next video. To insert videos in Android, we put them in the "raw" folder, which is present at:

"app" --> "res" --> "raw"

In this folder, you just need to paste the videos you want to play.

Steps to build a Video Player:

In creating the frontend we just need one component, i.e. VideoView. Icons like play, rewind, and forward only appear when we touch the VideoView; they stay for just 3 seconds and then disappear. This is provided by Google and is its default behaviour.

Coming to the back-end part, i.e. the Java coding, we get the media controls with:

vw.setMediaController(new MediaController(this));

Then we add the videos of the raw folder to an ArrayList and call a method named setVideo(), passing it the first video:

// big video songs are not running
videolist.add(R.raw.faded);
videolist.add(R.raw.aeroplane);
setVideo(videolist.get(0));

Now, in the definition of setVideo(), we need a Uri object to pass to a method called setVideoURI(). Therefore:

String uriPath = "android.resource://" + getPackageName() + "/" + id;
Uri uri = Uri.parse(uriPath);
vw.setVideoURI(uri);
vw.start();

Note: The first video starts playing as soon as the application is launched. This is because we call setVideo() from inside onCreate(), and setVideo() in turn calls vw.start(), where vw is the VideoView. The code for generating the dialog box goes inside the method called onCompletion().
Please refer to this article for how to generate a Dialog Box.

// It is creating object of AlertDialog
AlertDialog.Builder obj = new AlertDialog.Builder(this);

At last, we handle the user's action, i.e. what the user has clicked (Replay or Next). Simple increment and wrap-around logic is used:

public void onClick(DialogInterface dialog, int which) {
    if (which == -1) {
        vw.seekTo(0);
        vw.start();
    }
    else {
        ++currvideo;
        if (currvideo == videolist.size())
            currvideo = 0;
        setVideo(videolist.get(currvideo));
    }
}
The complete code (activity_main and MainActivity) for the above discussed program is given below:

activity_main.xml:

<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".MainActivity">

    <VideoView
        android:id="@+id/vidvw"
        android:layout_marginTop="10dp"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />
</RelativeLayout>

MainActivity.java:

package com.example.videoapp_demo;

import android.content.DialogInterface;
import android.media.MediaPlayer;
import android.net.Uri;
import android.support.v7.app.AlertDialog;
import android.support.v7.app.AppCompatActivity;
import android.os.Bundle;
import android.widget.MediaController;
import android.widget.VideoView;

import java.util.ArrayList;

public class MainActivity extends AppCompatActivity
        implements MediaPlayer.OnCompletionListener {
    VideoView vw;
    ArrayList<Integer> videolist = new ArrayList<>();
    int currvideo = 0;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        vw = (VideoView) findViewById(R.id.vidvw);
        vw.setMediaController(new MediaController(this));
        vw.setOnCompletionListener(this);

        // video name should be in lower case alphabet.
        videolist.add(R.raw.middle);
        videolist.add(R.raw.faded);
        videolist.add(R.raw.aeroplane);
        setVideo(videolist.get(0));
    }

    public void setVideo(int id) {
        String uriPath = "android.resource://" + getPackageName() + "/" + id;
        Uri uri = Uri.parse(uriPath);
        vw.setVideoURI(uri);
        vw.start();
    }

    public void onCompletion(MediaPlayer mediaPlayer) {
        AlertDialog.Builder obj = new AlertDialog.Builder(this);
        obj.setTitle("Playback Finished!");
        obj.setIcon(R.mipmap.ic_launcher);
        MyListener m = new MyListener();
        obj.setPositiveButton("Replay", m);
        obj.setNegativeButton("Next", m);
        obj.setMessage("Want to replay or play next video?");
        obj.show();
    }

    class MyListener implements DialogInterface.OnClickListener {
        public void onClick(DialogInterface dialog, int which) {
            if (which == -1) {
                vw.seekTo(0);
                vw.start();
            }
            else {
                ++currvideo;
                if (currvideo == videolist.size())
                    currvideo = 0;
                setVideo(videolist.get(currvideo));
            }
        }
    }
}

Output:

Playing the first video: the first song, "faded".

Dialogue box after the first video: after completion of the first song, the dialog box is generated.

Playing the second video: when we click on "NEXT", the second video starts running.
AngularJS | fetch data from API using HttpClient - GeeksforGeeks
22 Dec, 2021

There is some data in the API and our task here is to fetch data from that API using HTTP and display it. In this article, we will use a case where the API contains employee details, which we will fetch. The API is a fake API in which data is stored in the form of JSON (Key: Value) pairs.

API: API stands for Application Programming Interface, which is a software intermediary that allows two applications to communicate with each other. Angular offers HttpClient to work with APIs and handle data easily. In this approach, HttpClient along with the subscribe() method will be used for fetching the data. The following steps are to be followed to solve the problem.

Step 1: Create the necessary component and application.

Step 2: Do the necessary imports for HttpClient in the module.ts file.

import { HttpClientModule } from '@angular/common/http';

@NgModule({
  declarations: [
  ],
  imports: [
    HttpClientModule,
  ],
  providers: [],
  bootstrap: []
})

Step 3: Do the necessary imports for HttpClient in the component.ts file.

import { HttpClient } from '@angular/common/http';

export class ShowApiComponent implements OnInit {
  constructor(private http: HttpClient) {
  }
}

Step 4: We get the response from the API by passing the API url to the get() method and then subscribing to it.

this.http.get('API url').subscribe(parameter)

The response of the API is stored in a variable from which the data can be accessed.

Step 5: Now the data array needs to be shown using HTML. A table is used, in which rows are added dynamically according to the size of the data array. For this, rows are created using *ngFor, and the data is then shown from each row.

Prerequisite: Here you will need an API for getting data.
A fake API can also be created and the data stored there – "https://www.mocky.io/".

Example: Here, for example, we already have a fake API which contains employee data which we will fetch. API: "http://www.mocky.io/v2/5ea172973100002d001eeada"

Steps to display the data:

Step 1: The required Angular app and component (here, the show-api component) is created.

Step 2: To use HttpClient in our app, HttpClientModule is imported in app.module.ts.

app.module.ts:

import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { HttpClientModule } from '@angular/common/http';

import { AppRoutingModule } from './app-routing.module';
import { AppComponent } from './app.component';
import { ShowApiComponent } from './show-api/show-api.component';

@NgModule({
  declarations: [
    AppComponent,
    ShowApiComponent
  ],
  imports: [
    BrowserModule,
    AppRoutingModule,
    HttpClientModule
  ],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }

Step 3: In the TypeScript file of the component (here, show-api.component.ts), import HttpClient. HttpClient helps to render and fetch the data. The Employee Details API is used to get the data. We get the response from the API by passing the API url to the get() method and then subscribing to it. The response of the API is stored in a variable named li, from which the data array (its list property) is further stored in an array named lis here. The lis array will help us show the data. A user-defined function is called when the response comes, to hide the loader.
show-api.component.ts:

import { Component, OnInit } from '@angular/core';
import { HttpClient } from '@angular/common/http';

@Component({
  selector: 'app-show-api',
  templateUrl: './show-api.component.html',
  styleUrls: ['./show-api.component.css']
})
export class ShowApiComponent implements OnInit {
  li: any;
  lis = [];

  constructor(private http: HttpClient) { }

  ngOnInit(): void {
    this.http.get('http://www.mocky.io/v2/5ea172973100002d001eeada')
      .subscribe(Response => {
        // If a response comes, hideloader() is called
        // to hide the loader
        if (Response) {
          hideloader();
        }
        console.log(Response);
        this.li = Response;
        this.lis = this.li.list;
      });

    function hideloader() {
      document.getElementById('loading').style.display = 'none';
    }
  }
}
// The url of the api is passed to get() and then subscribed; the
// response is stored in li and its data array is copied into lis

Step 4: Now the data array needs to be shown using HTML. A table is used, in which rows are added dynamically according to the size of the data array. For this, rows are created using *ngFor, and the data is then shown from each row. In this file a loader is added, which is shown until the response comes.

show-api.component.html:

<h1>Registered Employees</h1>
<div class="d-flex justify-content-center">
  <div class="spinner-border" role="status">
    <span class="sr-only" id="loading">Loading...</span>
  </div>
</div>
<table class="table" id="tab">
  <thead>
    <tr>
      <th scope="col">Name</th>
      <th scope="col">Position</th>
      <th scope="col">Office</th>
      <th scope="col">Salary</th>
    </tr>
  </thead>
  <tbody>
    <tr *ngFor="let e of lis;">
      <td>{{ e.name }}</td>
      <td>{{ e.position }}</td>
      <td>{{ e.office }}</td>
      <td>{{ e.salary }}</td>
    </tr>
  </tbody>
</table>

Output: In the console, the data array of the response can also be seen, which is further used to show the data.
C# | Trim() Method - GeeksforGeeks
31 Jan, 2019

C# Trim() is a string method. It removes all leading and trailing white-space characters from the current String object. The method can be overloaded by passing arguments to it.

Syntax:

public string Trim()
or
public string Trim (params char[] trimChars)

Explanation: The first form takes no parameter, and the second form takes an array of Unicode characters (or null, which is allowed because of the params keyword) as a parameter. The return type of the Trim() method is System.String.

Note: If no parameter is passed to public string Trim(), then null, TAB, carriage-return and white-space characters are removed automatically if they are present at the ends of the current string object. If parameters are passed to Trim(), then only the specified characters (those passed as arguments) are removed from the current string object; null, TAB, carriage return and white space are not removed automatically unless they appear in the argument list.

Below are programs that demonstrate the above method:

Example 1: Program to demonstrate the public string Trim() method. The Trim method removes all leading and trailing white-space characters from the current string object. Each leading and trailing trim operation stops when a non-white-space character is encountered.
For example, if the current string is " abc xyz ", then the Trim method returns "abc xyz".

// C# program to illustrate the
// Trim() method without any parameters
using System;

class GFG {

    // Main Method
    public static void Main()
    {
        string s1 = " GFG";
        string s2 = " GFG ";
        string s3 = "GFG ";

        // Before Trim method call
        Console.WriteLine("Before:");
        Console.WriteLine(s1);
        Console.WriteLine(s2);
        Console.WriteLine(s3);
        Console.WriteLine("");

        // After Trim method call
        Console.WriteLine("After:");
        Console.WriteLine(s1.Trim());
        Console.WriteLine(s2.Trim());
        Console.WriteLine(s3.Trim());
    }
}

Output:

Before:
 GFG
 GFG 
GFG 

After:
GFG
GFG
GFG

Example 2: Program to demonstrate the public string Trim (params char[] trimChars) method. The Trim method removes from the current string all leading and trailing characters which are present in the parameter list. Each leading and trailing trim operation stops when a character which is not in trimChars is encountered.
For example, if the current string is "123abc456xyz789" and trimChars contains the digits from '1' to '9', then the Trim method returns "abc456xyz".

// C# program to illustrate the
// Trim() method with parameters
using System;

class GFG {

    // Main Method
    public static void Main()
    {
        // declare char[] array and
        // initialize characters 1 to 9
        char[] charsToTrim1 = {'1', '2', '3', '4', '5',
                               '6', '7', '8', '9'};
        string s1 = "123abc456xyz789";
        Console.WriteLine("Before:" + s1);
        Console.WriteLine("After:" + s1.Trim(charsToTrim1));
        Console.WriteLine("");

        char[] charsToTrim2 = { '*', '1', 'c' };
        string s2 = "*123xyz********c******c";
        Console.WriteLine("Before:" + s2);
        Console.WriteLine("After:" + s2.Trim(charsToTrim2));
        Console.WriteLine("");

        char[] charsToTrim3 = { 'G', 'e', 'k', 's' };
        string s3 = "GeeksForGeeks";
        Console.WriteLine("Before:" + s3);
        Console.WriteLine("After:" + s3.Trim(charsToTrim3));
        Console.WriteLine("");

        string s4 = " Geeks0000";
        Console.WriteLine("Before:" + s4);
        Console.WriteLine("After:" + s4.Trim('0'));
    }
}

Output:

Before:123abc456xyz789
After:abc456xyz

Before:*123xyz********c******c
After:23xyz

Before:GeeksForGeeks
After:For

Before: Geeks0000
After: Geeks

Important Points About Trim() Method:

If the Trim method removes any characters from the current instance, this method does not modify the value of the current instance. Instead, it returns a new string in which all leading and trailing white-space characters of the current instance have been removed.

If the current string equals Empty, or all the characters in the current instance are white-space characters, the method returns Empty.

Reference: https://msdn.microsoft.com/en-us/library/system.string.trim

Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here.
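For readers who want to experiment with the same trimming rules outside C#, Python's str.strip behaves very similarly: with no argument it removes leading and trailing whitespace, and with a character set it removes any of those characters from both ends. This is an illustrative comparison sketch, not part of the original C# article; like C#'s Trim, strip returns a new string and leaves the original unchanged.

```python
# Sketch: Python's str.strip mirrors C#'s Trim() and Trim(params char[]).

# No argument: strip leading/trailing whitespace, like s.Trim()
print(" GFG ".strip())                             # GFG

# With a character set: strip any of the given characters from both
# ends, like s.Trim('1', '2', ..., '9')
print("123abc456xyz789".strip("123456789"))        # abc456xyz
print("*123xyz********c******c".strip("*1c"))      # 23xyz
print("GeeksForGeeks".strip("Geks"))               # For
```

Note that, as in C#, trimming stops as soon as a character outside the set is seen, which is why the inner "456" in "123abc456xyz789" survives.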
The image() object in JavaScript.
The Image object represents the HTML <img> element.

Following is the code for the Image object in JavaScript −

<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Document</title>
<style>
   body {
      font-family: "Segoe UI", Tahoma, Geneva, Verdana, sans-serif;
   }
   .result {
      font-size: 18px;
      font-weight: 500;
      color: rebeccapurple;
   }
</style>
</head>
<body>
<h1>The image() object in JavaScript</h1>
<div class="result"><br /></div>
<button class="Btn">CLICK HERE</button>
<h3>Click on the above button to display an image</h3>
<script>
   let resEle = document.querySelector(".result");
   let BtnEle = document.querySelector(".Btn");
   let newImage = new Image(500, 300);
   newImage.src = "https://i.picsum.photos/id/195/536/354.jpg";
   BtnEle.addEventListener("click", () => {
      resEle.appendChild(newImage);
   });
</script>
</body>
</html>

The above code will produce the following output −

On clicking the ‘CLICK HERE’ button −
How to search on a MySQL varchar column using a number?
Use the INSERT() function from MySQL. It takes the following parameters: the original string, the 1-based position at which to insert, the number of characters to replace, and the new substring, i.e. INSERT(str, pos, len, newstr).

Let us first create a table −

mysql> create table DemoTable
   -> (
   -> Code varchar(100)
   -> );
Query OK, 0 rows affected (0.82 sec)

Insert some records in the table using insert command −

mysql> insert into DemoTable values('958575/98');
Query OK, 1 row affected (0.16 sec)

mysql> insert into DemoTable values('765432/99');
Query OK, 1 row affected (0.19 sec)

mysql> insert into DemoTable values('983456/91');
Query OK, 1 row affected (0.15 sec)

Display all records from the table using select statement −

mysql> select *from DemoTable;

This will produce the following output −

+-----------+
| Code      |
+-----------+
| 958575/98 |
| 765432/99 |
| 983456/91 |
+-----------+
3 rows in set (0.00 sec)

Here is the query to search on a varchar column using a number −

mysql> select *from DemoTable where Code =INSERT(76543299,7,0,'/');

This will produce the following output −

+-----------+
| Code      |
+-----------+
| 765432/99 |
+-----------+
1 row in set (0.00 sec)
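The string surgery that INSERT() performs is easy to verify outside MySQL with plain slicing. The sketch below is a hedged Python illustration of the INSERT(str, pos, len, newstr) semantics used in the query above — the function name mysql_insert is made up for this example, and edge cases such as an out-of-range position (where MySQL returns the string unchanged) are not handled.

```python
def mysql_insert(s, pos, length, newstr):
    """Sketch of MySQL's INSERT(str, pos, len, newstr): replace `length`
    characters of `s`, starting at 1-based position `pos`, with `newstr`."""
    s = str(s)
    i = pos - 1                        # convert 1-based pos to 0-based index
    return s[:i] + newstr + s[i + length:]

# INSERT(76543299, 7, 0, '/') inserts '/' before the 7th character,
# replacing nothing, which turns the number into the stored code format.
print(mysql_insert(76543299, 7, 0, "/"))   # 765432/99
```

This shows why the WHERE clause matches the row '765432/99': the numeric search value is reshaped into the same slash-separated format as the stored column.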
Null Aware Operators in Dart Programming
Dart has different null-aware operators that we can use to make sure that we are not accessing null values and to deal with them in a subtle way. Mainly, these are −

?? operator
??= operator
? operator

We will go through each of them in the following article.

The ?? operator returns the first expression if and only if it is not null.

Consider the example shown below −

void main() {
   var age;
   age = age ?? 23;
   print(age);

   var name = "mukul";
   name = name ?? "suruchi";
   print(name);
}

In the above example, we declared two variables; one of them holds a null value and the other is not null and contains a string value. We use the ?? operator when reassigning values to those variables. For the first variable, since age is null, the ?? operator returns the second value, i.e., 23; in the second case, the name variable is not null, hence the first value is returned by the ?? operator.

23
mukul

The ??= operator in Dart assigns a value to a variable if and only if that variable is currently null.

Consider the example shown below −

void main() {
   var age;
   var myAge = 24;
   myAge ??= age;
   print(myAge);
}

In the above example we have two variables; one of them is null and the other contains an int value. When we try to assign the value of the age variable to the myAge variable, nothing happens, because myAge already holds a non-null value, and hence ??= doesn't change the original value of the myAge variable.

24

The ?. operator is used when we want to make sure that we don't access a member on a null value. It accesses the member if and only if the object is not null.

Consider the example shown below −

void main() {
   var earthMoon;
   var length = earthMoon?.length;
   print(length);
}

In the above code, the variable earthMoon has null as its value, so when we try to access length on it using the ?. operator, nothing is invoked and the length variable is also null.

null
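For comparison, the same three behaviors can be written out in Python, which has no dedicated null-aware operators; the hedged sketch below spells out the semantics with explicit None checks (the helper name if_null is invented for this illustration, and it is not Dart code):

```python
# Sketch: Dart's null-aware operators expressed as explicit None checks.

def if_null(value, fallback):
    # Dart's `value ?? fallback`: keep value unless it is null/None
    return fallback if value is None else value

# ?? : the first non-null operand wins
age = None
age = if_null(age, 23)
print(age)                      # 23

name = "mukul"
name = if_null(name, "suruchi")
print(name)                     # mukul

# ??= : assign only when the variable itself is currently null
my_age = 24
if my_age is None:              # my_age is not None, so no assignment
    my_age = age
print(my_age)                   # 24

# ?. : access a member only when the object is not null
earth_moon = None
length = None if earth_moon is None else len(earth_moon)
print(length)                   # None (Dart prints `null`)
```

Note that Python's `or` is sometimes used for the first pattern, but it also replaces falsy values such as 0 and "", so an explicit `is None` check matches Dart's semantics more faithfully.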
How to select elements by data attribute using CSS? - GeeksforGeeks
06 Mar, 2019

CSS allows you to select HTML elements that have specific attributes or attribute values. An element can be selected in a number of ways. Some examples are given below:

[attribute]: It selects the elements with the specified attribute.
[attribute="value"]: It selects the elements with the specified attribute and value.
[attribute~="value"]: It selects the elements with an attribute value which contains the specified word.
[attribute|="value"]: It selects the elements with the specified attribute whose value is exactly the specified value, or starts with the specified value followed by a hyphen.
[attribute^="value"]: It selects the elements whose attribute value begins with the specified value.
[attribute$="value"]: It selects the elements whose attribute value ends with the specified value.
[attribute*="value"]: It selects the elements whose attribute value contains the specified value.

Example 1: This example changes the background-color of the <a> element by selecting the element with [target] using CSS.

<!DOCTYPE html>
<html>

<head>
    <title>
        Attribute selector in CSS
    </title>
    <style>
        a[target] {
            background-color: yellow;
        }

        a {
            font-size: 20px;
        }
    </style>
</head>

<body style="text-align:center;">
    <h1 style="color:green;">
        GeeksForGeeks
    </h1>
    <a href="https://www.geeksforgeeks.org" target="_blank">
        geeksforgeeks.org
    </a>
    <br><br>
    <a href="https://www.google.com">
        google.com
    </a>
</body>

</html>

Output:

Example 2: This example changes the background-color and text-color of the <a> element by selecting the element having [target="_top"] using CSS.
<!DOCTYPE html>
<html>

<head>
    <title>
        Attribute selector in CSS
    </title>
    <style>
        a[target=_top] {
            background-color: green;
            color: white;
        }

        a {
            font-size: 20px;
        }
    </style>
</head>

<body style="text-align:center;">
    <h1 style="color:green;">
        GeeksForGeeks
    </h1>
    <a href="https://www.geeksforgeeks.org" target="_top">
        geeksforgeeks.org
    </a>
    <br><br>
    <a href="https://www.google.com">
        google.com
    </a>
</body>

</html>

Output:

Example 3: This example changes the background-color and text-color of the <p> element by selecting the element having [class^="top"] using CSS.

<!DOCTYPE html>
<html>

<head>
    <title>
        Attribute selector in CSS
    </title>
    <style>
        [class^="top"] {
            background-color: green;
            color: white;
        }

        p {
            font-size: 20px;
        }
    </style>
</head>

<body style="text-align:center;">
    <h1 style="color:green;">
        GeeksForGeeks
    </h1>
    <p class="top-p">A computer science portal</p>
    <p class="topPara">Attribute Selector Example</p>
    <p class="Para">CSS does not apply here</p>
</body>

</html>

Output:
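The seven attribute-selector forms listed above reduce to simple string predicates. The hedged Python sketch below mirrors how a matcher could evaluate each operator against an attribute value; it is an illustration of the selector semantics only, not a real CSS engine, and the function name attr_matches is invented for this example.

```python
# Sketch: the CSS attribute-selector operators as string predicates.

def attr_matches(op, attr_value, needle):
    """Return True if `attr_value` satisfies [attr <op> "needle"]."""
    if op == "":
        return True                                  # [attribute]: attribute present
    if op == "=":
        return attr_value == needle                  # exact match
    if op == "~=":
        return needle in attr_value.split()          # whole word in space-separated list
    if op == "|=":
        return (attr_value == needle
                or attr_value.startswith(needle + "-"))  # value or value- prefix
    if op == "^=":
        return attr_value.startswith(needle)         # begins with
    if op == "$=":
        return attr_value.endswith(needle)           # ends with
    if op == "*=":
        return needle in attr_value                  # contains substring
    raise ValueError("unknown operator: " + op)

# class="top-p" matches [class^="top"], as in Example 3
print(attr_matches("^=", "top-p", "top"))    # True
print(attr_matches("^=", "Para", "top"))     # False
```

This makes the difference between ~= (whole space-separated word), |= (value or value followed by a hyphen) and *= (any substring) easy to test interactively.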
Program for Point of Intersection of Two Lines in C++
Given points A and B corresponding to line AB, and points C and D corresponding to line CD, the task is to find the point of intersection between these two lines.

Note − The points are given in the 2D plane as X and Y coordinates.

Here A(a1, a2), B(b1, b2) and C(c1, c2), D(d1, d2) are the coordinates forming the two distinct lines, and P(p1, p2) is the point of intersection (just for diagrammatic explanation of the point of intersection).

How to find the point of intersection −

Using the points (a1, a2), (b1, b2), (c1, c2), (d1, d2), we compute:

A1 = b2 - a2
B1 = a1 - b1
C1 = (A1 * a1) + (B1 * a2)

A2 = d2 - c2
B2 = c1 - d1
C2 = (A2 * c1) + (B2 * c2)

Let the given lines be:
1. A1x + B1y = C1
2. A2x + B2y = C2

Now, to find the point of intersection, we have to solve these 2 equations. Multiplying 1 by B2 and 2 by B1, we get:

A1B2x + B1B2y = C1B2
A2B1x + B2B1y = C2B1

Subtracting these, we get:

(A1B2 - A2B1)x = C1B2 - C2B1

This gives us the value of x, and similarly we get the value of y; together they form the point of intersection P, where p1 is x and p2 is y.

Note − the above formula gives the point of intersection of the two lines, but if segments are given instead of lines, then we must also check that the computed point lies on both line segments:

min (x1, x2) <= x <= max (x1, x2)
min (y1, y2) <= y <= max (y1, y2)

Approach we are using to solve the above problem −

Take the input values.
Find the determinant, which is A1 * B2 - A2 * B1.
If the determinant = 0, then the lines are parallel.
If the determinant is not zero, then x = (C1 * B2 - C2 * B1) / det and y = (A1 * C2 - A2 * C1) / det.
Return and print the result.
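The determinant formulas above can be checked quickly in Python before writing the C++ version. The sketch below uses the same A, B, C coefficients as the derivation; the point values are the same illustrative ones used in the C++ program (a vertical line x = 2 and a horizontal line y = 4).

```python
def line_coeffs(p, q):
    """Return (A, B, C) for the line through p and q, written as Ax + By = C."""
    a = q[1] - p[1]
    b = p[0] - q[0]
    c = a * p[0] + b * p[1]
    return a, b, c

def intersection(p1, p2, p3, p4):
    """Intersection of line p1p2 and line p3p4, or None if parallel."""
    a1, b1, c1 = line_coeffs(p1, p2)
    a2, b2, c2 = line_coeffs(p3, p4)
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None                       # parallel (or coincident) lines
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y

# Vertical line x = 2 meets horizontal line y = 4 at (2, 4)
print(intersection((2, 1), (2, 7), (4, 4), (6, 4)))   # (2.0, 4.0)
```

Note that, as in the derivation, a zero determinant only tells us the lines do not meet in a single point; distinguishing parallel from coincident lines would need an extra check.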
Start
Step 1-> Declare function to print the x and y coordinates
   void display(mk_pair par)
      Print par.first and par.second
Step 2-> declare function to calculate the intersection point
   mk_pair intersection(mk_pair A, mk_pair B, mk_pair C, mk_pair D)
      Declare double a = B.second - A.second
      Declare double b = A.first - B.first
      Declare double c = a*(A.first) + b*(A.second)
      Declare double a1 = D.second - C.second
      Declare double b1 = C.first - D.first
      Declare double c1 = a1*(C.first) + b1*(C.second)
      Declare double det = a*b1 - a1*b
      IF (det = 0)
         return make_pair(FLT_MAX, FLT_MAX)
      End
      Else
         Declare double x = (b1*c - b*c1)/det
         Declare double y = (a*c1 - a1*c)/det
         return make_pair(x, y)
      End
Step 3-> In main()
   Declare and call function for points as mk_pair q = make_pair(2, 1)
   IF (inter.first = FLT_MAX AND inter.second = FLT_MAX)
      Print “given lines are parallel“
   End
   Else
      Call display(inter)
   End
Stop

#include <bits/stdc++.h>
using namespace std;
#define mk_pair pair<double, double>

// display the x and y coordinates
void display(mk_pair par) {
   cout << "(" << par.first << ", " << par.second << ")" << endl;
}

mk_pair intersection(mk_pair A, mk_pair B, mk_pair C, mk_pair D) {
   // Line AB represented as a1x + b1y = c1
   double a = B.second - A.second;
   double b = A.first - B.first;
   double c = a*(A.first) + b*(A.second);
   // Line CD represented as a2x + b2y = c2
   double a1 = D.second - C.second;
   double b1 = C.first - D.first;
   double c1 = a1*(C.first) + b1*(C.second);
   double det = a*b1 - a1*b;
   if (det == 0) {
      return make_pair(FLT_MAX, FLT_MAX);
   } else {
      double x = (b1*c - b*c1)/det;
      double y = (a*c1 - a1*c)/det;
      return make_pair(x, y);
   }
}

int main() {
   mk_pair q = make_pair(2, 1);
   mk_pair r = make_pair(2, 7);
   mk_pair s = make_pair(4, 4);
   mk_pair t = make_pair(6, 4);
   mk_pair inter = intersection(q, r, s, t);
   if (inter.first == FLT_MAX && inter.second == FLT_MAX) {
      cout << "The given lines AB and CD are parallel.\n";
   } else {
      cout << "The intersection of the given lines AB and CD is: ";
      display(inter);
   }
   return 0;
}

The intersection of the given lines AB and CD is: (2, 4)
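As a quick cross-check of the formula, the same determinant arithmetic can be sketched in a few lines of Python (an illustrative re-derivation using the same four points, not part of the original C++ program):

```python
# Sketch of the determinant-based line intersection shown in C++ above.
# Returns None when the lines are parallel (determinant is zero).
def intersection(A, B, C, D):
    a, b = B[1] - A[1], A[0] - B[0]      # line AB: a*x + b*y = c
    c = a * A[0] + b * A[1]
    a1, b1 = D[1] - C[1], C[0] - D[0]    # line CD: a1*x + b1*y = c1
    c1 = a1 * C[0] + b1 * C[1]
    det = a * b1 - a1 * b
    if det == 0:
        return None                      # parallel lines
    return ((b1 * c - b * c1) / det, (a * c1 - a1 * c) / det)

print(intersection((2, 1), (2, 7), (4, 4), (6, 4)))  # (2.0, 4.0)
```

Running it with A(2, 1), B(2, 7), C(4, 4), D(6, 4) reproduces the program's output of (2, 4).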
MySQL - RETURN Statement
The RETURN statement in MySQL is used to end stored functions. Each stored function should have at least one RETURN statement. It is used only in functions; in stored procedures, triggers, or events you can use LEAVE instead of RETURN.

Following is the syntax of the RETURN statement in MySQL −

RETURN expression

Where, expression is the value to be returned.

The following query demonstrates how to use the RETURN statement within a function.

DELIMITER //
CREATE FUNCTION Sample (bonus INT)
   RETURNS INT
   BEGIN
      DECLARE income INT;
      SET income = 0;
      myLabel: LOOP
         SET income = income + bonus;
         IF income < 10000 THEN
            ITERATE myLabel;
         END IF;
         LEAVE myLabel;
      END LOOP myLabel;
      RETURN income;
END; //
Query OK, 0 rows affected (0.41 sec)
mysql> DELIMITER ;

You can call the above created function as shown below −

mysql> SELECT Sample(1000);
+--------------+
| Sample(1000) |
+--------------+
|        10000 |
+--------------+
1 row in set (0.15 sec)

Suppose we have created a table named Emp in the database using the CREATE statement as shown below −

mysql> CREATE TABLE Emp(Name VARCHAR(255), DOB DATE, Location VARCHAR(255));
Query OK, 0 rows affected (2.03 sec)

And we have inserted three records in the Emp table as −

mysql> INSERT INTO Emp VALUES ('Amit', DATE('1970-01-08'), 'Hyderabad');
mysql> INSERT INTO Emp VALUES ('Sumith', DATE('1990-11-02'), 'Vishakhapatnam');
mysql> INSERT INTO Emp VALUES ('Sudha', DATE('1980-11-06'), 'Vijayawada');

The following query creates a function named getDob() which accepts the name of an employee, then retrieves and returns the value of the DOB column.

mysql> DELIMITER //
mysql> CREATE FUNCTION test.getDob(emp_name VARCHAR(50))
   RETURNS DATE
   DETERMINISTIC
   BEGIN
      declare dateOfBirth DATE;
      select DOB into dateOfBirth from test.emp where Name = emp_name;
      return dateOfBirth;
   END//
Query OK, 0 rows affected (0.31 sec)
mysql> DELIMITER ;

If you call the function, you can get the date of birth of an employee as shown below −

mysql> SELECT getDob('Amit');
+----------------+
| getDob('Amit') |
+----------------+
| 1970-01-08     |
+----------------+
1 row in set (0.15 sec)
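To see why SELECT Sample(1000) returns 10000, the loop-and-RETURN control flow of the Sample function can be mirrored in plain Python (an illustrative sketch only, not MySQL; ITERATE maps to continue and LEAVE maps to break):

```python
# Python mirror of the MySQL Sample(bonus) function: income grows by
# `bonus` each pass; ITERATE myLabel -> continue, LEAVE myLabel -> break,
# and RETURN ends the function with the accumulated value.
def sample(bonus):
    income = 0
    while True:
        income += bonus
        if income < 10000:
            continue   # ITERATE myLabel
        break          # LEAVE myLabel
    return income

print(sample(1000))  # 10000
```

With a bonus of 3000 the loop overshoots and returns 12000, since RETURN only runs after LEAVE exits the loop.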
A Quick Introduction to Market Basket Analysis | by Andrew Udell | Towards Data Science
Retailers have access to an unprecedented amount of shopper transactions. As shopping habits have become more electronic, records of every purchase are neatly stored in databases, ready to be read and analyzed. With such an arsenal of data at their disposal, they can uncover patterns of consumer behavior.

A market basket analysis is a set of affinity calculations meant to determine which items sell together. For example, a grocery store may use market basket analysis to determine that consumers typically buy both hot dogs and hot dog buns together.

If you’ve ever gone onto an online retailer’s website, you’ve probably seen a recommendation on a product’s page phrased as “Customers who bought this item also bought” or “Customers buy these together”. More than likely, the online retailer performed some sort of market basket analysis to link the products together.

A savvy retailer can leverage this knowledge to inform decisions on pricing, promotions, and store layouts. The aforementioned grocery store might have a sale on hot dogs, but increase the margins on the hot dog buns. The customers would buy more hot dogs and feel as if they found a bargain while the store would sell more product and raise their revenue.

For every combination of items purchased, three key statistics are calculated: support, confidence, and lift.

Support is the general popularity of an item relative to all other purchases. In a grocery store, milk would have a high support, because many shoppers buy it every trip. Support is given as a number between 0 and 1. Mathematically:

Support(A) = (Transactions containing A) / (Total transactions)

Confidence is the conditional probability that customers who bought Product A also bought Product B. There would likely be a high confidence between marshmallows and graham crackers, because they’re often bought together for s’mores. Confidence is given as a number between 0 and 1. Mathematically:

Confidence(A -> B) = (Transactions containing both A and B) / (Transactions containing A)

Lift is the sales increase in Product B when Product A is bought.
There might be a high lift between hamburger patties and buns, because as more patties are bought, they drive the sale of buns. Mathematically:

Lift(A -> B) = Confidence(A -> B) / Support(B)

Lift is a bit unusual compared to the other two measures. Instead of a value between 0 and 1, lift is interpreted by its distance from 1:

Lift = 1 suggests no relationship between the products
lift > 1 suggests a positive relationship between the products
lift < 1 suggests a negative relationship between the products

By far, the most common approach to perform market basket analysis is the Apriori Algorithm. First proposed in 1994 by Agrawal and Srikant, the algorithm has become historically important for its impact on retailers to meaningfully track transaction associations.

While still useful and widely used, the Apriori Algorithm also suffers from high computation times on larger data sets. Thankfully, most implementations offer minimum parameters for confidence and support and set a limit to the number of items per transaction to reduce the time to process.

For demonstration, I’ll use the Python implementation called efficient-apriori. Note that this library is rated for Python 3.6 and 3.7. Older versions of Python may use apyori, which supports 2.7 and 3.3–3.5.

To show the application of the Apriori Algorithm, I’ll use a data set of transactions from a bakery available on Kaggle.

import pandas as pd
import numpy as np

# Read the data
df = pd.read_csv("BreadBasket_DMS.csv")

# eliminate lines with "NONE" in items
df = df[df["Item"] != "NONE"]

After the usual imports of Pandas and Numpy to help process the data, the previously saved CSV file is read to a DataFrame. A few lines of the data contain “NONE” in the Item column, which isn’t particularly helpful, so those are filtered out.
# Create an empty list for data processing
transaction_items = []

# Get an array of transaction numbers
transactions = df["Transaction"].unique()

for transaction in transactions:
    # Get an array of items per transaction number
    items = df[df["Transaction"] == transaction]["Item"].unique()
    # Add the items to the list as a tuple
    transaction_items.append(tuple(items))

Unlike a lot of other libraries which support Pandas DataFrames out of the box, efficient-apriori needs the transaction lines as a series of tuples in a list.

To create this data structure, a list of unique Transaction ID numbers is collected. For every Transaction ID, a Numpy array of items associated with the ID is grouped. Finally, they’re converted into tuples and placed in a list.

# import the apriori algorithm
from efficient_apriori import apriori

# Calculate support, confidence, & lift
itemsets, rules = apriori(transaction_items, min_support = 0.05, min_confidence = 0.1)

After importing the library, the Apriori Algorithm can be placed on a single line. Note the min_support and min_confidence arguments, which specify the minimum support and confidence values to calculate. The actual values of these will differ between various types of data. Setting them too high won’t produce any results. Setting them too low will give too many results and will take a long time to run.

It’s a Goldilocks problem which requires some trial and error to determine. For particularly large data sets, some preliminary calculations for support may be required to determine a good baseline.

For this particular data set, most transactions contain a single item purchase. While an interesting result in and of itself, it means the minimum values for support and confidence need to be set relatively low.

# print the rules and corresponding values
for rule in sorted(rules, key = lambda rule: rule.lift):
    print(rule)

Finally, the results are placed in the rules variable, which may be printed.
The results should look something like the below:

{Coffee} -> {Cake} (conf: 0.114, supp: 0.055, lift: 1.102, conv: 1.012)
{Cake} -> {Coffee} (conf: 0.527, supp: 0.055, lift: 1.102, conv: 1.103)

To better understand the output, look at the two lines specifying the rules for coffee and cake. The two lines both give the confidence (conf), the support (supp), and lift, but the order between the two lines is switched.

In the first line, probabilities are measured as cake conditional on coffee, while in the second line, probabilities are measured as coffee conditional on cake. In other words, of the customers who bought coffee, not many also bought cake. Of the customers who bought cake, however, most also bought coffee. This is why there’s a difference in confidence values.

It’s a subtle, but important difference to understand.

In addition, the lift values are greater than 1, suggesting that the sale of cake boosts the sale of coffee and vice versa.

With this understanding, the bakery could take advantage of this analysis by:

Placing coffee and cake closer together on the menu board
Offering a meal with cake and a coffee
Running a coupon campaign on cake to drive the sale of coffee

Market basket analysis is a set of calculations meant to help businesses understand the underlying patterns in their sales. Certain complementary goods are often bought together, and the Apriori Algorithm can uncover them. Understanding how products sell can be used in everything from promotions to cross-selling to recommendations. While the examples I gave were primarily retail-driven, any industry can benefit from better understanding how their products move.
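To make the three formulas concrete, support, confidence, and lift can also be computed by hand over a toy list of transactions (the items and counts below are invented for illustration and are unrelated to the bakery data set):

```python
# Hand-rolled support, confidence, and lift over a tiny, made-up basket list.
transactions = [
    ("coffee", "cake"),
    ("coffee",),
    ("coffee", "bread"),
    ("cake", "coffee"),
    ("bread",),
]
n = len(transactions)

def support(*items):
    # Fraction of transactions containing every listed item
    return sum(all(i in t for i in items) for t in transactions) / n

def confidence(a, b):
    # P(b in basket | a in basket)
    return support(a, b) / support(a)

def lift(a, b):
    # How much buying `a` boosts the sale of `b` (1 = no relationship)
    return confidence(a, b) / support(b)

print(support("coffee"))             # 0.8  (4 of 5 transactions)
print(confidence("cake", "coffee"))  # 1.0  (every cake buyer bought coffee)
print(lift("cake", "coffee"))        # 1.25 (cake positively drives coffee)
```

The lift of 1.25 matches the interpretation above: a value greater than 1 suggests a positive relationship between the two products.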
Toutatis - OSINT Tool to Extract Information From Instagram Account - GeeksforGeeks
22 Nov, 2021

Information collected through public sources or social networking sites helps in building a social engineering attack environment. This is an OSINT technique to gather information. The Toutatis tool is an automated tool developed in the Python language and is also available as a pip package. This tool extracts sensitive information from the Instagram social networking site. Toutatis gathers more than enough information about the target, like the phone number, email address, profile picture, and many more. The Toutatis tool is available on GitHub, and you can also install it through pip install toutatis in Python.

Note: Make sure you have Python installed on your system, as this is a Python-based tool. Click to check the installation process: Python Installation Steps on Linux

Step 1: Use the following command to install the tool in your Kali Linux operating system.

git clone https://github.com/megadose/toutatis.git

Step 2: Now use the following command to move into the directory of the tool. You have to move into the directory in order to run the tool.

cd toutatis

Step 3: Run the setup.py file to complete the installation of the tool.

sudo python3 setup.py install

Step 4: All the dependencies have been installed in your Kali Linux operating system. Now use the following command to run the tool and check the help section.

toutatis -h

Example 1: Searching data for Username 1

toutatis -u geeksforgeeks -s <InsertYourInstagramSessionId>

We have got the details of the geeksforgeeks username. This information can be used in social engineering attacks.

Example 2: Searching data for Username 2

toutatis -u thisisbillgates -s <InsertYourInstagramSessionId>

We have got the information for username 2. We have displayed the profile picture of the user; even if the account is private, you can still view the image.
Launch a Website for Free in 5 simple steps with GitHub Pages | by Emile Gill | Towards Data Science
Having your own website allows you to showcase to the world who you are, and what you're passionate about. Many people assume that to set up a website would be too expensive or technically challenging for the average person. Actually, it doesn't have to cost anything at all, can be done by anyone, regardless of experience level, and can be up and running in under 10 minutes! (You can check out mine here)

GitHub Pages is a feature of GitHub which allows users to host static webpages straight from repositories. It's ideal for hosting a personal portfolio website, showing off projects, or for a small organisation to host their page. In this post I'll demonstrate just how simple it is for anyone to get a website onto the web, hosted through GitHub Pages.

If you already have a GitHub account, feel free to skip this step. If not, you can sign up here. I'll try not to assume prior knowledge throughout this article, but I'd recommend you familiarise yourself with Git concepts if you've never used it before. This article offers a great introduction to Git and GitHub from the complete basics:

towardsdatascience.com

A repository is where you store all the code for your project, or as GitHub's help page explains:

"A repository is like a folder for your project. Your project's repository contains all of your project's files and stores each file's revision history. You can also discuss and manage your project's work within the repository."

GitHub Help

Therefore, to get started with our project, the first thing we need to do is create a repository for our website's code to reside in. Now that we have a GitHub account this is easy to do. Simply navigate to the top right of the GitHub homepage, and under the "+" menu, select "New Repository". This will open the "Create a New Repository" page where we can choose a repository name and choose whether our repository is "Public" or "Private".
It's important that we give this repository a specific name so that GitHub knows that this repository should be hosted through GitHub Pages. This should be [your_username].github.io . So, for example, since my GitHub username is emilegill743, my repository name, and as we will see later also my website URL, will be emilegill743.github.io. We will also want to set our repository as "Public", since GitHub Pages only allows hosting of "Private" repositories if we upgrade to GitHub Pro.

Once we have completed this, GitHub will take us to our newly created repository and explain several ways we can get started working with a local copy of the repository. If you're not familiar with working with Git on the command line, the easiest way to "clone" this empty repository to your local system is to use the "Set up in Desktop" option. This will open up GitHub Desktop, which can be installed here, creating a local copy of the repository at a location of your choice. You'll now have an empty folder with the name of your repository on your local filesystem which will be tracking any changes we make. These changes can then be "committed" and "pushed" to our remote repository on GitHub.

Now, if you're hardcore you might want to design your website from scratch. Theoretically, you could do that; all we need is an index.html file in our remote repository and GitHub will go ahead and render our website. However, since the purpose of this post is to get our website up and running as quickly as possible, we'll use a template to get us started. This will allow us to create a sleek, responsive, professional website with minimal work involved.

There are many sites offering templates for website designs; some can be purchased for a small cost, but many are available for free. A particular favourite of mine is HTML5 UP, which offers a selection of beautiful designs, perfect for a personal portfolio website.
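To make the index.html idea concrete, here is a minimal placeholder page. This is a sketch of my own, not from the original post; the title and body text are filler to be replaced with your own content:

```html
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>My Portfolio</title>
  </head>
  <body>
    <h1>Hello, GitHub Pages!</h1>
    <p>Replace this placeholder with your own content.</p>
  </body>
</html>
```

Committing a file like this to the root of the [your_username].github.io repository is already enough for GitHub Pages to serve a page at that URL.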
All designs are free under the Creative Commons Attribution 3.0 License, meaning that we are free to use them as we wish, so long as we credit HTML5 UP for the design. Feel free to do your own research and find a design that best suits you; for demonstration, here I will use HTML5 UP's Strata theme.

Now that we have selected our design, we can download its associated files and transfer them into our local repository. We can then commit our changes and push them to our remote repository. Again, if you're comfortable working with Git, go ahead and do this via the command line, but if you're new to this you can equivalently use GitHub Desktop.

This is where the magic happens. Open up a web browser and navigate to the URL [your_username].github.io .

Our website is live! Now let's look at how we can add the final touches and personalise it; right now it's just a template.

For this final step, we'll need a text editor. If you're a coding pro, you'll probably already have a text editor of choice. Any will do, but I'd personally recommend Visual Studio Code, which you can install here.
Opening the folder containing our repository in our text editor, we will see all the files contributing to the design of our website. The most important of these is our index.html file, which defines the structure of the main page of our website. There will probably also be some .css files and maybe some .js too. Don't worry too much if you're not familiar with any of these; I wasn't when I created my first website! A basic summary is that HTML (Hypertext Markup Language) forms the building blocks of the structure of our webpage, CSS (Cascading Style Sheets) describes how our webpage should be styled, and JavaScript defines the interactive behaviour of our webpage.

The first thing I'd suggest is that you download the 'Live Server' extension for Visual Studio Code, which will enable us to preview our website as we edit it, automatically refreshing when we save changes. Alternatively, you can just open up the index.html page in your web browser and manually refresh to check changes.

I won't go into too much detail about how to write HTML; otherwise, this could quickly become a tutorial. The basic syntax is composed of 'tags' which form HTML elements, the building blocks of an HTML web page. These tags normally consist of a start tag, any attributes of the tag, some content and then an end tag.

<tag attribute="value">Some content</tag>

It's up to you how much you want to customise your website. At the most basic level, you'll want to change the content of the HTML elements in your webpage to reflect your personal information. You'll also probably want to change the images in the webpage. To do this, all you need to do is copy the image that you want to use into your repository (if you're using an HTML5 UP template it'll probably already have a folder designated for images) and then adapt the src attribute of the image element to reflect the path to your new image: <img src="path_to_image.jpg"/> . It's worth noting here that image elements are composed of a single tag containing their attributes, rather than a start and end tag like many other HTML elements.

If you're interested in learning more about HTML, CSS and JavaScript so that you can take your website personalisation to the next level, I'd strongly recommend Harvard's CS50W: Web Programming with Python and JavaScript. This is a completely free course on edX; its first couple of chapters focus on Git, HTML and CSS, and it later goes on to look at JavaScript. It provides a great introduction to the skills we need to really make the website our own.

For further inspiration, feel free to check out my website at emilegill743.github.io and the corresponding GitHub repository linked below.

github.com

Thanks for reading! If you enjoyed this post, feel free to check out some of my other articles:
How to set the font size of text with JavaScript?
To set the font size, use the fontSize property in JavaScript. You can try to run the following code to set the font size of the text with JavaScript −

<!DOCTYPE html>
<html>
   <body>
      <h1>Heading 1</h1>
      <p id = "myID">
         This is Demo Text. This is Demo Text. This is Demo Text. This is Demo Text.
         This is Demo Text. This is Demo Text. This is Demo Text. This is Demo Text.
      </p>
      <button type = "button" onclick = "display()">Set Font Family and Size</button>

      <script>
         function display() {
            document.getElementById("myID").style.fontFamily = "verdana,sans-serif";
            document.getElementById("myID").style.fontSize = "smaller";
         }
      </script>

   </body>
</html>
End-to-End Project of Game Prediction Based on LeBron’s Stats Using Three Machine Learning Models | by Yufeng | Towards Data Science
I am a huge fan of machine learning and basketball, so I like generating mini-projects that combine the two. In this post, I would like to share one of these projects with you.

No matter whether you are a basketball fan or not, you must know LeBron James. As a core player, his performance is essential to the game result. So, the question I try to answer in this project is: "Can we predict the game result based on LeBron's game stats?"

I framed it as a binary classification problem with "Win" or "Lose" of the team as the output labels. The features are the basic statistics of LeBron James in each game. I implemented three classifiers in the project, Logistic Regression, Random Forest Classifier, and Deep Learning Classifier, using two popular Python machine learning libraries, sklearn and keras.

I am going through the project step by step and I hope it is helpful to you.

The libraries used in the code are listed here.

import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.model_selection import StratifiedKFold
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from keras.layers import Dense, Dropout
from keras.models import Model, Sequential
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import GridSearchCV

I manually compiled the basic statistics of LeBron's games from season 2003–2004 to season 2019–2020 (until the NBA suspension in March). In total, there are 1,258 games. The import code is as below:

df = pd.read_csv("lebron_2003_2020_career_gamelog.csv", index_col=0)
df.head()

From the figure above, you can see the basic stats and the game results ("Win" and "Winby"). Then, I want to make sure the data type is 'float32' so that I can directly feed the data to the neural network model in keras.
The data type transform code is as below:

df = df.astype('float32')

Next, I need to specify the feature space and the label in the dataset using the code below:

y = df['Win']
X = df.drop(columns=['Win','Winby'])

The column "Win" is the recorded game result, where 1 means win and 0 means lose. The column "Winby" is the score difference against the opponent, where a positive number means win and a negative number means lose. Therefore, it is necessary to remove both of them from the feature space.

Next, the data is split into a training and a testing set, where the testing set will never be touched before the model evaluation step. The code is as below:

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, stratify=y, random_state=42)

You may notice that I use a stratified split here, which avoids the situation where the game result is biased toward one category in the training data. "stratify=y" means the stratified split is done based on y, which is our output label.

Till now, the steak is ready to cook; we just need to preheat the oven. As I mentioned, three models are going to be used: Logistic Regression, Random Forest Classifier, and Deep Learning Classifier. To make them all fit the same Scikit-Learn workflow, we first need to define the deep learning model in a Scikit-Learn style as below:

def my_DL(epochs=6, batchsize=512):
    model = Sequential()
    model.add(Dense(32, activation='relu'))
    model.add(Dense(16, activation='relu'))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
    return model

Specifically, this neural network has two hidden layers with 32 and 16 nodes. The loss function, optimizer, and metric of the network are fixed as 'binary_crossentropy', 'rmsprop' and 'accuracy', respectively. There are two adjustable parameters for this compiled model: epochs is the number of epochs and batchsize is the number of samples in each batch.
Both parameters have default values, in a format similar to that of a sklearn classifier.

The parameters of a model that cannot be learned from the data but need to be assigned before the training process are called hyperparameters. These hyperparameters are always related to the complexity of the model and need to be selected properly to avoid underfitting or overfitting problems.

To select the best set of hyperparameters, we can go two ways. First, we can further split the training dataset into two parts, namely a training and a validation dataset, and then evaluate the model trained on the training part against the validation part. The best set of hyperparameters is the one with the best performance on the validation set. However, when the sample size is small, a single split of the data can be biased. So cross-validation is another, more popular way of tuning hyperparameters, and it is the one I use in this project.

I list the whole hyperparameter-tuning function below and will go through it in detail.
def train_hyper_tune(X, y):
    # create the pre-processing components
    my_scaler = StandardScaler()
    my_imputer = SimpleImputer(strategy="median")

    # define classifiers
    ## Classifier 1: Logistic Regression
    clf_LR = LogisticRegression(random_state=0, penalty='elasticnet', solver='saga')
    ## Classifier 2: Random Forest Classifier
    clf_RF = RandomForestClassifier(random_state=0)
    ## Classifier 3: Deep Learning Binary Classifier
    clf_DL = KerasClassifier(build_fn=my_DL)

    # define a pipeline for each of the three classifiers
    ## clf_LR
    pipe1 = Pipeline([('imputer', my_imputer), ('scaler', my_scaler), ('lr_model', clf_LR)])
    ## clf_RF
    pipe2 = Pipeline([('imputer', my_imputer), ('scaler', my_scaler), ('rf_model', clf_RF)])
    ## clf_DL
    pipe3 = Pipeline([('imputer', my_imputer), ('scaler', my_scaler), ('dl_model', clf_DL)])

    # create the hyperparameter space of the three models
    ## clf_LR
    param_grid1 = {
        'lr_model__C': [1e-1, 1, 10],
        'lr_model__l1_ratio': [0, 0.5, 1]
    }
    ## clf_RF
    param_grid2 = {
        'rf_model__n_estimators': [50, 100],
        'rf_model__max_features': [0.8, "auto"],
        'rf_model__max_depth': [4, 5]
    }
    ## clf_DL
    param_grid3 = {
        'dl_model__epochs': [6, 12, 18, 24],
        'dl_model__batchsize': [256, 512]
    }

    # set up grid search via 5-fold cross-validation
    ## clf_LR
    grid1 = GridSearchCV(pipe1, cv=5, param_grid=param_grid1)
    ## clf_RF
    grid2 = GridSearchCV(pipe2, cv=5, param_grid=param_grid2)
    ## clf_DL
    grid3 = GridSearchCV(pipe3, cv=5, param_grid=param_grid3)

    # run the hyperparameter tuning process
    grid1.fit(X, y)
    grid2.fit(X, y)
    grid3.fit(X, y)

    # return the results of the tuning process
    return grid1, grid2, grid3, pipe1, pipe2, pipe3

As shown in the code, there are mainly six steps inside the function:

Step 1. Create the pre-processing components.

# create the pre-processing components
my_scaler = StandardScaler()
my_imputer = SimpleImputer(strategy="median")

I use the feature's median to impute missing values and the standard scaler to normalize the data. This step is the same for all three models.

Step 2. Define all three classifiers.
# define classifiers
## Classifier 1: Logistic Regression
clf_LR = LogisticRegression(random_state=0, penalty='elasticnet', solver='saga')
## Classifier 2: Random Forest Classifier
clf_RF = RandomForestClassifier(random_state=0)
## Classifier 3: Deep Learning Binary Classifier
clf_DL = KerasClassifier(build_fn=my_DL)

First, the logistic regression classifier is usually used as the "Hello world!" model in machine learning books. Here, it is used together with a penalty term to avoid overfitting. The model with this penalty term is called 'Elastic Net', which combines the l1 and l2 norms in the regularization. For those who are interested in why we chose Elastic Net as the penalty term, please read another post of mine:

towardsdatascience.com

Second, the Random Forest Classifier is defined more freely, without fixing any hyperparameters. Three of its hyperparameters are going to be tuned in the following steps, which I will go over in detail later.

Third, the deep learning classifier used here is based on the Scikit-Learn style model defined earlier, my_DL. Thankfully, Keras provides wrappers for the Scikit-Learn API, so I can directly pass the function my_DL to KerasClassifier().

Step 3. Define a pipeline for each model that combines the pre-processing and modeling.

# define a pipeline for each of the three classifiers
## clf_LR
pipe1 = Pipeline([('imputer', my_imputer), ('scaler', my_scaler), ('lr_model', clf_LR)])
## clf_RF
pipe2 = Pipeline([('imputer', my_imputer), ('scaler', my_scaler), ('rf_model', clf_RF)])
## clf_DL
pipe3 = Pipeline([('imputer', my_imputer), ('scaler', my_scaler), ('dl_model', clf_DL)])

For each of the three models, I combine the pre-processing and the classifier into a pipeline with the Pipeline function in sklearn. Each processing step must be given a name; for example, I name my logistic regression model "lr_model" and pass clf_LR for it in the pipeline.
The aim of combining everything into a pipeline is to make sure that exactly the same processing applied to the training data is applied to the testing data during cross-validation. This is essential to avoid data leakage.

Step 4. Create the hyperparameter space for each of the models.

# create the hyperparameter space of the three models
## clf_LR
param_grid1 = {
    'lr_model__C': [1e-1, 1, 10],
    'lr_model__l1_ratio': [0, 0.5, 1]
}
## clf_RF
param_grid2 = {
    'rf_model__n_estimators': [50, 100],
    'rf_model__max_features': [0.8, "auto"],
    'rf_model__max_depth': [4, 5]
}
## clf_DL
param_grid3 = {
    'dl_model__epochs': [6, 12, 18, 24],
    'dl_model__batchsize': [256, 512]
}

This part is more flexible because there are plenty of parameters in these three models. It's important to select the parameters that are closely related to the complexity of the model. For example, the maximum depth of the trees in the random forest model is a must-tune hyperparameter. For those who are interested, please refer to the post below.

towardsdatascience.com

To note, the name of the step in the pipeline needs to be specified in the hyperparameter space. For example, the number of epochs in the deep learning model is named "dl_model__epochs", where "dl_model" is the name of the deep learning model in my pipeline and "epochs" is the name of a parameter that can be passed to my deep learning model. They are joined into one string by "__" in the hyperparameter space.

Step 5. Set up the grid search across the hyperparameter space via cross-validation.

# set up grid search via 5-fold cross-validation
## clf_LR
grid1 = GridSearchCV(pipe1, cv=5, param_grid=param_grid1)
## clf_RF
grid2 = GridSearchCV(pipe2, cv=5, param_grid=param_grid2)
## clf_DL
grid3 = GridSearchCV(pipe3, cv=5, param_grid=param_grid3)

Compared to randomized search, grid search is more computationally costly because it spans the entire hyperparameter space.
In this project, I use the grid search because the hyperparameter space is relatively small. For each grid search, I use 5-fold cross-validation to evaluate the average performance of the combinations of hyperparameters.

Step 6. Run the tuning process.

# run the hyperparameter tuning process
grid1.fit(X, y)
grid2.fit(X, y)
grid3.fit(X, y)

This step is pretty straightforward: it executes the grid search on the three defined pipelines.

Lastly, we just need to run the function as below:

my_grid1, my_grid2, my_grid3, my_pipe1, my_pipe2, my_pipe3 = train_hyper_tune(X_train, y_train)

We can check the training performance by pulling out the best score in the grid search results. It seems the random forest has the best performance on the training dataset, but all three models are pretty comparable to each other.

After the hyperparameters are selected in the previous step, I use them to re-train the models on the entire training data. The code is shown below:

def train_on_entire(X, y, pipe, grid_res):
    # fit pipeline
    pipe.set_params(**grid_res.best_params_).fit(X, y)
    # return the newly trained pipeline
    return pipe

Here, **grid_res.best_params_ is used to pass the best parameters from the grid search to the pipeline for the hyperparameter setting. After refitting on X and y, the returned pipeline, pipe, is a completely trained model on the entire training dataset.

We then need to evaluate this trained model on the test dataset.

train_on_entire(X_train, y_train, my_pipe1, my_grid1).score(X_test, y_test)
train_on_entire(X_train, y_train, my_pipe2, my_grid2).score(X_test, y_test)
train_on_entire(X_train, y_train, my_pipe3, my_grid3).score(X_test, y_test)

The performance of the logistic regression, random forest, and deep learning classifiers in terms of accuracy is 0.869, 0.901, and 0.877, respectively.

We can draw several conclusions from the results. First, the random forest classifier seems to outperform the other two methods in this prediction.
Second, the deep learning method doesn't show an advantage on a tabular dataset like this. Third, all three methods show that LeBron's game stats have predictive power for the game result, which reflects his dominant role on his team. That's it, the end-to-end machine learning project. I hope you've learned something from it.
Birthday Paradox - GeeksforGeeks
31 Jan, 2022

How many people must be there in a room to make the probability 100% that at least two people in the room have the same birthday? Answer: 367 (since there are 366 possible birthdays, including February 29).

The above question was simple. Try the below question yourself. How many people must be there in a room to make the probability 50% that at least two people in the room have the same birthday? Answer: 23. The number is surprisingly low. In fact, we need only 70 people to make the probability 99.9%.

Let us discuss the generalized formula. What is the probability that two persons among n have the same birthday? Let the probability that at least two people in a room of n have the same birthday be P(same). P(same) can be easily evaluated in terms of P(different), where P(different) is the probability that all of them have different birthdays.

P(same) = 1 - P(different)

P(different) can be written as

1 x (364/365) x (363/365) x (362/365) x .... x (1 - (n-1)/365)

How did we get the above expression? Persons from first to last can get birthdays in the following order for all birthdays to be distinct:
The first person can have any birthday among 365.
The second person should have a birthday which is not the same as the first person's.
The third person should have a birthday which is not the same as the first two persons'.
....
The n'th person should have a birthday which is not the same as any of the earlier considered (n-1) persons'.

Approximation of the above expression:
The above expression can be approximated using the Taylor series. The expansion e^x = 1 + x + x^2/2! + .... provides the first-order approximation e^x ~ 1 + x for x << 1. To apply this approximation to the expression derived for P(different), set x = -a/365, which gives

1 - a/365 ~ e^(-a/365)

The expression derived for P(different) can equivalently be written as

1 x (1 - 1/365) x (1 - 2/365) x (1 - 3/365) x .... x (1 - (n-1)/365)

By replacing each factor 1 - a/365 with e^(-a/365), we get the following:

P(different) ~ e^(-1/365) x e^(-2/365) x .... x e^(-(n-1)/365) = e^(-(1 + 2 + .... + (n-1))/365) = e^(-n(n-1)/(2 x 365))

Therefore,

P(same) = 1 - P(different) ~ 1 - e^(-n(n-1)/(2 x 365))

An even coarser approximation is given by

P(same) ~ 1 - e^(-n^2/(2 x 365))

By taking the log on both sides, we get the reverse formula:

n ~ sqrt(2 x 365 x ln(1/(1 - P(same))))

Using the above approximate formula, we can approximate the number of people for a given probability. For example, the following C++ function find() returns the smallest n for which the probability is greater than the given p.

Implementation of the approximate formula. The following is a program to approximate the number of people for a given probability.

C++ Java Python3 C# PHP Javascript

// C++ program to approximate number of people
// in Birthday Paradox problem
#include <cmath>
#include <iostream>
using namespace std;

// Returns approximate number of people for a given probability
int find(double p)
{
    return ceil(sqrt(2 * 365 * log(1 / (1 - p))));
}

int main()
{
    cout << find(0.70);
}

// Java program to approximate number
// of people in Birthday Paradox problem
class GFG {

    // Returns approximate number of people
    // for a given probability
    static double find(double p)
    {
        return Math.ceil(Math.sqrt(2 * 365 * Math.log(1 / (1 - p))));
    }

    // Driver code
    public static void main(String[] args)
    {
        System.out.println(find(0.70));
    }
}

// This code is contributed by Anant Agarwal.

# Python3 code to approximate number
# of people in Birthday Paradox problem
import math

# Returns approximate number of
# people for a given probability
def find(p):
    return math.ceil(math.sqrt(2 * 365 * math.log(1 / (1 - p))))

# Driver Code
print(find(0.70))

# This code is contributed by "Sharad_Bhardwaj".

// C# program to approximate number
// of people in Birthday Paradox problem.
using System;

class GFG {

    // Returns approximate number of people
    // for a given probability
    static double find(double p)
    {
        return Math.Ceiling(Math.Sqrt(2 * 365 * Math.Log(1 / (1 - p))));
    }

    // Driver code
    public static void Main()
    {
        Console.Write(find(0.70));
    }
}

// This code is contributed by nitin mittal.
<?php
// PHP program to approximate
// number of people in Birthday
// Paradox problem

// Returns approximate number
// of people for a given probability
function find($p)
{
    return ceil(sqrt(2 * 365 * log(1 / (1 - $p))));
}

// Driver Code
echo find(0.70);

// This code is contributed by aj_36
?>

<script>
// JavaScript program to approximate number
// of people in Birthday Paradox problem

// Returns approximate number of
// people for a given probability
function find(p)
{
    return Math.ceil(Math.sqrt(2 * 365 * Math.log(1 / (1 - p))));
}
document.write(find(0.70));
</script>

Output :

30

Time Complexity: O(log n)
Auxiliary Space: O(1)

Source: http://en.wikipedia.org/wiki/Birthday_problem

Applications:
1) Birthday Paradox is generally discussed with hashing to show the importance of collision handling even for a small set of keys.
2) Birthday Attack

Below is an alternate implementation in the C language:

C Java Python3 C# Javascript

#include <stdio.h>
int main()
{
    // Assuming non-leap year
    float num = 365;
    float denom = 365;
    float pr;
    int n = 0;
    printf("Probability to find : ");
    scanf("%f", &pr);
    float p = 1;
    while (p > pr) {
        p *= (num / denom);
        num--;
        n++;
    }
    printf("\nTotal no. of people out of which there"
           " is %0.1f probability that two of them"
           " have same birthdays is %d ", p, n);
    return 0;
}

class GFG {
    public static void main(String[] args)
    {
        // Assuming non-leap year
        float num = 365;
        float denom = 365;
        double pr = 0.7;
        int n = 0;
        float p = 1;
        while (p > pr) {
            p *= (num / denom);
            num--;
            n++;
        }
        System.out.printf("\nTotal no. of people out of which there is ");
        System.out.printf("%.1f probability that two of them "
                          + "have same birthdays is %d ", p, n);
    }
}

// This code is contributed by Rajput-Ji

if __name__ == '__main__':

    # Assuming non-leap year
    num = 365
    denom = 365
    pr = 0.7
    n = 0
    p = 1
    while p > pr:
        p *= num / denom
        num -= 1
        n += 1
    print("Total no. of people out of which there is ", end="")
    print("{0:.1f}".format(p), end="")
    print(" probability that two of them "
          + "have same birthdays is ", n)

# This code is contributed by Rajput-Ji

using System;
public class GFG {
    public static void Main(String[] args)
    {
        // Assuming non-leap year
        float num = 365;
        float denom = 365;
        double pr = 0.7;
        int n = 0;
        float p = 1;
        while (p > pr) {
            p *= (num / denom);
            num--;
            n++;
        }
        Console.Write("\nTotal no. of people out of which there is ");
        Console.Write("{0:F1} probability that two of them have same birthdays is {1} ", p, n);
    }
}

// This code is contributed by Rajput-Ji

<script>
    // Assuming non-leap year
    var num = 365;
    var denom = 365;
    var pr = 0.7;
    var n = 0;
    var p = 1;
    while (p > pr) {
        p *= (num / denom);
        num--;
        n++;
    }
    document.write("\nTotal no. of people out of which there is ");
    document.write(p.toFixed(1) + " probability that two of them "
                   + "have same birthdays is " + n);
    // This code is contributed by Rajput-Ji
</script>

Time Complexity: O(n), where n is the number of people found (the loop runs once per person)
Auxiliary Space: O(1)

This article is contributed by Shubham. Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above.
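As a sanity check on the derivation above, the exact product formula and the first-order Taylor approximation can be compared directly. The sketch below (plain Python, not part of the original article's implementations) confirms the headline numbers: 23 people push the probability past 50%, 70 people past 99.9%, and the approximation stays close to the exact value.

```python
import math

def p_same_exact(n):
    """Exact probability that at least two of n people share a birthday."""
    p_different = 1.0
    for i in range(n):
        p_different *= (365 - i) / 365
    return 1 - p_different

def p_same_approx(n):
    """First-order Taylor approximation: 1 - e^(-n(n-1)/(2*365))."""
    return 1 - math.exp(-n * (n - 1) / (2 * 365))

print(round(p_same_exact(23), 3))    # 0.507 -- past 50% at 23 people
print(round(p_same_exact(70), 4))    # 0.9992 -- past 99.9% at 70 people
print(round(p_same_approx(23), 3))   # close to the exact 0.507
```

The approximation differs from the exact value by less than 0.01 at n = 23, which is why the inverted formula sqrt(2 x 365 x ln(1/(1-p))) gives the right head-counts in the programs above.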
}, { "code": null, "e": 33518, "s": 33494, "text": "Merge two sorted arrays" }, { "code": null, "e": 33561, "s": 33518, "text": "Modulo Operator (%) in C/C++ with Examples" }, { "code": null, "e": 33634, "s": 33561, "text": "Print all possible combinations of r elements in a given array of size n" }, { "code": null, "e": 33655, "s": 33634, "text": "Operators in C / C++" }, { "code": null, "e": 33689, "s": 33655, "text": "Program for factorial of a number" }, { "code": null, "e": 33768, "s": 33689, "text": "K'th Smallest/Largest Element in Unsorted Array | Set 2 (Expected Linear Time)" }, { "code": null, "e": 33827, "s": 33768, "text": "Shuffle a given array using Fisher–Yates shuffle Algorithm" }, { "code": null, "e": 33859, "s": 33827, "text": "QuickSort using Random Pivoting" }, { "code": null, "e": 33895, "s": 33859, "text": "Shuffle or Randomize a list in Java" } ]
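The closed-form shortcut above can be checked against the exact product formula. The Python snippet below (an added verification, not part of the original article) computes the exact probability, the Taylor-series approximation, and the inverted find() formula side by side:

```python
import math

def p_same_exact(n, days=365):
    """Exact probability that at least two of n people share a birthday."""
    p_diff = 1.0
    for k in range(n):
        p_diff *= (days - k) / days
    return 1.0 - p_diff

def p_same_approx(n, days=365):
    """Approximation p(same) ~= 1 - e^(-n(n-1)/(2*days))."""
    return 1.0 - math.exp(-n * (n - 1) / (2.0 * days))

def find(p, days=365):
    """Smallest n for which p(same) exceeds p, via the inverted formula."""
    return math.ceil(math.sqrt(2 * days * math.log(1 / (1 - p))))

print(find(0.70))                  # matches the article's output: 30
print(round(p_same_exact(23), 4))  # the classic result: ~0.5073
print(round(p_same_approx(23), 4)) # close to the exact value
```

For n = 23 the approximation is within about 0.01 of the exact value, which is why the inverted formula lands on the right n for practical probabilities.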
PostgreSQL - ROLLUP - GeeksforGeeks
26 Feb, 2021

The PostgreSQL ROLLUP belongs to the GROUP BY clause and provides a shortcut for defining multiple grouping sets. Multiple columns grouped together form a grouping set.

Unlike the CUBE subclause, ROLLUP does not yield all possible grouping sets based on the specified columns; it makes only a subset of those. ROLLUP presupposes a hierarchy between the input columns and yields all grouping sets that make sense only if the hierarchy is considered. That is why ROLLUP is usually used to generate the subtotals and the grand total for reports.

Syntax:

SELECT
    column1,
    column2,
    column3,
    aggregate(column4)
FROM
    table_name
GROUP BY
    ROLLUP (column1, column2, column3);

To better understand the concept, let's create a new table and proceed to the examples. To create a sample table use the below command:

CREATE TABLE geeksforgeeks_courses(
    course_name VARCHAR NOT NULL,
    segment VARCHAR NOT NULL,
    quantity INT NOT NULL,
    PRIMARY KEY (course_name, segment)
);

Now insert some data into it using the below command:

INSERT INTO geeksforgeeks_courses(course_name, segment, quantity)
VALUES
    ('Data Structure in Python', 'Premium', 100),
    ('Algorithm Design in Python', 'Basic', 200),
    ('Data Structure in Java', 'Premium', 100),
    ('Algorithm Design in Java', 'Basic', 300);

Now that our table is set, let's look into examples.

Example 1: The following query uses the ROLLUP subclause to find the number of products sold by course_name (subtotal) and by all course_name and segments (total) as follows:

SELECT
    course_name,
    segment,
    SUM (quantity)
FROM
    geeksforgeeks_courses
GROUP BY
    ROLLUP (course_name, segment)
ORDER BY
    course_name,
    segment;

Output:

Example 2: The following statement performs a partial ROLLUP as follows:

SELECT
    segment,
    course_name,
    SUM (quantity)
FROM
    geeksforgeeks_courses
GROUP BY
    segment,
    ROLLUP (course_name)
ORDER BY
    segment,
    course_name;

Output:
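Since the example outputs above were published as images, it can help to see what ROLLUP actually aggregates. The following Python sketch (an illustration only — it emulates the grouping sets in memory rather than querying PostgreSQL, and reuses the article's sample rows) computes the same subtotals and grand total that GROUP BY ROLLUP (course_name, segment) produces:

```python
# Emulates GROUP BY ROLLUP (course_name, segment) over the article's
# sample rows. ROLLUP yields the grouping sets (course_name, segment),
# (course_name) and () -- i.e. per-course subtotals plus a grand total,
# where None stands in for SQL NULL in the rolled-up columns.
rows = [
    ("Data Structure in Python", "Premium", 100),
    ("Algorithm Design in Python", "Basic", 200),
    ("Data Structure in Java", "Premium", 100),
    ("Algorithm Design in Java", "Basic", 300),
]

def rollup(rows):
    result = {}
    for course, segment, qty in rows:
        for key in ((course, segment), (course, None), (None, None)):
            result[key] = result.get(key, 0) + qty
    return result

totals = rollup(rows)
print(totals[(None, None)])                      # grand total: 700
print(totals[("Data Structure in Java", None)])  # per-course subtotal: 100
```

Each course here has a single segment, so its subtotal equals its one detail row; the extra rows ROLLUP adds are the per-course subtotals and the (NULL, NULL) grand total.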
Add, subtract, multiply and divide two Pandas Series - GeeksforGeeks
28 Jul, 2020

Let us see how to perform basic arithmetic operations like addition, subtraction, multiplication, and division on 2 Pandas Series.

For all 4 operations we will follow the same basic algorithm:

Import the Pandas module.
Create 2 Pandas Series objects.
Perform the required arithmetic operation using the respective arithmetic operator between the 2 Series and assign the result to another Series.
Display the resultant Series.

# importing the module
import pandas as pd

# creating 2 Pandas Series
series1 = pd.Series([1, 2, 3, 4, 5])
series2 = pd.Series([6, 7, 8, 9, 10])

# adding the 2 Series
series3 = series1 + series2

# displaying the result
print(series3)

Output:

0     7
1     9
2    11
3    13
4    15
dtype: int64

# importing the module
import pandas as pd

# creating 2 Pandas Series
series1 = pd.Series([1, 2, 3, 4, 5])
series2 = pd.Series([6, 7, 8, 9, 10])

# subtracting the 2 Series
series3 = series1 - series2

# displaying the result
print(series3)

Output:

0   -5
1   -5
2   -5
3   -5
4   -5
dtype: int64

# importing the module
import pandas as pd

# creating 2 Pandas Series
series1 = pd.Series([1, 2, 3, 4, 5])
series2 = pd.Series([6, 7, 8, 9, 10])

# multiplying the 2 Series
series3 = series1 * series2

# displaying the result
print(series3)

Output:

0     6
1    14
2    24
3    36
4    50
dtype: int64

# importing the module
import pandas as pd

# creating 2 Pandas Series
series1 = pd.Series([1, 2, 3, 4, 5])
series2 = pd.Series([6, 7, 8, 9, 10])

# dividing the 2 Series
series3 = series1 / series2

# displaying the result
print(series3)

Output:

0    0.166667
1    0.285714
2    0.375000
3    0.444444
4    0.500000
dtype: float64
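One detail worth knowing beyond the examples above: Series arithmetic aligns on index labels, not positions, so operating on Series of different lengths yields NaN for any unmatched label. The sketch below (an extra illustration, not from the original article) shows this, and the Series.add() method's fill_value parameter as a way around it:

```python
import pandas as pd

series1 = pd.Series([1, 2, 3, 4, 5])
series2 = pd.Series([6, 7, 8])   # shorter: labels 3 and 4 have no match

# The + operator aligns on the index, so labels 3 and 4 come out as NaN
print(series1 + series2)

# Series.add() can substitute a value for missing labels before adding,
# so every label of the longer Series survives
print(series1.add(series2, fill_value=0))
```

The same fill_value option exists on sub(), mul(), and div(), making those methods the safer choice when the two Series are not guaranteed to share the same index.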
How to Create a Sparse Matrix in Python - GeeksforGeeks
18 Aug, 2020

If most of the elements of a matrix have the value 0, it is called a sparse matrix. The two major benefits of using a sparse matrix instead of a simple matrix are:

Storage: There are fewer non-zero elements than zeros, so less memory is needed to store only those elements.
Computing time: Computing time can be saved by logically designing a data structure that traverses only the non-zero elements.

Sparse matrices are generally utilized in applied machine learning, such as in data containing encodings that map categories to counts, and also in entire subfields of machine learning such as natural language processing (NLP).

Example:

0 0 3 0 4
0 0 5 7 0
0 0 0 0 0
0 2 6 0 0

Representing a sparse matrix by a 2D array leads to wastage of lots of memory, as the zeroes in the matrix are of no use in most cases. So, instead of storing zeroes along with the non-zero elements, we only store the non-zero elements. This means storing each non-zero element as a triple (Row, Column, Value).

Python's SciPy provides tools for creating sparse matrices using multiple data structures, as well as tools for converting a dense matrix to a sparse matrix. The function csr_matrix() is used to create a sparse matrix in compressed sparse row format, whereas csc_matrix() is used to create a sparse matrix in compressed sparse column format.

Syntax:

scipy.sparse.csr_matrix(shape=None, dtype=None)

Parameters:
shape: Get shape of a matrix
dtype: Data type of the matrix

Example 1:

Python

# Python program to create
# sparse matrix using csr_matrix()

# Import required package
import numpy as np
from scipy.sparse import csr_matrix

# Creating a 3 * 4 sparse matrix
sparseMatrix = csr_matrix((3, 4),
                          dtype = np.int8).toarray()

# Print the sparse matrix
print(sparseMatrix)

Output:

[[0 0 0 0]
 [0 0 0 0]
 [0 0 0 0]]

Example 2:

Python

# Python program to create
# sparse matrix using csr_matrix()

# Import required package
import numpy as np
from scipy.sparse import csr_matrix

row = np.array([0, 0, 1, 1, 2, 1])
col = np.array([0, 1, 2, 0, 2, 2])

# taking data
data = np.array([1, 4, 5, 8, 9, 6])

# creating sparse matrix
sparseMatrix = csr_matrix((data, (row, col)),
                          shape = (3, 3)).toarray()

# print the sparse matrix
print(sparseMatrix)

Output:

[[ 1  4  0]
 [ 8  0 11]
 [ 0  0  9]]

Syntax:

scipy.sparse.csc_matrix(shape=None, dtype=None)

Parameters:
shape: Get shape of a matrix
dtype: Data type of the matrix

Example 1:

Python

# Python program to create
# sparse matrix using csc_matrix()

# Import required package
import numpy as np
from scipy.sparse import csc_matrix

# Creating a 3 * 4 sparse matrix
sparseMatrix = csc_matrix((3, 4),
                          dtype = np.int8).toarray()

# Print the sparse matrix
print(sparseMatrix)

Output:

[[0 0 0 0]
 [0 0 0 0]
 [0 0 0 0]]

Example 2:

Python

# Python program to create
# sparse matrix using csc_matrix()

# Import required package
import numpy as np
from scipy.sparse import csc_matrix

row = np.array([0, 0, 1, 1, 2, 1])
col = np.array([0, 1, 2, 0, 2, 2])

# taking data
data = np.array([1, 4, 5, 8, 9, 6])

# creating sparse matrix
sparseMatrix = csc_matrix((data, (row, col)),
                          shape = (3, 3)).toarray()

# print the sparse matrix
print(sparseMatrix)

Output:

[[ 1  4  0]
 [ 8  0 11]
 [ 0  0  9]]
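A common follow-up is converting an existing dense matrix: csr_matrix() also accepts a dense array directly, and toarray() converts back. The snippet below (an added illustration, reusing the example matrix from the introduction) shows the round trip and the nnz attribute, which reports how many non-zero entries are actually stored:

```python
import numpy as np
from scipy.sparse import csr_matrix

# The mostly-zero 4 x 5 matrix from the introduction's example
dense = np.array([[0, 0, 3, 0, 4],
                  [0, 0, 5, 7, 0],
                  [0, 0, 0, 0, 0],
                  [0, 2, 6, 0, 0]])

# Dense -> sparse: only the non-zero entries are stored
sparse = csr_matrix(dense)
print(sparse.nnz)        # number of stored non-zero entries: 6

# Sparse -> dense round trip recovers the original matrix
print(sparse.toarray())
```

Here only 6 of the 20 cells are stored, which is exactly the storage saving the article describes.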
Check if a given string can be converted to a Balanced Bracket Sequence - GeeksforGeeks
03 Mar, 2022

Given a string S of size N consisting of '(', ')', and '$', the task is to check whether the given string can be converted into a balanced bracket sequence by replacing every occurrence of $ with either ) or (.

A balanced bracket sequence is a sequence where every opening bracket "(" has a corresponding closing bracket ")".

Examples:

Input: S = "()($"
Output: Yes
Explanation: Convert the string into the balanced bracket sequence: ()().

Input: S = "$()$("
Output: No
Explanation: The possible replacements are "(()((", "(())(", ")()((" and ")())(", none of which are balanced. Hence, a balanced bracket sequence can not be obtained.

Approach: The above problem can be solved by using a Stack. The idea is to check if all ) can be balanced with ( or $ and vice versa. Follow the steps below to solve this problem:

Store the frequency of "(", ")" and "$" in variables like countOpen, countClosed, and countSymbol respectively.
Initialize a variable ans as 1 to store the required result and a stack stack_1 to check if all ")" can be balanced with "(" or "$".
Traverse the string S using the variable i and do the following: if the current character S[i] is ")", then if stack_1 is empty, set ans to 0, else pop a character from stack_1. Otherwise, push the character S[i] onto stack_1.
Reverse the string S, and follow the same procedure to check if all "(" can be balanced with ")" or "$".
If the value of countSymbol is less than the absolute difference of countOpen and countClosed, then set ans to 0. Otherwise, balance the extra parentheses with the symbols. After balancing, if countSymbol is odd, set ans to 0.
After the above steps, print the value of ans as the result.
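The steps above translate almost line-for-line into a short Python sketch (a condensed illustration that replaces the explicit stacks with running counters — a stack that only ever pops its top is equivalent to a counter — so the names here are mine, not the article's):

```python
def can_be_balanced(s):
    # An odd-length string can never be balanced
    if len(s) % 2:
        return False

    # Forward pass: every ')' needs an earlier '(' or '$'
    # ('(' and '$' both act as a pushed character, ')' pops one)
    bal = 0
    for ch in s:
        bal += -1 if ch == ')' else 1
        if bal < 0:
            return False

    # Backward pass: every '(' needs a later ')' or '$'
    bal = 0
    for ch in reversed(s):
        bal += -1 if ch == '(' else 1
        if bal < 0:
            return False

    # The remaining '$' must cover the open/close imbalance,
    # and whatever is left over must pair up as "()"
    n_open, n_close, n_sym = s.count('('), s.count(')'), s.count('$')
    extra = abs(n_open - n_close)
    return n_sym >= extra and (n_sym - extra) % 2 == 0

print(can_be_balanced("()($"))   # True, e.g. "()()"
print(can_be_balanced("$()$("))  # False
```

Both sample inputs from the examples above produce the expected answers; the full multi-language implementations follow.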
Below is the implementation of the above approach:

C++

// C++ program for the above approach
#include <bits/stdc++.h>
using namespace std;

// Function to check if the string
// can be balanced by replacing the
// '$' with opening or closing brackets
bool canBeBalanced(string sequence)
{
    // If string can never be balanced
    if (sequence.size() % 2)
        return false;

    // Declare 2 stacks to check if all
    // ) can be balanced with ( or $
    // and vice-versa
    stack<char> stack_, stack2_;

    // Store the count of the occurrence
    // of (, ) and $
    int countOpen = 0, countClosed = 0;
    int countSymbol = 0;

    // Traverse the string
    for (int i = 0; i < sequence.size(); i++) {
        if (sequence[i] == ')') {

            // Increment closed bracket count by 1
            countClosed++;

            // If there is no opening bracket to
            // match it, then return false
            if (stack_.empty()) {
                return false;
            }

            // Otherwise, pop the character from the stack
            else {
                stack_.pop();
            }
        }
        else {

            // If the current character is an opening
            // bracket or $, push it to the stack
            if (sequence[i] == '$') {

                // Increment symbol count by 1
                countSymbol++;
            }
            else {

                // Increment open bracket count by 1
                countOpen++;
            }
            stack_.push(sequence[i]);
        }
    }

    // Traverse the string from the end
    // and repeat the same process
    for (int i = sequence.size() - 1; i >= 0; i--) {
        if (sequence[i] == '(') {

            // If there is no closing bracket to match it
            if (stack2_.empty()) {
                return false;
            }

            // Otherwise, pop the character from the stack
            else {
                stack2_.pop();
            }
        }
        else {
            stack2_.push(sequence[i]);
        }
    }

    // Store the extra ( or ) which are not balanced yet
    int extra = abs(countClosed - countOpen);

    // Check if $ is available to balance the extra brackets
    if (countSymbol < extra) {
        return false;
    }
    else {

        // Count remaining $ after balancing extra ( and )
        countSymbol -= extra;

        // Check if each pair of $ is convertible into ()
        if (countSymbol % 2 == 0) {
            return true;
        }
    }
    return false;
}

// Driver Code
int main()
{
    string S = "()($";

    // Function Call
    if (canBeBalanced(S)) {
        cout << "Yes";
    }
    else {
        cout << "No";
    }
    return 0;
}

Java

// Java program for the above approach
import java.util.*;

class GFG {

    // Function to check if the String
    // can be balanced by replacing the
    // '$' with opening or closing brackets
    static boolean canBeBalanced(String sequence)
    {
        // If String can never be balanced
        if (sequence.length() % 2 == 1)
            return false;

        // Declare 2 stacks to check if all
        // ) can be balanced with ( or $
        // and vice-versa
        Stack<Character> stack_ = new Stack<Character>();
        Stack<Character> stack2_ = new Stack<Character>();

        // Store the count of the occurrence
        // of (, ) and $
        int countOpen = 0, countClosed = 0;
        int countSymbol = 0;

        // Traverse the String
        for (int i = 0; i < sequence.length(); i++) {
            if (sequence.charAt(i) == ')') {

                // Increment closed bracket count by 1
                countClosed++;

                // If there is no opening bracket to
                // match it, then return false
                if (stack_.isEmpty()) {
                    return false;
                }

                // Otherwise, pop the character from the stack
                else {
                    stack_.pop();
                }
            }
            else {

                // If the current character is an opening
                // bracket or $, push it to the stack
                if (sequence.charAt(i) == '$') {

                    // Increment symbol count by 1
                    countSymbol++;
                }
                else {

                    // Increment open bracket count by 1
                    countOpen++;
                }
                stack_.add(sequence.charAt(i));
            }
        }

        // Traverse the String from the end
        // and repeat the same process
        for (int i = sequence.length() - 1; i >= 0; i--) {
            if (sequence.charAt(i) == '(') {

                // If there is no closing bracket to match it
                if (stack2_.isEmpty()) {
                    return false;
                }

                // Otherwise, pop the character from the stack
                else {
                    stack2_.pop();
                }
            }
            else {
                stack2_.add(sequence.charAt(i));
            }
        }

        // Store the extra ( or ) which are not balanced yet
        int extra = Math.abs(countClosed - countOpen);

        // Check if $ is available to balance the extra brackets
        if (countSymbol < extra) {
            return false;
        }
        else {

            // Count remaining $ after balancing extra ( and )
            countSymbol -= extra;

            // Check if each pair of $ is convertible into ()
            if (countSymbol % 2 == 0) {
                return true;
            }
        }
        return false;
    }

    // Driver Code
    public static void main(String[] args)
    {
        String S = "()($";

        // Function Call
        if (canBeBalanced(S)) {
            System.out.print("Yes");
        }
        else {
            System.out.print("No");
        }
    }
}

// This code is contributed by 29AjayKumar

Python3

# Python3 program for the above approach

# Function to check if the string
# can be balanced by replacing the
# '$' with opening or closing brackets
def canBeBalanced(sequence):

    # If string can never be balanced
    if (len(sequence) % 2):
        return False

    # Declare 2 stacks to check if all
    # ) can be balanced with ( or $
    # and vice-versa
    stack_, stack2_ = [], []

    # Store the count of the occurrence
    # of (, ) and $
    countOpen, countClosed = 0, 0
    countSymbol = 0

    # Traverse the string
    for i in range(len(sequence)):
        if (sequence[i] == ')'):

            # Increment closed bracket count by 1
            countClosed += 1

            # If there is no opening bracket to
            # match it, then return False
            if (len(stack_) == 0):
                return False

            # Otherwise, pop the character from the stack
            else:
                del stack_[-1]
        else:

            # If the current character is an opening
            # bracket or $, push it to the stack
            if (sequence[i] == '$'):

                # Increment symbol count by 1
                countSymbol += 1
            else:

                # Increment open bracket count by 1
                countOpen += 1
            stack_.append(sequence[i])

    # Traverse the string from the end
    # and repeat the same process
    for i in range(len(sequence) - 1, -1, -1):
        if (sequence[i] == '('):

            # If there is no closing bracket to match it
            if (len(stack2_) == 0):
                return False

            # Otherwise, pop the character from the stack
            else:
                del stack2_[-1]
        else:
            stack2_.append(sequence[i])

    # Store the extra ( or ) which are not balanced yet
    extra = abs(countClosed - countOpen)

    # Check if $ is available to balance the extra brackets
    if (countSymbol < extra):
        return False
    else:

        # Count remaining $ after balancing extra ( and )
        countSymbol -= extra

        # Check if each pair of $ is convertible into ()
        if (countSymbol % 2 == 0):
            return True

    return False

# Driver Code
if __name__ == '__main__':

    S = "()($"

    # Function Call
    if (canBeBalanced(S)):
        print("Yes")
    else:
        print("No")

# This code is contributed by mohit kumar 29

C#

// C# program for the above approach
using System;
using System.Collections.Generic;

class GFG {

    // Function to check if the String
    // can be balanced by replacing the
    // '$' with opening or closing brackets
    static bool canBeBalanced(String sequence)
    {
        // If String can never be balanced
        if (sequence.Length % 2 == 1)
            return false;

        // Declare 2 stacks to check if all
        // ) can be balanced with ( or $
        // and vice-versa
        Stack<char> stack_ = new Stack<char>();
        Stack<char> stack2_ = new Stack<char>();

        // Store the count of the occurrence
        // of (, ) and $
        int countOpen = 0, countClosed = 0;
        int countSymbol = 0;

        // Traverse the String
        for (int i = 0; i < sequence.Length; i++) {
            if (sequence[i] == ')') {

                // Increment closed bracket count by 1
                countClosed++;

                // If there is no opening bracket to
                // match it, then return false
                if (stack_.Count == 0) {
                    return false;
                }

                // Otherwise, pop the character from the stack
                else {
                    stack_.Pop();
                }
            }
            else {

                // If the current character is an opening
                // bracket or $, push it to the stack
                if (sequence[i] == '$') {

                    // Increment symbol count by 1
                    countSymbol++;
                }
                else {

                    // Increment open bracket count by 1
                    countOpen++;
                }
                stack_.Push(sequence[i]);
            }
        }

        // Traverse the String from the end
        // and repeat the same process
        for (int i = sequence.Length - 1; i >= 0; i--) {
            if (sequence[i] == '(') {

                // If there is no closing bracket to match it
                if (stack2_.Count == 0) {
                    return false;
                }

                // Otherwise, pop the character from the stack
                else {
                    stack2_.Pop();
                }
            }
            else {
                stack2_.Push(sequence[i]);
            }
        }

        // Store the extra ( or ) which are not balanced yet
        int extra = Math.Abs(countClosed - countOpen);

        // Check if $ is available to balance the extra brackets
        if (countSymbol < extra) {
            return false;
        }
        else {

            // Count remaining $ after balancing extra ( and )
            countSymbol -= extra;

            // Check if each pair of $ is convertible into ()
            // (the source listing is truncated at this point; the
            // remainder mirrors the C++ and Java versions above)
            if (countSymbol % 2 == 0) {
                return true;
            }
        }
        return false;
    }

    // Driver Code
    public static void Main(String[] args)
    {
        String S = "()($";

        // Function Call
        if (canBeBalanced(S)) {
            Console.Write("Yes");
        }
        else {
            Console.Write("No");
        }
    }
}
return true; } } return false; } // Driver Code public static void Main(String[] args) { String S = "()($"; // Function Call if (canBeBalanced(S)) { Console.Write("Yes"); } else { Console.Write("No"); } }} // This code is contributed by 29AjayKumar <script> // Javascript program for the above approach // Function to check if the String // can be balanced by replacing the // '$' with opening or closing brackets function canBeBalanced(sequence) { // If String can never be balanced if (sequence.length % 2 == 1) return false; // Declare 2 stacks to check if all // ) can be balanced with ( or $ // and vice-versa let stack_ = []; let stack2_ = []; // Store the count the occurence // of (, ) and $ let countOpen = 0, countClosed = 0; let countSymbol = 0; // Traverse the String for (let i = 0; i < sequence.length; i++) { if (sequence[i] == ')') { // Increment closed bracket // count by 1 countClosed++; // If there are no opening // bracket to match it // then return false if (stack_.length==0) { return false; } // Otherwise, pop the character // from the stack else { stack_.pop(); } } else { // If current character is // an opening bracket or $, // push it to the stack if (sequence[i] == '$') { // Increment symbol // count by 1 countSymbol++; } else { // Increment open // bracket count by 1 countOpen++; } stack_.push(sequence[i]); } } // Traverse the String from end // and repeat the same process for (let i = sequence.length - 1; i >= 0; i--) { if (sequence[i] == '(') { // If there are no closing // brackets to match it if (stack2_.length==0) { return false; } // Otherwise, pop character // from stack else { stack2_.pop(); } } else { stack2_.push(sequence[i]); } } // Store the extra ( or ) which // are not balanced yet let extra = Math.abs(countClosed - countOpen); // Check if $ is available to // balance the extra brackets if (countSymbol < extra) { return false; } else { // Count ramaining $ after // balancing extra ( and ) countSymbol -= extra; // Check if each pair of $ 
// is convertible in () if (countSymbol % 2 == 0) { return true; } } return false; } // Driver Code let S = "()($"; // Function Call if (canBeBalanced(S)) { document.write("Yes"); } else { document.write("No"); } // This code is contributed by patel2127</script> Yes Time Complexity: O(N)Auxiliary Space: O(N) mohit kumar 29 29AjayKumar patel2127 rkbhola5 cpp-stack-functions interview-preparation Stack Strings Strings Stack Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here. Comments Old Comments Real-time application of Data Structures Sort a stack using a temporary stack Iterative Tower of Hanoi Reverse individual words ZigZag Tree Traversal Reverse a string in Java Write a program to reverse an array or string Longest Common Subsequence | DP-4 Write a program to print all permutations of a given string C++ Data Types
[ { "code": null, "e": 24688, "s": 24660, "text": "\n03 Mar, 2022" }, { "code": null, "e": 24899, "s": 24688, "text": "Given a string S of size N consisting of ‘(‘, ‘)’, and ‘$’, the task is to check whether the given string can be converted into a balanced bracket sequence by replacing every occurrence of $ with either ) or (." }, { "code": null, "e": 25014, "s": 24899, "text": "A balanced bracket sequence is a sequence where every opening bracket “(“ has a corresponding closing bracket “)”." }, { "code": null, "e": 25024, "s": 25014, "text": "Examples:" }, { "code": null, "e": 25124, "s": 25024, "text": "Input: S = “()($”Output: YesExplanation: Convert the string into a balanced bracket sequence: ()()." }, { "code": null, "e": 25311, "s": 25124, "text": "Input: S = “$()$(“Output: NoExplanation: Possible replacements are “(((((“, “(())(“, “)(()(“, “)()((“, none of which are balanced. Hence, a balanced bracket sequence can not be obtained." }, { "code": null, "e": 25492, "s": 25311, "text": "Approach: The above problem can be solved by using a Stack. The idea is to check if all ) can be balanced with ( or $ and vice versa. Follow the steps below to solve this problem:" }, { "code": null, "e": 25604, "s": 25492, "text": "Store the frequency of “(“, “)” and “$” in variables like countOpen, countClosed, and countSymbol respectively." }, { "code": null, "e": 25735, "s": 25604, "text": "Initialize a variable ans as 1 to store the required result and a stack stack_1 to check if all “)” can be balanced with “(“ or $." }, { "code": null, "e": 25951, "s": 25735, "text": "Traverse the string S using the variable i and do the following:If the current character S[i] is “)”, if stack_1 is empty, then set ans to 0, Else pop character from stack_1. Else push the character S[i] to stack_1." }, { "code": null, "e": 26062, "s": 25951, "text": "If the current character S[i] is “)”, if stack_1 is empty, then set ans to 0, Else pop character from stack_1." 
}, { "code": null, "e": 26104, "s": 26062, "text": " Else push the character S[i] to stack_1." }, { "code": null, "e": 26209, "s": 26104, "text": "Reverse the string S, and follow the same procedure to check if all “(“ can be balanced with “)” or “$”." }, { "code": null, "e": 26428, "s": 26209, "text": "If the value of countSymbol is less than the absolute difference of countOpen and countClosed then set ans to 0. Else balance the extra parenthesis with the symbols. After balancing if countSymbol is odd, set ans as 0." }, { "code": null, "e": 26489, "s": 26428, "text": "After the above steps, print the value of ans as the result." }, { "code": null, "e": 26540, "s": 26489, "text": "Below is the implementation of the above approach:" }, { "code": null, "e": 26544, "s": 26540, "text": "C++" }, { "code": null, "e": 26549, "s": 26544, "text": "Java" }, { "code": null, "e": 26557, "s": 26549, "text": "Python3" }, { "code": null, "e": 26560, "s": 26557, "text": "C#" }, { "code": null, "e": 26571, "s": 26560, "text": "Javascript" }, { "code": "// C++ program for the above approach#include <bits/stdc++.h>using namespace std; // Function to check if the string// can be balanced by replacing the// '$' with opening or closing bracketsbool canBeBalanced(string sequence){ // If string can never be balanced if (sequence.size() % 2) return false; // Declare 2 stacks to check if all // ) can be balanced with ( or $ // and vice-versa stack<char> stack_, stack2_; // Store the count the occurence // of (, ) and $ int countOpen = 0, countClosed = 0; int countSymbol = 0; // Traverse the string for (int i = 0; i < sequence.size(); i++) { if (sequence[i] == ')') { // Increment closed bracket // count by 1 countClosed++; // If there are no opening // bracket to match it // then return false if (stack_.empty()) { return false; } // Otherwise, pop the character // from the stack else { stack_.pop(); } } else { // If current character is // an opening bracket or $, // push it to the stack if 
(sequence[i] == '$') { // Increment symbol // count by 1 countSymbol++; } else { // Increment open // bracket count by 1 countOpen++; } stack_.push(sequence[i]); } } // Traverse the string from end // and repeat the same process for (int i = sequence.size() - 1; i >= 0; i--) { if (sequence[i] == '(') { // If there are no closing // brackets to match it if (stack2_.empty()) { return false; } // Otherwise, pop character // from stack else { stack2_.pop(); } } else { stack2_.push(sequence[i]); } } // Store the extra ( or ) which // are not balanced yet int extra = abs(countClosed - countOpen); // Check if $ is available to // balance the extra brackets if (countSymbol < extra) { return false; } else { // Count ramaining $ after // balancing extra ( and ) countSymbol -= extra; // Check if each pair of $ // is convertible in () if (countSymbol % 2 == 0) { return true; } } return false;} // Driver Codeint main(){ string S = \"()($\"; // Function Call if (canBeBalanced(S)) { cout << \"Yes\"; } else { cout << \"No\"; } return 0;}", "e": 29303, "s": 26571, "text": null }, { "code": "// Java program for the above approachimport java.util.*;class GFG{ // Function to check if the String// can be balanced by replacing the// '$' with opening or closing bracketsstatic boolean canBeBalanced(String sequence){ // If String can never be balanced if (sequence.length() % 2 == 1) return false; // Declare 2 stacks to check if all // ) can be balanced with ( or $ // and vice-versa Stack<Character> stack_ = new Stack<Character>(); Stack<Character> stack2_ = new Stack<Character>(); // Store the count the occurence // of (, ) and $ int countOpen = 0, countClosed = 0; int countSymbol = 0; // Traverse the String for (int i = 0; i < sequence.length(); i++) { if (sequence.charAt(i) == ')') { // Increment closed bracket // count by 1 countClosed++; // If there are no opening // bracket to match it // then return false if (stack_.isEmpty()) { return false; } // Otherwise, pop the character // from 
the stack else { stack_.pop(); } } else { // If current character is // an opening bracket or $, // push it to the stack if (sequence.charAt(i) == '$') { // Increment symbol // count by 1 countSymbol++; } else { // Increment open // bracket count by 1 countOpen++; } stack_.add(sequence.charAt(i)); } } // Traverse the String from end // and repeat the same process for (int i = sequence.length() - 1; i >= 0; i--) { if (sequence.charAt(i) == '(') { // If there are no closing // brackets to match it if (stack2_.isEmpty()) { return false; } // Otherwise, pop character // from stack else { stack2_.pop(); } } else { stack2_.add(sequence.charAt(i)); } } // Store the extra ( or ) which // are not balanced yet int extra = Math.abs(countClosed - countOpen); // Check if $ is available to // balance the extra brackets if (countSymbol < extra) { return false; } else { // Count ramaining $ after // balancing extra ( and ) countSymbol -= extra; // Check if each pair of $ // is convertible in () if (countSymbol % 2 == 0) { return true; } } return false;} // Driver Codepublic static void main(String[] args){ String S = \"()($\"; // Function Call if (canBeBalanced(S)) { System.out.print(\"Yes\"); } else { System.out.print(\"No\"); }}} // This code is contributed by 29AjayKumar", "e": 32359, "s": 29303, "text": null }, { "code": "# Python3 program for the above approach # Function to check if the# can be balanced by replacing the# '$' with opening or closing bracketsdef canBeBalanced(sequence): # If string can never be balanced if (len(sequence) % 2): return False # Declare 2 stacks to check if all # ) can be balanced with ( or $ # and vice-versa stack_, stack2_ = [], [] # Store the count the occurence # of (, ) and $ countOpen ,countClosed = 0, 0 countSymbol = 0 # Traverse the string for i in range(len(sequence)): if (sequence[i] == ')'): # Increment closed bracket # count by 1 countClosed += 1 # If there are no opening # bracket to match it # then return False if (len(stack_) == 0): 
return False # Otherwise, pop the character # from the stack else: del stack_[-1] else: # If current character is # an opening bracket or $, # push it to the stack if (sequence[i] == '$'): # Increment symbol # count by 1 countSymbol += 1 else: # Increment open # bracket count by 1 countOpen += 1 stack_.append(sequence[i]) # Traverse the string from end # and repeat the same process for i in range(len(sequence)-1, -1, -1): if (sequence[i] == '('): # If there are no closing # brackets to match it if (len(stack2_) == 0): return False # Otherwise, pop character # from stack else: del stack2_[-1] else : stack2_.append(sequence[i]) # Store the extra ( or ) which # are not balanced yet extra = abs(countClosed - countOpen) # Check if $ is available to # balance the extra brackets if (countSymbol < extra): return False else : # Count ramaining $ after # balancing extra ( and ) countSymbol -= extra # Check if each pair of $ # is convertible in () if (countSymbol % 2 == 0) : return True return False # Driver Codeif __name__ == '__main__': S = \"()($\" # Function Call if (canBeBalanced(S)): print(\"Yes\") else: print(\"No\") # This code is contributed by mohit kumar 29", "e": 34812, "s": 32359, "text": null }, { "code": "// C# program for the above approachusing System;using System.Collections.Generic;class GFG{ // Function to check if the String // can be balanced by replacing the // '$' with opening or closing brackets static bool canBeBalanced(String sequence) { // If String can never be balanced if (sequence.Length % 2 == 1) return false; // Declare 2 stacks to check if all // ) can be balanced with ( or $ // and vice-versa Stack<char> stack_ = new Stack<char>(); Stack<char> stack2_ = new Stack<char>(); // Store the count the occurence // of (, ) and $ int countOpen = 0, countClosed = 0; int countSymbol = 0; // Traverse the String for (int i = 0; i < sequence.Length; i++) { if (sequence[i] == ')') { // Increment closed bracket // count by 1 countClosed++; // If there are 
no opening // bracket to match it // then return false if (stack_.Count==0) { return false; } // Otherwise, pop the character // from the stack else { stack_.Pop(); } } else { // If current character is // an opening bracket or $, // push it to the stack if (sequence[i] == '$') { // Increment symbol // count by 1 countSymbol++; } else { // Increment open // bracket count by 1 countOpen++; } stack_.Push(sequence[i]); } } // Traverse the String from end // and repeat the same process for (int i = sequence.Length - 1; i >= 0; i--) { if (sequence[i] == '(') { // If there are no closing // brackets to match it if (stack2_.Count == 0) { return false; } // Otherwise, pop character // from stack else { stack2_.Pop(); } } else { stack2_.Push(sequence[i]); } } // Store the extra ( or ) which // are not balanced yet int extra = Math.Abs(countClosed - countOpen); // Check if $ is available to // balance the extra brackets if (countSymbol < extra) { return false; } else { // Count ramaining $ after // balancing extra ( and ) countSymbol -= extra; // Check if each pair of $ // is convertible in () if (countSymbol % 2 == 0) { return true; } } return false; } // Driver Code public static void Main(String[] args) { String S = \"()($\"; // Function Call if (canBeBalanced(S)) { Console.Write(\"Yes\"); } else { Console.Write(\"No\"); } }} // This code is contributed by 29AjayKumar", "e": 37590, "s": 34812, "text": null }, { "code": "<script> // Javascript program for the above approach // Function to check if the String // can be balanced by replacing the // '$' with opening or closing brackets function canBeBalanced(sequence) { // If String can never be balanced if (sequence.length % 2 == 1) return false; // Declare 2 stacks to check if all // ) can be balanced with ( or $ // and vice-versa let stack_ = []; let stack2_ = []; // Store the count the occurence // of (, ) and $ let countOpen = 0, countClosed = 0; let countSymbol = 0; // Traverse the String for (let i = 0; i < 
sequence.length; i++) { if (sequence[i] == ')') { // Increment closed bracket // count by 1 countClosed++; // If there are no opening // bracket to match it // then return false if (stack_.length==0) { return false; } // Otherwise, pop the character // from the stack else { stack_.pop(); } } else { // If current character is // an opening bracket or $, // push it to the stack if (sequence[i] == '$') { // Increment symbol // count by 1 countSymbol++; } else { // Increment open // bracket count by 1 countOpen++; } stack_.push(sequence[i]); } } // Traverse the String from end // and repeat the same process for (let i = sequence.length - 1; i >= 0; i--) { if (sequence[i] == '(') { // If there are no closing // brackets to match it if (stack2_.length==0) { return false; } // Otherwise, pop character // from stack else { stack2_.pop(); } } else { stack2_.push(sequence[i]); } } // Store the extra ( or ) which // are not balanced yet let extra = Math.abs(countClosed - countOpen); // Check if $ is available to // balance the extra brackets if (countSymbol < extra) { return false; } else { // Count ramaining $ after // balancing extra ( and ) countSymbol -= extra; // Check if each pair of $ // is convertible in () if (countSymbol % 2 == 0) { return true; } } return false; } // Driver Code let S = \"()($\"; // Function Call if (canBeBalanced(S)) { document.write(\"Yes\"); } else { document.write(\"No\"); } // This code is contributed by patel2127</script>", "e": 40960, "s": 37590, "text": null }, { "code": null, "e": 40964, "s": 40960, "text": "Yes" }, { "code": null, "e": 41009, "s": 40966, "text": "Time Complexity: O(N)Auxiliary Space: O(N)" }, { "code": null, "e": 41024, "s": 41009, "text": "mohit kumar 29" }, { "code": null, "e": 41036, "s": 41024, "text": "29AjayKumar" }, { "code": null, "e": 41046, "s": 41036, "text": "patel2127" }, { "code": null, "e": 41055, "s": 41046, "text": "rkbhola5" }, { "code": null, "e": 41075, "s": 41055, "text": "cpp-stack-functions" }, { 
"code": null, "e": 41097, "s": 41075, "text": "interview-preparation" }, { "code": null, "e": 41103, "s": 41097, "text": "Stack" }, { "code": null, "e": 41111, "s": 41103, "text": "Strings" }, { "code": null, "e": 41119, "s": 41111, "text": "Strings" }, { "code": null, "e": 41125, "s": 41119, "text": "Stack" }, { "code": null, "e": 41223, "s": 41125, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 41232, "s": 41223, "text": "Comments" }, { "code": null, "e": 41245, "s": 41232, "text": "Old Comments" }, { "code": null, "e": 41286, "s": 41245, "text": "Real-time application of Data Structures" }, { "code": null, "e": 41323, "s": 41286, "text": "Sort a stack using a temporary stack" }, { "code": null, "e": 41348, "s": 41323, "text": "Iterative Tower of Hanoi" }, { "code": null, "e": 41373, "s": 41348, "text": "Reverse individual words" }, { "code": null, "e": 41395, "s": 41373, "text": "ZigZag Tree Traversal" }, { "code": null, "e": 41420, "s": 41395, "text": "Reverse a string in Java" }, { "code": null, "e": 41466, "s": 41420, "text": "Write a program to reverse an array or string" }, { "code": null, "e": 41500, "s": 41466, "text": "Longest Common Subsequence | DP-4" }, { "code": null, "e": 41560, "s": 41500, "text": "Write a program to print all permutations of a given string" } ]
ASP.NET Core - Razor Edit Form
In this chapter, we will continue discussing tag helpers. We will also add a new feature to our application: the ability to edit the details of an existing employee. We will start by adding a link beside each employee that goes to an Edit action on the HomeController.

```
@model HomePageViewModel
@{
    ViewBag.Title = "Home";
}
<h1>Welcome!</h1>

<table>
    @foreach (var employee in Model.Employees) {
        <tr>
            <td>@employee.Name</td>
            <td>
                <a asp-controller="Home" asp-action="Details"
                   asp-route-id="@employee.Id">Details</a>
                <a asp-controller="Home" asp-action="Edit"
                   asp-route-id="@employee.Id">Edit</a>
            </td>
        </tr>
    }
</table>
```

We don't have the Edit action yet, but we will need an employee ID that we can edit. So let us first create a new view by right-clicking on the Views → Home folder and selecting Add → New Item.

In the middle pane, select the MVC View Page; call the page Edit.cshtml. Now, click on the Add button.

Add the following code in the Edit.cshtml file.

```
@model Employee
@{
    ViewBag.Title = $"Edit {Model.Name}";
}
<h1>Edit @Model.Name</h1>

<form asp-action="Edit" method="post">
    <div>
        <label asp-for="Name"></label>
        <input asp-for="Name" />
        <span asp-validation-for="Name"></span>
    </div>
    <div>
        <input type="submit" value="Save" />
    </div>
</form>
```

For the title of this page, we want to say "Edit" followed by the employee name. The dollar sign in front of Edit allows the runtime to replace Model.Name with the value in that property, i.e. the employee name.

Inside the form tag, we can use tag helpers like asp-action and asp-controller, so that when the user submits this form it goes directly to a specific controller action. In this case, we want to go to the Edit action on the same controller, and we want to say explicitly that this form should use an HttpPost. The default method for a form is a GET, and we do not want to edit an employee using a GET operation.

On the label tag, the asp-for tag helper says that this is the label for the Name property of the model. It sets up the for attribute with the correct value and sets the inner text of the label so that it actually displays what we want, such as the employee name.

Let us go to the HomeController class and add an Edit action that returns the view giving the user a form to edit an employee; we will then need a second Edit action that responds to an HttpPost, as shown below.

```csharp
[HttpGet]
public IActionResult Edit(int id)
{
    var context = new FirstAppDemoDbContext();
    SQLEmployeeData sqlData = new SQLEmployeeData(context);
    var model = sqlData.Get(id);

    if (model == null)
    {
        return RedirectToAction("Index");
    }
    return View(model);
}
```

First, we need an Edit action that responds to a GET request. It takes an employee ID, and the code here is similar to the code in the Details action. We first retrieve the data of the employee the user wants to edit, and we make sure the employee actually exists. If it doesn't exist, we redirect the user back to the Index view; when the employee exists, we render the Edit view.

We also need to respond to the HttpPost that the form will send. Let us add a new class in the HomeController.cs file as shown in the following program.

```csharp
public class EmployeeEditViewModel
{
    [Required, MaxLength(80)]
    public string Name { get; set; }
}
```

The Edit action that responds to the HttpPost takes an EmployeeEditViewModel, not an Employee itself, because we only want to capture the items that are in the form in the Edit.cshtml file. The following is the implementation of this Edit action.

```csharp
[HttpPost]
public IActionResult Edit(int id, EmployeeEditViewModel input)
{
    var context = new FirstAppDemoDbContext();
    SQLEmployeeData sqlData = new SQLEmployeeData(context);
    var employee = sqlData.Get(id);

    if (employee != null && ModelState.IsValid)
    {
        employee.Name = input.Name;
        context.SaveChanges();
        return RedirectToAction("Details", new { id = employee.Id });
    }
    return View(employee);
}
```

The edit form is always delivered from a URL that has an ID in it according to our routing rules, something like /home/edit/1, and the form always posts back to that same URL, /home/edit/1. The MVC framework pulls that ID out of the URL and passes it in as a parameter.

We always need to check that the ModelState is valid, and also make sure the employee is in the database and is not null, before we perform an update operation on the database. If none of that is true, we return the view and allow the user to try again. Although, in a real application with concurrent users, if the employee is null it could be because the employee's details were deleted by someone else; in that case, tell the user that the employee doesn't exist.

Otherwise, check the ModelState. If the ModelState is invalid, return the view; this allows the user to fix the edit and make the ModelState valid. If everything checks out, copy the name from the input view model to the employee retrieved from the database and save the changes. The SaveChanges() method flushes those changes to the database.

The following is the complete implementation of the HomeController.

```csharp
using Microsoft.AspNet.Mvc;
using FirstAppDemo.ViewModels;
using FirstAppDemo.Services;
using FirstAppDemo.Entities;
using FirstAppDemo.Models;

using System.Collections.Generic;
using System.Linq;
using System.ComponentModel.DataAnnotations;

namespace FirstAppDemo.Controllers
{
    public class HomeController : Controller
    {
        public ViewResult Index()
        {
            var model = new HomePageViewModel();
            using (var context = new FirstAppDemoDbContext())
            {
                SQLEmployeeData sqlData = new SQLEmployeeData(context);
                model.Employees = sqlData.GetAll();
            }
            return View(model);
        }

        public IActionResult Details(int id)
        {
            var context = new FirstAppDemoDbContext();
            SQLEmployeeData sqlData = new SQLEmployeeData(context);
            var model = sqlData.Get(id);

            if (model == null)
            {
                return RedirectToAction("Index");
            }
            return View(model);
        }

        [HttpGet]
        public IActionResult Edit(int id)
        {
            var context = new FirstAppDemoDbContext();
            SQLEmployeeData sqlData = new SQLEmployeeData(context);
            var model = sqlData.Get(id);

            if (model == null)
            {
                return RedirectToAction("Index");
            }
            return View(model);
        }

        [HttpPost]
        public IActionResult Edit(int id, EmployeeEditViewModel input)
        {
            var context = new FirstAppDemoDbContext();
            SQLEmployeeData sqlData = new SQLEmployeeData(context);
            var employee = sqlData.Get(id);

            if (employee != null && ModelState.IsValid)
            {
                employee.Name = input.Name;
                context.SaveChanges();
                return RedirectToAction("Details", new { id = employee.Id });
            }
            return View(employee);
        }
    }

    public class SQLEmployeeData
    {
        private FirstAppDemoDbContext _context { get; set; }

        public SQLEmployeeData(FirstAppDemoDbContext context)
        {
            _context = context;
        }

        public void Add(Employee emp)
        {
            _context.Add(emp);
            _context.SaveChanges();
        }

        public Employee Get(int ID)
        {
            return _context.Employees.FirstOrDefault(e => e.Id == ID);
        }

        public IEnumerable<Employee> GetAll()
        {
            return _context.Employees.ToList<Employee>();
        }
    }

    public class HomePageViewModel
    {
        public IEnumerable<Employee> Employees { get; set; }
    }

    public class EmployeeEditViewModel
    {
        [Required, MaxLength(80)]
        public string Name { get; set; }
    }
}
```

Let us compile the program and run the application. We now have an Edit link available; let us edit the details of Josh by clicking on the Edit link. Change the name to Josh Groban and click the Save button. You can see that the name has been changed to Josh Groban. Let us now click on the Home link. On the home page, you will see the updated name.
[ { "code": null, "e": 2756, "s": 2461, "text": "In this chapter, we will continue discussing the tag helpers. We will also add a new feature in our application and give it the ability to edit the details of an existing employee. We will start by adding a link on the side of each employee that will go to an Edit action on the HomeController." }, { "code": null, "e": 3275, "s": 2756, "text": "@model HomePageViewModel \n@{ \n ViewBag.Title = \"Home\"; \n} \n<h1>Welcome!</h1> \n\n<table> \n @foreach (var employee in Model.Employees) { \n <tr> \n <td>@employee.Name</td> \n \n <td> \n <a asp-controller = \"Home\" asp-action = \"Details\" \n asp-routeid = \"@employee.Id\">Details</a> \n \n <a asp-controller = \"Home\" asp-action = \"Edit\" \n asp-routeid = \"@employee.Id\">Edit</a> \n \n </td> \n </tr> \n } \n</table>" }, { "code": null, "e": 3466, "s": 3275, "text": "We don't have the Edit action yet, but we will need an employee ID that we can edit. So let us first create a new view by right-clicking on the Views →Home folder and select Add → New Items." }, { "code": null, "e": 3569, "s": 3466, "text": "In the middle pane, select the MVC View Page; call the page Edit.cshtml. Now, click on the Add button." }, { "code": null, "e": 3617, "s": 3569, "text": "Add the following code in the Edit.cshtml file." }, { "code": null, "e": 3977, "s": 3617, "text": "@model Employee \n@{ \n ViewBag.Title = $\"Edit {Model.Name}\"; \n} \n<h1>Edit @Model.Name</h1> \n\n<form asp-action=\"Edit\" method=\"post\"> \n <div> \n <label asp-for = \"Name\"></label> \n <input asp-for = \"Name\" /> \n <span asp-validation-for = \"Name\"></span> \n </div> \n \n <div> \n <input type = \"submit\" value = \"Save\" /> \n </div> \n</form>" }, { "code": null, "e": 4073, "s": 3977, "text": "For the title of this page, we can say that we want to edit and then provide the employee name." 
}, { "code": null, "e": 4209, "s": 4073, "text": "The dollar sign in front of Edit will allow the runtime to replace Model.Name with a value that is in that property like employee name." }, { "code": null, "e": 4345, "s": 4209, "text": "The dollar sign in front of Edit will allow the runtime to replace Model.Name with a value that is in that property like employee name." }, { "code": null, "e": 4515, "s": 4345, "text": "Inside the form tag, we can use tag helpers like asp-action and asp-controller. so that when the user submits this form it goes directly to a specific controller action." }, { "code": null, "e": 4685, "s": 4515, "text": "Inside the form tag, we can use tag helpers like asp-action and asp-controller. so that when the user submits this form it goes directly to a specific controller action." }, { "code": null, "e": 4851, "s": 4685, "text": "In this case, we want to go to the Edit action on the same controller and we want to explicitly say that for the method on this form, it should be using an HttpPost." }, { "code": null, "e": 5017, "s": 4851, "text": "In this case, we want to go to the Edit action on the same controller and we want to explicitly say that for the method on this form, it should be using an HttpPost." }, { "code": null, "e": 5119, "s": 5017, "text": "The default method for a form is a GET, and we do not want to edit an employee using a GET operation." }, { "code": null, "e": 5221, "s": 5119, "text": "The default method for a form is a GET, and we do not want to edit an employee using a GET operation." }, { "code": null, "e": 5518, "s": 5221, "text": "In the label tag, we have used asp-for tag helper which says that this is a label for the Name property of the model. This tag helper can set up the Html.For attribute to have the correct value and to set the inner text of this label so that it actually displays what we want, like employee name." 
}, { "code": null, "e": 5815, "s": 5518, "text": "In the label tag, we have used asp-for tag helper which says that this is a label for the Name property of the model. This tag helper can set up the Html.For attribute to have the correct value and to set the inner text of this label so that it actually displays what we want, like employee name." }, { "code": null, "e": 6034, "s": 5815, "text": "Let us go to the HomeController class and add Edit action that returns the view that gives the user a form to edit an employee and then we will need a second Edit action that will respond to an HttpPost as shown below." }, { "code": null, "e": 6324, "s": 6034, "text": "[HttpGet] \npublic IActionResult Edit(int id) { \n var context = new FirstAppDemoDbContext(); \n SQLEmployeeData sqlData = new SQLEmployeeData(context); \n var model = sqlData.Get(id); \n \n if (model == null) { \n return RedirectToAction(\"Index\"); \n } \n return View(model); \n}" }, { "code": null, "e": 6764, "s": 6324, "text": "First, we need an edit action that will respond to a GET request. It will take an employee ID. The code here will be similar to the code that we have in the Details action. We will first extract the data of the employee that the user wants to edit. We also need to make sure that the employee actually exists. If it doesn't exist, we will redirect the user back to the Index view. But when an employee exists, we will render the Edit view." }, { "code": null, "e": 6829, "s": 6764, "text": "We also need to respond to the HttpPost that the form will send." }, { "code": null, "e": 6917, "s": 6829, "text": "Let us add a new class in the HomeController.cs file as shown in the following program." 
}, { "code": null, "e": 7024, "s": 6917, "text": "public class EmployeeEditViewModel { \n [Required, MaxLength(80)] \n public string Name { get; set; } \n}" }, { "code": null, "e": 7222, "s": 7024, "text": "In the Edit Action which will respond to the HttpPost will take an EmployeeEditViewModel, but not an employee itself, because we only want to capture items that are in form in the Edit.cshtml file." }, { "code": null, "e": 7278, "s": 7222, "text": "The following is the implementation of the Edit action." }, { "code": null, "e": 7723, "s": 7278, "text": "[HttpPost] \npublic IActionResult Edit(int id, EmployeeEditViewModel input) { \n var context = new FirstAppDemoDbContext(); \n SQLEmployeeData sqlData = new SQLEmployeeData(context); \n var employee = sqlData.Get(id); \n \n if (employee != null && ModelState.IsValid) { \n employee.Name = input.Name; \n context.SaveChanges(); \n return RedirectToAction(\"Details\", new { id = employee.Id }); \n } \n return View(employee); \n}" }, { "code": null, "e": 7863, "s": 7723, "text": "The edit form should always be delivered from an URL that has an ID in the URL according to our routing rules, something like /home/edit/1." }, { "code": null, "e": 7933, "s": 7863, "text": "The form is always going to post back to that same URL, /home/edit/1." }, { "code": null, "e": 8003, "s": 7933, "text": "The form is always going to post back to that same URL, /home/edit/1." }, { "code": null, "e": 8093, "s": 8003, "text": "The MVC framework will be able to pull that ID out of the URL and pass it as a parameter." }, { "code": null, "e": 8183, "s": 8093, "text": "The MVC framework will be able to pull that ID out of the URL and pass it as a parameter." }, { "code": null, "e": 8365, "s": 8183, "text": "We always need to check if the ModelState is valid and also make sure that this employee is in the database and it is not null before we perform an update operation in the database." 
}, { "code": null, "e": 8547, "s": 8365, "text": "We always need to check if the ModelState is valid and also make sure that this employee is in the database and it is not null before we perform an update operation in the database." }, { "code": null, "e": 8772, "s": 8547, "text": "If none of that is true, we will return a view and allow the user to try again. Although in a real application with concurrent users, if the employee is null, it could be because the employee details were deleted by someone." }, { "code": null, "e": 8997, "s": 8772, "text": "If none of that is true, we will return a view and allow the user to try again. Although in a real application with concurrent users, if the employee is null, it could be because the employee details were deleted by someone." }, { "code": null, "e": 9076, "s": 8997, "text": "If that employee doesn't exist, tell the user that the employee doesn't exist." }, { "code": null, "e": 9155, "s": 9076, "text": "If that employee doesn't exist, tell the user that the employee doesn't exist." }, { "code": null, "e": 9297, "s": 9155, "text": "Otherwise, check the ModelState. If the ModelState is invalid, then return a view. This allows to fix the edit and make the ModelState valid." }, { "code": null, "e": 9439, "s": 9297, "text": "Otherwise, check the ModelState. If the ModelState is invalid, then return a view. This allows to fix the edit and make the ModelState valid." }, { "code": null, "e": 9623, "s": 9439, "text": "Copy the name from the Input view model to the employee retrieved from the database and save the changes. The SaveChagnes() method is going to flush all those changes to the database." }, { "code": null, "e": 9807, "s": 9623, "text": "Copy the name from the Input view model to the employee retrieved from the database and save the changes. The SaveChagnes() method is going to flush all those changes to the database." 
}, { "code": null, "e": 9875, "s": 9807, "text": "The following is the complete implementation of the HomeController." }, { "code": null, "e": 12503, "s": 9875, "text": "using Microsoft.AspNet.Mvc; \n\nusing FirstAppDemo.ViewModels; \nusing FirstAppDemo.Services; \nusing FirstAppDemo.Entities; \nusing FirstAppDemo.Models; \n\nusing System.Collections.Generic; \nusing System.Linq; \nusing System.ComponentModel.DataAnnotations; \n\nnamespace FirstAppDemo.Controllers { \n public class HomeController : Controller { \n public ViewResult Index() { \n var model = new HomePageViewModel(); \n using (var context = new FirstAppDemoDbContext()) { \n SQLEmployeeData sqlData = new SQLEmployeeData(context); \n model.Employees = sqlData.GetAll(); \n } \n return View(model); \n } \n public IActionResult Details(int id) { \n var context = new FirstAppDemoDbContext(); \n SQLEmployeeData sqlData = new SQLEmployeeData(context); \n var model = sqlData.Get(id)\n \n if (model == null) { \n return RedirectToAction(\"Index\"); \n } \n return View(model); \n } \n [HttpGet] \n public IActionResult Edit(int id) { \n var context = new FirstAppDemoDbContext(); \n SQLEmployeeData sqlData = new SQLEmployeeData(context); \n var model = sqlData.Get(id); \n \n if (model == null) { \n return RedirectToAction(\"Index\"); \n } \n return View(model); \n } \n [HttpPost] \n public IActionResult Edit(int id, EmployeeEditViewModel input) { \n var context = new FirstAppDemoDbContext(); \n SQLEmployeeData sqlData = new SQLEmployeeData(context); \n var employee = sqlData.Get(id); \n \n if (employee != null && ModelState.IsValid) { \n employee.Name = input.Name; \n context.SaveChanges(); \n return RedirectToAction(\"Details\", new { id = employee.Id }); \n } \n return View(employee); \n } \n }\n public class SQLEmployeeData {\n private FirstAppDemoDbContext _context { get; set; }\n public SQLEmployeeData(FirstAppDemoDbContext context) {\n _context = context;\n }\n public void Add(Employee emp) {\n 
_context.Add(emp);\n _context.SaveChanges();\n }\n public Employee Get(int ID) {\n return _context.Employees.FirstOrDefault(e => e.Id == ID);\n }\n public IEnumerable<Employee> GetAll() {\n return _context.Employees.ToList<Employee>();\n }\n }\n public class HomePageViewModel {\n public IEnumerable<Employee> Employees { get; set; }\n }\n public class EmployeeEditViewModel {\n [Required, MaxLength(80)]\n public string Name { get; set; }\n }\n}" }, { "code": null, "e": 12555, "s": 12503, "text": "Let us compile the program and run the application." }, { "code": null, "e": 12653, "s": 12555, "text": "We now have an Edit link available; let us edit the details of Josh by clicking on the Edit link." }, { "code": null, "e": 12692, "s": 12653, "text": "Let us change the name to Josh Groban." }, { "code": null, "e": 12715, "s": 12692, "text": "Click the Save button." }, { "code": null, "e": 12836, "s": 12715, "text": "You can see that the name has been changed to Josh Groban as in the above screenshot. Let us now click on the Home link." }, { "code": null, "e": 12889, "s": 12836, "text": "On the home page, you will now see the updated name." 
}, { "code": null, "e": 12924, "s": 12889, "text": "\n 51 Lectures \n 5.5 hours \n" }, { "code": null, "e": 12938, "s": 12924, "text": " Anadi Sharma" }, { "code": null, "e": 12973, "s": 12938, "text": "\n 44 Lectures \n 4.5 hours \n" }, { "code": null, "e": 12996, "s": 12973, "text": " Kaushik Roy Chowdhury" }, { "code": null, "e": 13030, "s": 12996, "text": "\n 42 Lectures \n 18 hours \n" }, { "code": null, "e": 13050, "s": 13030, "text": " SHIVPRASAD KOIRALA" }, { "code": null, "e": 13085, "s": 13050, "text": "\n 57 Lectures \n 3.5 hours \n" }, { "code": null, "e": 13102, "s": 13085, "text": " University Code" }, { "code": null, "e": 13137, "s": 13102, "text": "\n 40 Lectures \n 2.5 hours \n" }, { "code": null, "e": 13154, "s": 13137, "text": " University Code" }, { "code": null, "e": 13188, "s": 13154, "text": "\n 138 Lectures \n 9 hours \n" }, { "code": null, "e": 13203, "s": 13188, "text": " Bhrugen Patel" }, { "code": null, "e": 13210, "s": 13203, "text": " Print" }, { "code": null, "e": 13221, "s": 13210, "text": " Add Notes" } ]
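At render time, the tag helpers in the Edit form expand into plain HTML that the browser posts back to the server. The following is a rough sketch of what the label, input, and validation span for the Name property render to; the exact attribute set varies by framework version, and the value "Josh" is only illustrative, so treat this as an approximation rather than captured output:

```html
<form action="/Home/Edit/1" method="post">
   <div>
      <!-- asp-for="Name" fills in the for/id/name attributes and the label text -->
      <label for="Name">Name</label>
      <input type="text" id="Name" name="Name" value="Josh"
             data-val="true" data-val-required="The Name field is required." />
      <!-- asp-validation-for="Name" becomes a placeholder for validation messages -->
      <span class="field-validation-valid" data-valmsg-for="Name"
            data-valmsg-replace="true"></span>
   </div>
   <div>
      <input type="submit" value="Save" />
   </div>
</form>
```

This is why the HttpPost Edit action can bind an EmployeeEditViewModel: the posted field name ("Name") matches the property name on the view model.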
Area and Perimeter of Rectangle in PL/SQL - GeeksforGeeks
04 Oct, 2018

Prerequisite – PL/SQL introduction

In PL/SQL, groups of commands are arranged within a block. A block groups related declarations and statements. In the declare part, we declare variables, and between the begin and end parts, we perform the operations.

Given the values of the length and breadth, the task is to calculate the area and perimeter of the rectangle.

Examples:

Input: l = 2, b = 2
Output: Area = 4, Perimeter = 8

Input: l = 4, b = 8
Output: Area = 32, Perimeter = 24

Mathematical formulas:

Area of rectangle: area = length * breadth
Perimeter of rectangle: perimeter = 2 * (length + breadth)

Below is the required implementation:

-- Declaration statement
DECLARE
    -- Declaration of length and assigning value
    l NUMBER(4, 2) := 3;

    -- Declaration of breadth and assigning value
    b NUMBER(4, 2) := 7;

    -- Declaration of a variable for area of rectangle
    a NUMBER(4, 2);

    -- Declaration of a variable for perimeter
    p NUMBER(4, 2);

BEGIN
    -- Calculate area and perimeter
    a := l * b;
    p := 2 * (l + b);

    -- Display result
    dbms_output.Put_line('Area of the rectangle is ' || a);
    dbms_output.Put_line('Perimeter of the rectangle is ' || p);

END;
-- End program

Output:

Area of the rectangle is 21
Perimeter of the rectangle is 20
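The same formulas are easy to sanity-check outside the database. As a quick cross-check (plain Python rather than PL/SQL, so not part of the original example), the values used above reproduce the program's output:

```python
# Cross-check of the rectangle formulas used in the PL/SQL block above.
l, b = 3, 7              # length and breadth, as in the DECLARE section

area = l * b             # a := l * b
perimeter = 2 * (l + b)  # p := 2 * (l + b)

print("Area of the rectangle is", area)            # 21
print("Perimeter of the rectangle is", perimeter)  # 20
```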
Implementation of neural network from scratch using NumPy - GeeksforGeeks
07 Jul, 2021

A DNN (deep neural network) is a machine learning model inspired by the way the human brain works. A DNN is mainly used as a classification algorithm. In this article, we will look at the stepwise approach to implementing a basic DNN algorithm in NumPy (a Python library) from scratch.

The purpose of this article is to give beginners a sense of how a neural network works and of its implementation details. We are going to build a three-letter (A, B, C) classifier; for simplicity, we are going to create the letters A, B, C as NumPy arrays of 0s and 1s, and we are going to ignore the bias term related to each node.

Step 1: Creating the data set using NumPy arrays of 0s and 1s. As an image is a collection of pixel values in a matrix, we will create those matrices of pixels for A, B, C using 0 and 1.

#A
0 0 1 1 0 0
0 1 0 0 1 0 
1 1 1 1 1 1
1 0 0 0 0 1 
1 0 0 0 0 1

#B
0 1 1 1 1 0
0 1 0 0 1 0 
0 1 1 1 1 0
0 1 0 0 1 0
0 1 1 1 1 0

#C
0 1 1 1 1 0
0 1 0 0 0 0
0 1 0 0 0 0
0 1 0 0 0 0
0 1 1 1 1 0

#Labels for each Letter
A=[1, 0, 0]
B=[0, 1, 0]
C=[0, 0, 1]

Code:

Python3

# Creating data set

# A
a =[0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1,
    1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1]
# B
b =[0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1,
    1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0]
# C
c =[0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0,
    0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0]

# Creating labels
y =[[1, 0, 0],
    [0, 1, 0],
    [0, 0, 1]]

Step 2: Visualization of the data set.

Python3

import numpy as np
import matplotlib.pyplot as plt

# visualizing the data, plotting A
plt.imshow(np.array(a).reshape(5, 6))
plt.show()

Output:

Step 3: As the data set is in the form of lists, we will convert it into NumPy arrays.

Python3

# converting data and labels into numpy arrays

"""
Convert the matrix of 0 and 1 into one hot vectors
so that we can directly feed them to the neural network.
These vectors are then stored in a list x.
"""

x =[np.array(a).reshape(1, 30), np.array(b).reshape(1, 30),
    np.array(c).reshape(1, 30)]

# Labels are also converted into a NumPy array
y = np.array(y)

print(x, "\n\n", y)

Output:

[array([[0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1]]), 
 array([[0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0]]),
 array([[0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0]])] 

[[1 0 0]
 [0 1 0]
 [0 0 1]]

Step 4: Defining the architecture or structure of the deep neural network. This includes deciding the number of layers and the number of nodes in each layer. Our neural network is going to have the following structure:

1st layer: Input layer (1, 30)
2nd layer: Hidden layer (1, 5)
3rd layer: Output layer (1, 3)

Step 5: Declaring and defining all the functions needed to build the deep neural network.
Python3

# activation function
def sigmoid(x):
    return(1/(1 + np.exp(-x)))

# Creating the feed forward neural network
# 1 input layer (1, 30)
# 1 hidden layer (1, 5)
# 1 output layer (1, 3)

def f_forward(x, w1, w2):
    # hidden layer
    z1 = x.dot(w1)    # input from layer 1
    a1 = sigmoid(z1)  # output of layer 2

    # Output layer
    z2 = a1.dot(w2)   # input of out layer
    a2 = sigmoid(z2)  # output of out layer
    return(a2)

# initializing the weights randomly
def generate_wt(x, y):
    l =[]
    for i in range(x * y):
        l.append(np.random.randn())
    return(np.array(l).reshape(x, y))

# for loss we will be using mean square error (MSE)
def loss(out, Y):
    s =(np.square(out-Y))
    s = np.sum(s)/len(y)
    return(s)

# Back propagation of error
def back_prop(x, y, w1, w2, alpha):
    # hidden layer
    z1 = x.dot(w1)    # input from layer 1
    a1 = sigmoid(z1)  # output of layer 2

    # Output layer
    z2 = a1.dot(w2)   # input of out layer
    a2 = sigmoid(z2)  # output of out layer

    # error in output layer
    d2 =(a2-y)
    d1 = np.multiply((w2.dot((d2.transpose()))).transpose(),
                     (np.multiply(a1, 1-a1)))

    # Gradient for w1 and w2
    w1_adj = x.transpose().dot(d1)
    w2_adj = a1.transpose().dot(d2)

    # Updating parameters
    w1 = w1-(alpha*(w1_adj))
    w2 = w2-(alpha*(w2_adj))
    return(w1, w2)

def train(x, Y, w1, w2, alpha = 0.01, epoch = 10):
    acc =[]
    losss =[]
    for j in range(epoch):
        l =[]
        for i in range(len(x)):
            out = f_forward(x[i], w1, w2)
            l.append((loss(out, Y[i])))
            w1, w2 = back_prop(x[i], y[i], w1, w2, alpha)
        print("epochs:", j + 1, "======== acc:", (1-(sum(l)/len(x)))*100)
        acc.append((1-(sum(l)/len(x)))*100)
        losss.append(sum(l)/len(x))
    return(acc, losss, w1, w2)

def predict(x, w1, w2):
    Out = f_forward(x, w1, w2)
    maxm = 0
    k = 0
    for i in range(len(Out[0])):
        if(maxm<Out[0][i]):
            maxm = Out[0][i]
            k = i
    if(k == 0):
        print("Image is of letter A.")
    elif(k == 1):
        print("Image is of letter B.")
    else:
        print("Image is of letter C.")
    plt.imshow(x.reshape(5, 6))
    plt.show()

Step 6: Initializing the weights. As the neural network has 3 layers, there will be 2 weight matrices associated with
it. The size of each matrix depends on the number of nodes in the two layers it connects.

Code:

Python3

w1 = generate_wt(30, 5)
w2 = generate_wt(5, 3)

print(w1, "\n\n", w2)

Output:

[[ 0.75696605 -0.15959223 -1.43034587  0.17885107 -0.75859483]
 [-0.22870119  1.05882236 -0.15880572  0.11692122  0.58621482]
 [ 0.13926738  0.72963505  0.36050426  0.79866465 -0.17471235]
 [ 1.00708386  0.68803291  0.14110839 -0.7162728   0.69990794]
 [-0.90437131  0.63977434 -0.43317212  0.67134205 -0.9316605 ]
 [ 0.15860963 -1.17967773 -0.70747245  0.22870289  0.00940404]
 [ 1.40511247 -1.29543461  1.41613069 -0.97964787 -2.86220777]
 [ 0.66293564 -1.94013093 -0.78189238  1.44904122 -1.81131482]
 [ 0.4441061  -0.18751726 -2.58252033  0.23076863  0.12182448]
 [-0.60061323  0.39855851 -0.55612255  2.0201934   0.70525187]
 [-1.82925367  1.32004437  0.03226202 -0.79073523 -0.20750692]
 [-0.25756077 -1.37543232 -0.71369897 -0.13556156 -0.34918718]
 [ 0.26048374  2.49871398  1.01139237 -1.73242425 -0.67235417]
 [ 0.30351062 -0.45425039 -0.84046541 -0.60435352 -0.06281934]
 [ 0.43562048  0.66297676  1.76386981 -1.11794675  2.2012095 ]
 [-1.11051533  0.3462945   0.19136933  0.19717914 -1.78323674]
 [ 1.1219638  -0.04282422 -0.0142484  -0.73210071 -0.58364205]
 [-1.24046375  0.23368434  0.62323707 -1.66265946 -0.87481714]
 [ 0.19484897  0.12629217 -1.01575241 -0.47028007 -0.58278292]
 [ 0.16703418 -0.50993283 -0.90036661  2.33584006  0.96395524]
 [-0.72714199  0.39000914 -1.3215123   0.92744032 -1.44239943]
 [-2.30234278 -0.52677889 -0.09759073 -0.63982215 -0.51416013]
 [ 1.25338899 -0.58950956 -0.86009159 -0.7752274   2.24655146]
 [ 0.07553743 -1.2292084   0.46184872 -0.56390328  0.15901276]
 [-0.52090565 -2.42754589 -0.78354152 -0.44405857  1.16228247]
 [-1.21805132 -0.40358444 -0.65942185  0.76753095 -0.19664978]
 [-1.5866041   1.17100962 -1.50840821 -0.61750557  1.56003127]
 [ 1.33045269 -0.85811272  1.88869376  0.79491455 -0.96199293]
 [-2.34456987  0.1005953  -0.99376025 -0.94402235 -0.3078695 ]
 [ 0.93611909  0.58522915 -0.15553566 -1.03352997 -2.7210093 ]] 

 [[-0.50650286
-0.41168428 -0.7107231 ] [ 1.86861492 -0.36446849 0.97721539] [-0.12792125 0.69578056 -0.6639736 ] [ 0.58190462 -0.98941614 0.40932723] [ 0.89758789 -0.49250365 -0.05023684]] Step 7 : Training the model. Python3 """The arguments of train function are data set list x,correct labels y, weights w1, w2, learning rate = 0.1,no of epochs or iteration.The function will return thematrix of accuracy and loss and also the matrix oftrained weights w1, w2""" acc, losss, w1, w2 = train(x, y, w1, w2, 0.1, 100) Output: epochs: 1 ======== acc: 59.24962411875523 epochs: 2 ======== acc: 63.68540644266716 epochs: 3 ======== acc: 68.23850165512243 epochs: 4 ======== acc: 71.30325758406262 epochs: 5 ======== acc: 73.52710796040974 epochs: 6 ======== acc: 75.32860090824263 epochs: 7 ======== acc: 76.8094120430158 epochs: 8 ======== acc: 78.00977196942078 epochs: 9 ======== acc: 78.97728263498026 epochs: 10 ======== acc: 79.76587293092753 epochs: 11 ======== acc: 80.42246589416287 epochs: 12 ======== acc: 80.98214842153129 epochs: 13 ======== acc: 81.4695736928823 epochs: 14 ======== acc: 81.90184308791194 epochs: 15 ======== acc: 82.29094665963427 epochs: 16 ======== acc: 82.64546024973251 epochs: 17 ======== acc: 82.97165532985433 epochs: 18 ======== acc: 83.27421706795944 epochs: 19 ======== acc: 83.55671426703763 epochs: 20 ======== acc: 83.82191341206628 epochs: 21 ======== acc: 84.07199359659367 epochs: 22 ======== acc: 84.30869706017322 epochs: 23 ======== acc: 84.53343682891021 epochs: 24 ======== acc: 84.74737503832276 epochs: 25 ======== acc: 84.95148074055622 epochs: 26 ======== acc: 85.1465730591422 epochs: 27 ======== acc: 85.33335370190892 epochs: 28 ======== acc: 85.51243164226796 epochs: 29 ======== acc: 85.68434197894798 epochs: 30 ======== acc: 85.84956043619462 epochs: 31 ======== acc: 86.0085145818298 epochs: 32 ======== acc: 86.16159256503643 epochs: 33 ======== acc: 86.30914997510234 epochs: 34 ======== acc: 86.45151527443966 epochs: 35 ======== acc: 
86.58899414916453 epochs: 36 ======== acc: 86.72187303817682 epochs: 37 ======== acc: 86.85042203982091 epochs: 38 ======== acc: 86.97489734865094 epochs: 39 ======== acc: 87.09554333976325 epochs: 40 ======== acc: 87.21259439177474 epochs: 41 ======== acc: 87.32627651970255 epochs: 42 ======== acc: 87.43680887413676 epochs: 43 ======== acc: 87.54440515197342 epochs: 44 ======== acc: 87.64927495564211 epochs: 45 ======== acc: 87.75162513147157 epochs: 46 ======== acc: 87.85166111297174 epochs: 47 ======== acc: 87.94958829083211 epochs: 48 ======== acc: 88.0456134278342 epochs: 49 ======== acc: 88.13994613312185 epochs: 50 ======== acc: 88.2328004057654 epochs: 51 ======== acc: 88.32439625156803 epochs: 52 ======== acc: 88.4149613686817 epochs: 53 ======== acc: 88.5047328856618 epochs: 54 ======== acc: 88.59395911861766 epochs: 55 ======== acc: 88.68290129028868 epochs: 56 ======== acc: 88.77183512103412 epochs: 57 ======== acc: 88.86105215751232 epochs: 58 ======== acc: 88.95086064702116 epochs: 59 ======== acc: 89.04158569269322 epochs: 60 ======== acc: 89.13356833768444 epochs: 61 ======== acc: 89.22716312996127 epochs: 62 ======== acc: 89.32273362510695 epochs: 63 ======== acc: 89.42064521532092 epochs: 64 ======== acc: 89.52125466556964 epochs: 65 ======== acc: 89.62489584606081 epochs: 66 ======== acc: 89.73186143973956 epochs: 67 ======== acc: 89.84238093800867 epochs: 68 ======== acc: 89.95659604815005 epochs: 69 ======== acc: 90.07453567327377 epochs: 70 ======== acc: 90.19609371190103 epochs: 71 ======== acc: 90.32101373021872 epochs: 72 ======== acc: 90.44888465704626 epochs: 73 ======== acc: 90.57915066786961 epochs: 74 ======== acc: 90.7111362751668 epochs: 75 ======== acc: 90.84408471463895 epochs: 76 ======== acc: 90.97720484616241 epochs: 77 ======== acc: 91.10971995033672 epochs: 78 ======== acc: 91.24091164815938 epochs: 79 ======== acc: 91.37015369432306 epochs: 80 ======== acc: 91.49693294991012 epochs: 81 ======== acc: 91.62085750782504 epochs: 
82 ======== acc: 91.74165396819595
epochs: 83 ======== acc: 91.8591569057493
epochs: 84 ======== acc: 91.97329371114765
epochs: 85 ======== acc: 92.0840675282122
epochs: 86 ======== acc: 92.19154028777587
epochs: 87 ======== acc: 92.29581711003155
epochs: 88 ======== acc: 92.3970327467751
epochs: 89 ======== acc: 92.49534030435096
epochs: 90 ======== acc: 92.59090221343706
epochs: 91 ======== acc: 92.68388325695001
epochs: 92 ======== acc: 92.77444539437016
epochs: 93 ======== acc: 92.86274409885533
epochs: 94 ======== acc: 92.94892593090393
epochs: 95 ======== acc: 93.03312709510452
epochs: 96 ======== acc: 93.11547275630565
epochs: 97 ======== acc: 93.19607692356153
epochs: 98 ======== acc: 93.27504274176297
epochs: 99 ======== acc: 93.35246306044819
epochs: 100 ======== acc: 93.42842117607569

Step 8: Plotting the graphs of loss and accuracy with respect to the number of epochs (iterations).

Python3

import matplotlib.pyplot as plt1

# plotting accuracy
plt1.plot(acc)
plt1.ylabel('Accuracy')
plt1.xlabel("Epochs:")
plt1.show()

# plotting loss
plt1.plot(losss)
plt1.ylabel('Loss')
plt1.xlabel("Epochs:")
plt1.show()

Output:

Python3

# the trained weights are
print(w1, "\n", w2)

Output:

[[-0.23769169 -0.1555992   0.81616823  0.1219152  -0.69572168]
 [ 0.36399972  0.37509723  1.5474053   0.85900477 -1.14106725]
 [ 1.0477069   0.13061485  0.16802893 -1.04450602 -2.76037811]
 [-0.83364475 -0.63609797  0.61527206 -0.42998096  0.248886  ]
 [ 0.16293725 -0.49443901  0.47638257 -0.89786531 -1.63778409]
 [ 0.10750411 -1.74967435  0.03086382  0.9906433  -0.9976104 ]
 [ 0.48454172 -0.68739134  0.78150251 -0.1220987   0.68659854]
 [-1.53100416 -0.33999119 -1.07657716  0.81305349 -0.79595135]
 [ 2.06592829  1.25309796 -2.03200199  0.03984423 -0.76893089]
 [-0.08285231 -0.33820853 -1.08239104 -0.22017196 -0.37606984]
 [-0.24784192 -0.36731598 -0.58394944 -0.0434036   0.58383408]
 [ 0.28121367 -1.84909298 -0.97302413  1.58393025  0.24571332]
 [-0.21185018  0.29358204 -0.79433164 -0.20634606 -0.69157617]
 [ 0.13666222 -0.31704319  0.03924342  0.54618961 -1.72226768]
 [ 1.06043825 -1.02009526 -1.39511479 -0.98141073  0.78304473]
 [ 1.44167174 -2.17432498  0.95281672 -0.76748692  1.16231747]
 [ 0.25971927 -0.59872416  1.01291689 -1.45200634 -0.72575161]
 [-0.27036828 -1.36721091 -0.43858778 -0.78527025 -0.36159359]
 [ 0.91786563 -0.97465418  1.26518387 -0.21425247 -0.25097618]
 [-0.00964162 -1.05122248 -1.2747124   1.65647842  1.15216675]
 [ 2.63067561 -1.3626307   2.44355269 -0.87960091 -0.39903453]
 [ 0.30513627 -0.77390359 -0.57135017  0.72661218  1.44234861]
 [ 2.49165837 -0.77744044 -0.14062449 -1.6659343   0.27033269]
 [ 1.30530805 -0.93488645 -0.66026013 -0.2839123  -1.21397584]
 [ 0.41042422  0.20086176 -2.07552916 -0.12391564 -0.67647955]
 [ 0.21339152  0.79963834  1.19499535 -2.17004581 -1.03632954]
 [-1.2032222   0.46226132 -0.68314898  1.27665578  0.69930683]
 [ 0.11239785 -2.19280608  1.36181772 -0.36691734 -0.32239543]
 [-1.62958342 -0.55989702  1.62686431  1.59839946 -0.08719492]
 [ 1.09518451 -1.9542822  -1.18507834 -0.5537991  -0.28901241]] 

 [[ 1.52837185 -0.33038873 -3.45127838]
 [ 1.0758812  -0.41879112 -1.00548735]
 [-3.59476021  0.55176444  1.14839625]
 [ 1.07525643 -1.6250444   0.77552561]
 [ 0.82785787 -1.79602953  1.15544384]]

Step 9: Making a prediction.

Python3

"""
The predict function takes the following arguments:
1) image matrix
2) w1 trained weights
3) w2 trained weights
"""
predict(x[1], w1, w2)

Output:

Image is of letter B.
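To recap how a single sample flows through this architecture, here is a minimal shape check of the forward pass. It uses fresh random weights rather than the trained w1, w2 above, so only the shapes — not the values — are meaningful:

```python
import numpy as np

def sigmoid(z):
    # same activation as in the article
    return 1 / (1 + np.exp(-z))

rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=(1, 30))   # one 5x6 letter, flattened
w1 = rng.standard_normal((30, 5))      # input layer -> hidden layer
w2 = rng.standard_normal((5, 3))       # hidden layer -> output layer

a1 = sigmoid(x.dot(w1))                # hidden activations, shape (1, 5)
a2 = sigmoid(a1.dot(w2))               # class scores, shape (1, 3)

print(a1.shape, a2.shape)              # (1, 5) (1, 3)
```

The (1, 30) input collapses to 5 hidden activations and then to 3 class scores, one per letter, each squashed into (0, 1) by the sigmoid — which is exactly what predict() then scans for the maximum.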
Spiral Matrix in C++
Suppose we have a matrix and we have to print the matrix elements in a spiral way. Starting from the first row, print its whole content, then follow the last column downwards, then the last row backwards, and so on, printing the elements in a spiral fashion. So if the matrix is like −

Then the output will be like [1 2 3 4 5 6 12 18 17 16 15 14 13 7 8 9 10 11]

To solve this, we will follow these steps −

currRow := 0 and currCol := 0
while currRow < m and currCol < n
   for i in range currCol to n-1, display mat[currRow, i]
   increase currRow by 1
   for i in range currRow to m-1, display mat[i, n-1]
   decrease n by 1
   if currRow < m, then
      for i := n-1 down to currCol, display mat[m-1, i]
      decrease m by 1
   if currCol < n, then
      for i := m-1 down to currRow, display mat[i, currCol]
      increase currCol by 1

Let us see the following implementation to get a better understanding −

#include <iostream>
#define ROW 3
#define COL 6
using namespace std;
int array[ROW][COL] = {{1, 2, 3, 4, 5, 6},
   {7, 8, 9, 10, 11, 12},
   {13, 14, 15, 16, 17, 18}};
void dispSpiral(int m, int n){
   int i, currRow = 0, currCol = 0;
   while (currRow < m && currCol < n){ //loop on the shrinking bounds, not the fixed ROW/COL
      for (i = currCol; i < n; i++){ //print the first remaining row normally
         cout << array[currRow][i] << " ";
      }
      currRow++; //point to next row
      for (i = currRow; i < m; ++i){ //print the last column
         cout << array[i][n-1] << " ";
      }
      n--; //the (n-1)th column is now the last column
      if (currRow < m){ //when currRow is in the range, print the last row
         for (i = n-1; i >= currCol; --i){
            cout << array[m-1][i] << " ";
         }
         m--; //decrease the row range
      }
      if (currCol < n){ //when currCol is in the range, print the first column
         for (i = m-1; i >= currRow; --i){
            cout << array[i][currCol] << " ";
         }
         currCol++;
      }
   }
}
int main(){
   dispSpiral(ROW, COL);
}

Input:

[[1,2,3,4,5,6]
[7,8,9,10,11,12]
[13,14,15,16,17,18]]

Output:

1 2 3 4 5 6 12 18 17 16 15 14 13 7 8 9 10 11
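The same boundary-shrinking traversal can be sketched for an arbitrary matrix in Python. This is a minimal illustrative version, not part of the original article, using half-open row and column bounds instead of the currRow/currCol bookkeeping above:

```python
def spiral_order(mat):
    """Return the elements of a 2-D list in spiral order,
    shrinking the active row/column bounds after each side is consumed."""
    if not mat or not mat[0]:
        return []
    result = []
    top, bottom = 0, len(mat)        # active row range: [top, bottom)
    left, right = 0, len(mat[0])     # active column range: [left, right)
    while top < bottom and left < right:
        for i in range(left, right):                  # top row, left to right
            result.append(mat[top][i])
        top += 1
        for i in range(top, bottom):                  # right column, top to bottom
            result.append(mat[i][right - 1])
        right -= 1
        if top < bottom:                              # bottom row, right to left
            for i in range(right - 1, left - 1, -1):
                result.append(mat[bottom - 1][i])
            bottom -= 1
        if left < right:                              # left column, bottom to top
            for i in range(bottom - 1, top - 1, -1):
                result.append(mat[i][left])
            left += 1
    return result

mat = [[1, 2, 3, 4, 5, 6],
       [7, 8, 9, 10, 11, 12],
       [13, 14, 15, 16, 17, 18]]
print(spiral_order(mat))
# [1, 2, 3, 4, 5, 6, 12, 18, 17, 16, 15, 14, 13, 7, 8, 9, 10, 11]
```

The two `if` guards matter: without them, a single remaining row or column would be printed twice.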
MySQL - ADDTIME() Function
The DATE, DATETIME and TIMESTAMP datatypes in MySQL are used to store the date, date and time, time stamp values respectively. Where a time stamp is a numerical value representing the number of milliseconds from '1970-01-01 00:00:01' UTC (epoch) to the specified time. MySQL provides a set of functions to manipulate these values. The MYSQL ADDTIME() function is used to add the specified time interval to a date time or, time value. Following is the syntax of the above function – ADDTIME(expr1,expr2) where, expr1 is the expression representing the datetime or time. expr1 is the expression representing the datetime or time. expr2 is the expression representing the time interval to be added. expr2 is the expression representing the time interval to be added. Following example demonstrates the usage of the ADDTIME() function – mysql> SELECT ADDTIME('10:40:32.88558', '06:04:01.222222'); +----------------------------------------------+ | ADDTIME('10:40:32.88558', '06:04:01.222222') | +----------------------------------------------+ | 16:44:34.107802 | +----------------------------------------------+ 1 row in set (0.00 sec) Following is another example of this function – mysql> SELECT ADDTIME('06:23:15.99999', '12:25:11.11111'); +---------------------------------------------+ | ADDTIME('06:23:15.99999', '12:25:11.11111') | +---------------------------------------------+ | 18:48:27.111100 | +---------------------------------------------+ 1 row in set (0.00 sec) In the following example we are passing DATETIME value for time – mysql> SELECT ADDTIME('2018-05-23 05:40:32.88558', '06:04:01.222222'); +---------------------------------------------------------+ | ADDTIME('2018-05-23 05:40:32.88558', '06:04:01.222222') | +---------------------------------------------------------+ | 2018-05-23 11:44:34.107802 | +---------------------------------------------------------+ 1 row in set (0.00 sec) In the following example we are passing the result of the CURTIME() function as the time 
interval — mysql> SELECT ADDTIME('2018-05-23 05:40:32.88558', CURTIME()); +-------------------------------------------------+ | ADDTIME('2018-05-23 05:40:32.88558', CURTIME()) | +-------------------------------------------------+ | 2018-05-23 17:58:41.885580 | +-------------------------------------------------+ 1 row in set (0.00 sec) We can also pass negative values as arguments to this function – mysql> SELECT ADDTIME('2018-05-23 05:40:32.88558', '-06:04:01.222222'); +----------------------------------------------------------+ | ADDTIME('2018-05-23 05:40:32.88558', '-06:04:01.222222') | +----------------------------------------------------------+ | 2018-05-22 23:36:31.663358 | +----------------------------------------------------------+ 1 row in set (0.00 sec) mysql> SELECT ADDTIME('06:23:15.99999', '-02:25:11.11111'); +----------------------------------------------+ | ADDTIME('06:23:15.99999', '-02:25:11.11111') | +----------------------------------------------+ | 03:58:04.888880 | +----------------------------------------------+ 1 row in set (0.00 sec) Let us create another table with name Sales in MySQL database using CREATE statement as follows – mysql> CREATE TABLE sales( ID INT, ProductName VARCHAR(255), CustomerName VARCHAR(255), DispatchDate date, DispatchTime time, Price INT, Location VARCHAR(255) ); Query OK, 0 rows affected (2.22 sec) Now, we will insert 5 records in Sales table using INSERT statements − insert into sales values (1, 'Key-Board', 'Raja', DATE('2019-09-01'), TIME('11:00:00'), 7000, 'Hyderabad'); insert into sales values (2, 'Earphones', 'Roja', DATE('2019-05-01'), TIME('11:00:00'), 2000, 'Vishakhapatnam'); insert into sales values (3, 'Mouse', 'Puja', DATE('2019-03-01'), TIME('10:59:59'), 3000, 'Vijayawada'); insert into sales values (4, 'Mobile', 'Vanaja', DATE('2019-03-01'), TIME('10:10:52'), 9000, 'Chennai'); insert into sales values (5, 'Headset', 'Jalaja', DATE('2019-04-06'), TIME('11:08:59'), 6000, 'Goa'); Following query adds time 
interval to the values in the column named DispatchTime — mysql> SELECT ProductName, CustomerName, DispatchDate, DispatchTime, Price, ADDTIME(DispatchTime,'12:45:50') FROM Sales; +-------------+--------------+--------------+--------------+-------+----------------------------------+ | ProductName | CustomerName | DispatchDate | DispatchTime | Price | ADDTIME(DispatchTime,'12:45:50') | +-------------+--------------+--------------+--------------+-------+----------------------------------+ | Key-Board | Raja | 2019-09-01 | 11:00:00 | 7000 | 23:45:50 | | Earphones | Roja | 2019-05-01 | 11:00:00 | 2000 | 23:45:50 | | Mouse | Puja | 2019-03-01 | 10:59:59 | 3000 | 23:45:49 | | Mobile | Vanaja | 2019-03-01 | 10:10:52 | 9000 | 22:56:42 | | Headset | Jalaja | 2019-04-06 | 11:08:59 | 6000 | 23:54:49 | +-------------+--------------+--------------+--------------+-------+----------------------------------+ 5 rows in set (0.00 sec) Suppose we have created a table named dispatches_data with 5 records in it using the following queries – mysql> CREATE TABLE dispatches_data( ProductName VARCHAR(255), CustomerName VARCHAR(255), DispatchTimeStamp timestamp, Price INT, Location VARCHAR(255) ); insert into dispatches_data values('Key-Board', 'Raja', TIMESTAMP('2019-05-04', '15:02:45'), 7000, 'Hyderabad'); insert into dispatches_data values('Earphones', 'Roja', TIMESTAMP('2019-06-26', '14:13:12'), 2000, 'Vishakhapatnam'); insert into dispatches_data values('Mouse', 'Puja', TIMESTAMP('2019-12-07', '07:50:37'), 3000, 'Vijayawada'); insert into dispatches_data values('Mobile', 'Vanaja' , TIMESTAMP ('2018-03-21', '16:00:45'), 9000, 'Chennai'); insert into dispatches_data values('Headset', 'Jalaja' , TIMESTAMP('2018-12-30', '10:49:27'), 6000, 'Goa'); Following query adds time interval to the column named DispatchTimeStamp — mysql> SELECT ProductName, CustomerName, DispatchTimeStamp, Price, ADDTIME(DispatchTimeStamp, '08:25:46') FROM dispatches_data; 
+-------------+--------------+---------------------+-------+----------------------------------------+ | ProductName | CustomerName | DispatchTimeStamp | Price | ADDTIME(DispatchTimeStamp, '08:25:46') | +-------------+--------------+---------------------+-------+----------------------------------------+ | Key-Board | Raja | 2019-05-04 15:02:45 | 7000 | 2019-05-04 23:28:31 | | Earphones | Roja | 2019-06-26 14:13:12 | 2000 | 2019-06-26 22:38:58 | | Mouse | Puja | 2019-12-07 07:50:37 | 3000 | 2019-12-07 16:16:23 | | Mobile | Vanaja | 2018-03-21 16:00:45 | 9000 | 2018-03-22 00:26:31 | | Headset | Jalaja | 2018-12-30 10:49:27 | 6000 | 2018-12-30 19:15:13 | +-------------+--------------+---------------------+-------+----------------------------------------+ 5 rows in set (0.21 sec) Suppose we have created a table named SubscriberDetails with 5 records in it using the following queries – mysql> CREATE TABLE SubscriberDetails ( SubscriberName VARCHAR(255), PackageName VARCHAR(255), SubscriptionTimeStamp timestamp ); insert into SubscriberDetails values('Ram', 'Premium', TimeStamp('2020-10-21 20:53:49')); insert into SubscriberDetails values('Rahman', 'Basic', TimeStamp('2020-11-26 10:13:19')); insert into SubscriberDetails values('Robert', 'Moderate', TimeStamp('2021-03-07 05:43:20')); insert into SubscriberDetails values('Radha', 'Basic', TimeStamp('2021-02-21 16:36:39')); insert into SubscriberDetails values('Rajiya', 'Premium', TimeStamp('2021-01-30 12:45:45')); Following query adds time interval to the SubscriptionTimeStamp values of all the records – mysql> SELECT SubscriberName, PackageName, SubscriptionTimeStamp, ADDTIME(SubscriptionTimeStamp, '10:05:20') FROM SubscriberDetails; +----------------+-------------+-----------------------+--------------------------------------------+ | SubscriberName | PackageName | SubscriptionTimeStamp | ADDTIME(SubscriptionTimeStamp, '10:05:20') | 
+----------------+-------------+-----------------------+--------------------------------------------+ | Ram | Premium | 2020-10-21 20:53:49 | 2020-10-22 06:59:09 | | Rahman | Basic | 2020-11-26 10:13:19 | 2020-11-26 20:18:39 | | Robert | Moderate | 2021-03-07 05:43:20 | 2021-03-07 15:48:40 | | Radha | Basic | 2021-02-21 16:36:39 | 2021-02-22 02:41:59 | | Rajiya | Premium | 2021-01-30 12:45:45 | 2021-01-30 22:51:05 | +----------------+-------------+-----------------------+--------------------------------------------+ 5 rows in set (0.00 sec)
[ { "code": null, "e": 2664, "s": 2333, "text": "The DATE, DATETIME and TIMESTAMP datatypes in MySQL are used to store the date, date and time, time stamp values respectively. Where a time stamp is a numerical value representing the number of milliseconds from '1970-01-01 00:00:01' UTC (epoch) to the specified time. MySQL provides a set of functions to manipulate these values." }, { "code": null, "e": 2767, "s": 2664, "text": "The MYSQL ADDTIME() function is used to add the specified time interval to a date time or, time value." }, { "code": null, "e": 2815, "s": 2767, "text": "Following is the syntax of the above function –" }, { "code": null, "e": 2837, "s": 2815, "text": "ADDTIME(expr1,expr2)\n" }, { "code": null, "e": 2844, "s": 2837, "text": "where," }, { "code": null, "e": 2903, "s": 2844, "text": "expr1 is the expression representing the datetime or time." }, { "code": null, "e": 2962, "s": 2903, "text": "expr1 is the expression representing the datetime or time." }, { "code": null, "e": 3030, "s": 2962, "text": "expr2 is the expression representing the time interval to be added." }, { "code": null, "e": 3098, "s": 3030, "text": "expr2 is the expression representing the time interval to be added." 
}, { "code": null, "e": 3167, "s": 3098, "text": "Following example demonstrates the usage of the ADDTIME() function –" }, { "code": null, "e": 3496, "s": 3167, "text": "mysql> SELECT ADDTIME('10:40:32.88558', '06:04:01.222222');\n+----------------------------------------------+\n| ADDTIME('10:40:32.88558', '06:04:01.222222') |\n+----------------------------------------------+\n| 16:44:34.107802 |\n+----------------------------------------------+\n1 row in set (0.00 sec)" }, { "code": null, "e": 3544, "s": 3496, "text": "Following is another example of this function –" }, { "code": null, "e": 3867, "s": 3544, "text": "mysql> SELECT ADDTIME('06:23:15.99999', '12:25:11.11111');\n+---------------------------------------------+\n| ADDTIME('06:23:15.99999', '12:25:11.11111') |\n+---------------------------------------------+\n| 18:48:27.111100 |\n+---------------------------------------------+\n1 row in set (0.00 sec)" }, { "code": null, "e": 3933, "s": 3867, "text": "In the following example we are passing DATETIME value for time –" }, { "code": null, "e": 4328, "s": 3933, "text": "mysql> SELECT ADDTIME('2018-05-23 05:40:32.88558', '06:04:01.222222');\n+---------------------------------------------------------+\n| ADDTIME('2018-05-23 05:40:32.88558', '06:04:01.222222') |\n+---------------------------------------------------------+\n| 2018-05-23 11:44:34.107802 |\n+---------------------------------------------------------+\n1 row in set (0.00 sec)" }, { "code": null, "e": 4428, "s": 4328, "text": "In the following example we are passing the result of the CURTIME() function as the time interval —" }, { "code": null, "e": 4775, "s": 4428, "text": "mysql> SELECT ADDTIME('2018-05-23 05:40:32.88558', CURTIME());\n+-------------------------------------------------+\n| ADDTIME('2018-05-23 05:40:32.88558', CURTIME()) |\n+-------------------------------------------------+\n| 2018-05-23 17:58:41.885580 |\n+-------------------------------------------------+\n1 row in set (0.00 
sec)" }, { "code": null, "e": 4840, "s": 4775, "text": "We can also pass negative values as arguments to this function –" }, { "code": null, "e": 5570, "s": 4840, "text": "mysql> SELECT ADDTIME('2018-05-23 05:40:32.88558', '-06:04:01.222222');\n+----------------------------------------------------------+\n| ADDTIME('2018-05-23 05:40:32.88558', '-06:04:01.222222') |\n+----------------------------------------------------------+\n| 2018-05-22 23:36:31.663358 |\n+----------------------------------------------------------+\n1 row in set (0.00 sec)\nmysql> SELECT ADDTIME('06:23:15.99999', '-02:25:11.11111');\n+----------------------------------------------+\n| ADDTIME('06:23:15.99999', '-02:25:11.11111') |\n+----------------------------------------------+\n| 03:58:04.888880 |\n+----------------------------------------------+\n1 row in set (0.00 sec)" }, { "code": null, "e": 5668, "s": 5570, "text": "Let us create another table with name Sales in MySQL database using CREATE statement as follows –" }, { "code": null, "e": 5874, "s": 5668, "text": "mysql> CREATE TABLE sales(\n\tID INT,\n\tProductName VARCHAR(255),\n\tCustomerName VARCHAR(255),\n\tDispatchDate date,\n\tDispatchTime time,\n\tPrice INT,\n\tLocation VARCHAR(255)\n);\nQuery OK, 0 rows affected (2.22 sec)" }, { "code": null, "e": 5945, "s": 5874, "text": "Now, we will insert 5 records in Sales table using INSERT statements −" }, { "code": null, "e": 6478, "s": 5945, "text": "insert into sales values (1, 'Key-Board', 'Raja', DATE('2019-09-01'), TIME('11:00:00'), 7000, 'Hyderabad');\ninsert into sales values (2, 'Earphones', 'Roja', DATE('2019-05-01'), TIME('11:00:00'), 2000, 'Vishakhapatnam');\ninsert into sales values (3, 'Mouse', 'Puja', DATE('2019-03-01'), TIME('10:59:59'), 3000, 'Vijayawada');\ninsert into sales values (4, 'Mobile', 'Vanaja', DATE('2019-03-01'), TIME('10:10:52'), 9000, 'Chennai');\ninsert into sales values (5, 'Headset', 'Jalaja', DATE('2019-04-06'), TIME('11:08:59'), 6000, 'Goa');" }, { 
"code": null, "e": 6562, "s": 6478, "text": "Following query adds time interval to the values in the column named DispatchTime —" }, { "code": null, "e": 7644, "s": 6562, "text": "mysql> SELECT ProductName, CustomerName, DispatchDate, DispatchTime, Price, ADDTIME(DispatchTime,'12:45:50') FROM Sales;\n+-------------+--------------+--------------+--------------+-------+----------------------------------+\n| ProductName | CustomerName | DispatchDate | DispatchTime | Price | ADDTIME(DispatchTime,'12:45:50') |\n+-------------+--------------+--------------+--------------+-------+----------------------------------+\n| Key-Board | Raja | 2019-09-01 | 11:00:00 | 7000 | 23:45:50 |\n| Earphones | Roja | 2019-05-01 | 11:00:00 | 2000 | 23:45:50 |\n| Mouse | Puja | 2019-03-01 | 10:59:59 | 3000 | 23:45:49 |\n| Mobile | Vanaja | 2019-03-01 | 10:10:52 | 9000 | 22:56:42 |\n| Headset | Jalaja | 2019-04-06 | 11:08:59 | 6000 | 23:54:49 |\n+-------------+--------------+--------------+--------------+-------+----------------------------------+\n5 rows in set (0.00 sec)" }, { "code": null, "e": 7749, "s": 7644, "text": "Suppose we have created a table named dispatches_data with 5 records in it using the following queries –" }, { "code": null, "e": 8470, "s": 7749, "text": "mysql> CREATE TABLE dispatches_data(\n\tProductName VARCHAR(255),\n\tCustomerName VARCHAR(255),\n\tDispatchTimeStamp timestamp,\n\tPrice INT,\n\tLocation VARCHAR(255)\n);\ninsert into dispatches_data values('Key-Board', 'Raja', TIMESTAMP('2019-05-04', '15:02:45'), 7000, 'Hyderabad');\ninsert into dispatches_data values('Earphones', 'Roja', TIMESTAMP('2019-06-26', '14:13:12'), 2000, 'Vishakhapatnam');\ninsert into dispatches_data values('Mouse', 'Puja', TIMESTAMP('2019-12-07', '07:50:37'), 3000, 'Vijayawada');\ninsert into dispatches_data values('Mobile', 'Vanaja' , TIMESTAMP ('2018-03-21', '16:00:45'), 9000, 'Chennai');\ninsert into dispatches_data values('Headset', 'Jalaja' , TIMESTAMP('2018-12-30', '10:49:27'), 6000, 
'Goa');" }, { "code": null, "e": 8545, "s": 8470, "text": "Following query adds time interval to the column named DispatchTimeStamp —" }, { "code": null, "e": 9616, "s": 8545, "text": "mysql> SELECT ProductName, CustomerName, DispatchTimeStamp, Price, ADDTIME(DispatchTimeStamp, '08:25:46') FROM dispatches_data;\n+-------------+--------------+---------------------+-------+----------------------------------------+\n| ProductName | CustomerName | DispatchTimeStamp | Price | ADDTIME(DispatchTimeStamp, '08:25:46') |\n+-------------+--------------+---------------------+-------+----------------------------------------+\n| Key-Board | Raja | 2019-05-04 15:02:45 | 7000 | 2019-05-04 23:28:31 |\n| Earphones | Roja | 2019-06-26 14:13:12 | 2000 | 2019-06-26 22:38:58 |\n| Mouse | Puja | 2019-12-07 07:50:37 | 3000 | 2019-12-07 16:16:23 |\n| Mobile | Vanaja | 2018-03-21 16:00:45 | 9000 | 2018-03-22 00:26:31 |\n| Headset | Jalaja | 2018-12-30 10:49:27 | 6000 | 2018-12-30 19:15:13 |\n+-------------+--------------+---------------------+-------+----------------------------------------+\n5 rows in set (0.21 sec)" }, { "code": null, "e": 9723, "s": 9616, "text": "Suppose we have created a table named SubscriberDetails with 5 records in it using the following queries –" }, { "code": null, "e": 10314, "s": 9723, "text": "mysql> CREATE TABLE SubscriberDetails (\n\tSubscriberName VARCHAR(255),\n\tPackageName VARCHAR(255),\n\tSubscriptionTimeStamp timestamp\n);\ninsert into SubscriberDetails values('Ram', 'Premium', TimeStamp('2020-10-21 20:53:49'));\ninsert into SubscriberDetails values('Rahman', 'Basic', TimeStamp('2020-11-26 10:13:19'));\ninsert into SubscriberDetails values('Robert', 'Moderate', TimeStamp('2021-03-07 05:43:20'));\ninsert into SubscriberDetails values('Radha', 'Basic', TimeStamp('2021-02-21 16:36:39'));\ninsert into SubscriberDetails values('Rajiya', 'Premium', TimeStamp('2021-01-30 12:45:45'));" }, { "code": null, "e": 10406, "s": 10314, "text": "Following query adds 
time interval to the SubscriptionTimeStamp values of all the records –" }, { "code": null, "e": 11482, "s": 10406, "text": "mysql> SELECT SubscriberName, PackageName, SubscriptionTimeStamp, ADDTIME(SubscriptionTimeStamp, '10:05:20') FROM SubscriberDetails;\n+----------------+-------------+-----------------------+--------------------------------------------+\n| SubscriberName | PackageName | SubscriptionTimeStamp | ADDTIME(SubscriptionTimeStamp, '10:05:20') |\n+----------------+-------------+-----------------------+--------------------------------------------+\n| Ram            | Premium     | 2020-10-21 20:53:49   | 2020-10-22 06:59:09                        |\n| Rahman         | Basic       | 2020-11-26 10:13:19   | 2020-11-26 20:18:39                        |\n| Robert         | Moderate    | 2021-03-07 05:43:20   | 2021-03-07 15:48:40                        |\n| Radha          | Basic       | 2021-02-21 16:36:39   | 2021-02-22 02:41:59                        |\n| Rajiya         | Premium     | 2021-01-30 12:45:45   | 2021-01-30 22:51:05                        |\n+----------------+-------------+-----------------------+--------------------------------------------+\n5 rows in set (0.00 sec)" } ]
Different ways to print exception messages in Java
Following are the different ways to handle exception messages in Java.

Using printStackTrace() method − It prints the name of the exception, its description, and the complete stack trace, including the line where the exception occurred.
catch(Exception e) {
   e.printStackTrace();
}

Using toString() method − It prints the name and description of the exception.
catch(Exception e) {
   System.out.println(e.toString());
}

Using getMessage() method − Mostly used. It prints the description of the exception.
catch(Exception e) {
   System.out.println(e.getMessage());
}

import java.io.Serializable;

public class Tester implements Serializable, Cloneable {
   public static void main(String args[]) {
      try {
         int a = 0;
         int b = 10;
         int result = b/a;
         System.out.println(result);
      } catch(Exception e) {
         System.out.println("toString(): " + e.toString());
         System.out.println("getMessage(): " + e.getMessage());
         System.out.println("StackTrace: ");
         e.printStackTrace();
      }
   }
}

toString(): java.lang.ArithmeticException: / by zero
getMessage(): / by zero
StackTrace:
java.lang.ArithmeticException: / by zero
at Tester.main(Tester.java:8)
[ { "code": null, "e": 1133, "s": 1062, "text": "Following are the different ways to handle exception messages in Java." }, { "code": null, "e": 1327, "s": 1133, "text": "Using printStackTrace() method − It print the name of the exception, description and complete stack trace including the line where exception occurred.catch(Exception e) {\ne.printStackTrace();\n}" }, { "code": null, "e": 1478, "s": 1327, "text": "Using printStackTrace() method − It print the name of the exception, description and complete stack trace including the line where exception occurred." }, { "code": null, "e": 1522, "s": 1478, "text": "catch(Exception e) {\ne.printStackTrace();\n}" }, { "code": null, "e": 1657, "s": 1522, "text": "Using toString() method − It prints the name and description of the exception.catch(Exception e) {\nSystem.out.println(e.toString());\n}" }, { "code": null, "e": 1736, "s": 1657, "text": "Using toString() method − It prints the name and description of the exception." }, { "code": null, "e": 1793, "s": 1736, "text": "catch(Exception e) {\nSystem.out.println(e.toString());\n}" }, { "code": null, "e": 1936, "s": 1793, "text": "Using getMessage() method − Mostly used. It prints the description of the exception.catch(Exception e) {\nSystem.out.println(e.getMessage());\n}" }, { "code": null, "e": 2021, "s": 1936, "text": "Using getMessage() method − Mostly used. It prints the description of the exception." 
}, { "code": null, "e": 2080, "s": 2021, "text": "catch(Exception e) {\nSystem.out.println(e.getMessage());\n}" }, { "code": null, "e": 2573, "s": 2080, "text": "import java.io.Serializable;\n\npublic class Tester implements Serializable, Cloneable {\n public static void main(String args[]) {\n\n try {\n int a = 0;\n int b = 10;\n int result = b/a;\n System.out.println(result);\n\n } catch(Exception e) {\n System.out.println(\"toString(): \" + e.toString());\n System.out.println(\"getMessage(): \" + e.getMessage());\n System.out.println(\"StackTrace: \");\n e.printStackTrace();\n }\n }\n}" }, { "code": null, "e": 2733, "s": 2573, "text": "toString(): java.lang.ArithmeticException: / by zero\ngetMessage(): / by zero\nStackTrace:\njava.lang.ArithmeticException: / by zero\nat Tester.main(Tester.java:8)" } ]
Two essential Pandas add-ons. These two must-have UIs will help you... | by Josh Taylor | Towards Data Science
The Python Data Analysis Library (Pandas) is the de facto analysis tool for Python. It still amazes me that such a powerful analysis library can be open-source and free to use. But it is not perfect... There are a couple of frustrations that I have with the library, especially when it comes to performing simple filtering and pivoting. There are certain situations where a user interface can really speed up analysis. Nothing beats ‘drag-and-drop’ for an intuitive way of exploring and filtering data, and this is not something that Pandas allows you to do. Thankfully there are two libraries which address these issues and work perfectly alongside Pandas. The pivot table in Pandas is very powerful but it does not lend itself to quick and easy data exploration. In fact things can get very complex very quickly: Thankfully there is a fantastic interactive pivot-table and plotting add-on, pivottablejs. It can be installed and run in 4 lines of code:

!pip install pivottablejs
from pivottablejs import pivot_ui
pivot_ui(df, outfile_path='pivottablejs.html')
HTML('pivottablejs.html')

This gives you an interactive HTML pivot chart. This can be displayed within a notebook or opened in a browser as an HTML file (this allows it to be easily shared with others): Fed up with looking at the first and last 5 rows of a Pandas dataframe? How often have you wished that you could quickly filter and see what is happening with your data? Pandas does provide useful filtering functionality with loc and iloc; however, in the same way that pivot tables can become quite complex, so can statements using these indexing functions. QGrid allows you to do this and much more.
Some of the key features are:
Filter and sort dataframes
Scroll through large dataframes without losing performance (>1 million rows)
Edit cells in a dataframe directly through the UI
Return a new dataframe with the filters/sorts/edits applied
Compatible with Jupyter Notebooks and JupyterLab

Installation is simple via pip or Conda:

pip install qgrid
jupyter nbextension enable --py --sys-prefix qgrid
import qgrid
# only required if you have not enabled the ipywidgets nbextension yet
jupyter nbextension enable --py --sys-prefix widgetsnbextension
# to show a df simply use the below:
qgrid.show_grid(df)

To get an idea of what is possible, see the below demo from the QGrid Github page: That’s all. Hopefully these two tools will help speed up your data analysis in Python. If you know of any other UI libraries for Pandas, please let people know in the comments below.
[ { "code": null, "e": 349, "s": 172, "text": "The Python Data Analysis Library (Pandas) is the de facto analysis tool for Python. It still amazes me that such a powerful analysis library can be open-source and free to use." }, { "code": null, "e": 374, "s": 349, "text": "But it is not perfect..." }, { "code": null, "e": 828, "s": 374, "text": "There are a couple of frustrations that I have with the library especially when it comes to performing simple filtering and pivoting. There are certain situations where a user interface can really speed-up analysis. Nothing beats ‘drag-and-drop’ for an intuitive way of exploring and filtering data and this is not something that Pandas allows you to do. Thankfully there are two libraries which address these issues and work perfectly alongside Pandas." }, { "code": null, "e": 985, "s": 828, "text": "The pivot table in Pandas is very powerful but it does not lend itself to quick and easy data exploration. In fact things can get very complex very quickly:" }, { "code": null, "e": 1124, "s": 985, "text": "Thankfully there is a fantastic interactive pivot-table and plotting add-on, pivottablejs. It can be installed and run in 4 lines of code:" }, { "code": null, "e": 1253, "s": 1124, "text": "!pip install pivottablejsfrom pivottablejs import pivot_uipivot_ui(df,outfile_path=’pivottablejs.html’)HTML(‘pivottablejs.html’)" }, { "code": null, "e": 1430, "s": 1253, "text": "This gives you an interactive HTML pivot chart. This can be displayed within a notebook or opened in a browser as an HTML file (this allows it to be easily shared with others):" }, { "code": null, "e": 1785, "s": 1430, "text": "Fed-up of looking at the first and last 5 rows of a Pandas dataframe? How often have you wished that you could quickly filter and see what is happening with your data. 
Pandas does provide useful filtering functionality with loc and iloc however in the same way that pivot tables can become quite complex, so can statements using these indexing functions." }, { "code": null, "e": 1858, "s": 1785, "text": "QGrid allows you to do this and much more. Some of the key features are:" }, { "code": null, "e": 1885, "s": 1858, "text": "Filter and sort dataframes" }, { "code": null, "e": 1962, "s": 1885, "text": "Scroll through large dataframes without loosing performance (>1million rows)" }, { "code": null, "e": 2012, "s": 1962, "text": "Edit cells in a dataframe directly through the UI" }, { "code": null, "e": 2072, "s": 2012, "text": "Return a new dataframe with the filters/sorts/edits applied" }, { "code": null, "e": 2121, "s": 2072, "text": "Compatible with Jupyter Notebooks and JupyterLab" }, { "code": null, "e": 2162, "s": 2121, "text": "Installation is simple via pip or Conda:" }, { "code": null, "e": 2429, "s": 2162, "text": "pip install qgridjupyter nbextension enable --py --sys-prefix qgridimport qgrid# only required if you have not enabled the ipywidgets nbextension yetjupyter nbextension enable --py --sys-prefix widgetsnbextension#to show a df simply use the below:qgrid.show_grid(df)" }, { "code": null, "e": 2512, "s": 2429, "text": "To get an idea of what is possible, see the below demo from the QGrid Github page:" }, { "code": null, "e": 2599, "s": 2512, "text": "That’s all. Hopefully these two tools will help speed up your data analysis in Python." } ]
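Both add-ons are interactive layers over operations that plain pandas can also express directly. As a rough, non-interactive sketch (the df and its columns here are hypothetical, not from the article), the pivot a user would drag together in pivottablejs and the filter qgrid applies through its UI look like this in code:

```python
import pandas as pd

# Hypothetical data standing in for the article's df
df = pd.DataFrame({
    "city": ["NY", "NY", "LA", "LA"],
    "year": [2019, 2020, 2019, 2020],
    "sales": [10, 12, 7, 9],
})

# What a drag-and-drop pivot UI builds, expressed directly:
pivot = df.pivot_table(index="city", columns="year", values="sales", aggfunc="sum")
print(pivot)

# What an interactive filter widget does, expressed with boolean indexing:
filtered = df[df["sales"] > 8]
print(filtered)
```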
3D Surface Plots using Plotly in Python - GeeksforGeeks
05 Sep, 2020 Plotly is a Python library that is used to design graphs, especially interactive graphs. It can plot various graphs and charts like histogram, barplot, boxplot, spreadplot, and many more. It is mainly used in data analysis as well as financial analysis. Plotly is an interactive visualization library. A surface plot is a plot of three-dimensional data with axes X, Y, and Z. Rather than showing individual data points, a surface plot shows a functional relationship between the dependent variable Y and the two independent variables X and Z. This plot is used to distinguish between dependent and independent variables. Syntax: plotly.graph_objects.Surface(arg=None, hoverinfo=None, x=None, y=None, z=None, **kwargs) Parameters: arg: dict of properties compatible with this constructor or an instance of plotly.graph_objects.Surface x: Sets the x coordinates. y: Sets the y coordinates. z: Sets the z coordinates. hoverinfo: Determines which trace information appears on hover. If none or skip is set, no information is displayed upon hovering. But, if none is set, click and hover events are still fired. Example: Python3

import plotly.graph_objects as go
import numpy as np

x = np.outer(np.linspace(-2, 2, 30), np.ones(30))
y = x.copy().T
z = np.cos(x ** 2 + y ** 2)

fig = go.Figure(data=[go.Surface(x=x, y=y, z=z)])

fig.show()

Output: In Plotly, the contours attribute is used to display and customize contour data for each axis. Example: Python3

import plotly.graph_objects as go
import numpy as np

x = np.outer(np.linspace(-2, 2, 30), np.ones(30))

# transpose
y = x.copy().T
z = np.cos(x ** 2 + y ** 2)

fig = go.Figure(data=[go.Surface(x=x, y=y, z=z)])

fig.update_traces(contours_z=dict(
    show=True, usecolormap=True,
    highlightcolor="limegreen",
    project_z=True))

fig.show()

Output:
[ { "code": null, "e": 24046, "s": 24018, "text": "\n05 Sep, 2020" }, { "code": null, "e": 24348, "s": 24046, "text": "Plotly is a Python library that is used to design graphs, especially interactive graphs. It can plot various graphs and charts like histogram, barplot, boxplot, spreadplot, and many more. It is mainly used in data analysis as well as financial analysis. plotly is an interactive visualization library." }, { "code": null, "e": 24671, "s": 24348, "text": "Surface plot is those plot which has three-dimensions data which is X, Y, and Z. Rather than showing individual data points, the surface plot has a functional relationship between dependent variable Y and have two independent variables X and Z. This plot is used to distinguish between dependent and independent variables." }, { "code": null, "e": 24768, "s": 24671, "text": "Syntax: plotly.graph_objects.Surface(arg=None, hoverinfo=None, x=None, y=None, z=None, **kwargs)" }, { "code": null, "e": 24780, "s": 24768, "text": "Parameters:" }, { "code": null, "e": 24885, "s": 24780, "text": "arg: dict of properties compatible with this constructor or an instance of plotly.graph_objects.Surface" }, { "code": null, "e": 24912, "s": 24885, "text": "x: Sets the x coordinates." }, { "code": null, "e": 24939, "s": 24912, "text": "y: Sets the y coordinates." }, { "code": null, "e": 24966, "s": 24939, "text": "z: Sets the z coordinates." }, { "code": null, "e": 25158, "s": 24966, "text": "hoverinfo: Determines which trace information appear on hover. If none or skip are set, no information is displayed upon hovering. But, if none is set, click and hover events are still fired." 
}, { "code": null, "e": 25167, "s": 25158, "text": "Example:" }, { "code": null, "e": 25175, "s": 25167, "text": "Python3" }, { "code": "import plotly.graph_objects as goimport numpy as np x = np.outer(np.linspace(-2, 2, 30), np.ones(30))y = x.copy().Tz = np.cos(x ** 2 + y ** 2) fig = go.Figure(data=[go.Surface(x=x, y=y, z=z)]) fig.show()", "e": 25382, "s": 25175, "text": null }, { "code": null, "e": 25390, "s": 25382, "text": "Output:" }, { "code": null, "e": 25481, "s": 25390, "text": "In plotly, contours attribute is used to Display and customize contour data for each axis." }, { "code": null, "e": 25490, "s": 25481, "text": "Example:" }, { "code": null, "e": 25498, "s": 25490, "text": "Python3" }, { "code": "import plotly.graph_objects as goimport numpy as np x = np.outer(np.linspace(-2, 2, 30), np.ones(30)) # transposey = x.copy().Tz = np.cos(x ** 2 + y ** 2) fig = go.Figure(data=[go.Surface(x=x, y=y, z=z)]) fig.update_traces(contours_z=dict( show=True, usecolormap=True, highlightcolor=\"limegreen\", project_z=True)) fig.show()", "e": 25837, "s": 25498, "text": null }, { "code": null, "e": 25845, "s": 25837, "text": "Output:" }, { "code": null, "e": 25859, "s": 25845, "text": "Python-Plotly" }, { "code": null, "e": 25866, "s": 25859, "text": "Python" }, { "code": null, "e": 25964, "s": 25866, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." 
}, { "code": null, "e": 25992, "s": 25964, "text": "Read JSON file using Python" }, { "code": null, "e": 26042, "s": 25992, "text": "Adding new column to existing DataFrame in Pandas" }, { "code": null, "e": 26064, "s": 26042, "text": "Python map() function" }, { "code": null, "e": 26108, "s": 26064, "text": "How to get column names in Pandas dataframe" }, { "code": null, "e": 26126, "s": 26108, "text": "Python Dictionary" }, { "code": null, "e": 26149, "s": 26126, "text": "Taking input in Python" }, { "code": null, "e": 26184, "s": 26149, "text": "Read a file line by line in Python" }, { "code": null, "e": 26206, "s": 26184, "text": "Enumerate() in Python" }, { "code": null, "e": 26238, "s": 26206, "text": "How to Install PIP on Windows ?" } ]
Find all triplets in a sorted array that forms Geometric Progression - GeeksforGeeks
08 Apr, 2021 Given a sorted array of distinct positive integers, print all triplets that form a Geometric Progression with an integral common ratio. A geometric progression is a sequence of numbers where each term after the first is found by multiplying the previous one by a fixed, non-zero number called the common ratio. For example, the sequence 2, 6, 18, 54, ... is a geometric progression with common ratio 3. Examples: Input: arr = [1, 2, 6, 10, 18, 54] Output: 2 6 18 6 18 54 Input: arr = [2, 8, 10, 15, 16, 30, 32, 64] Output: 2 8 32 8 16 32 16 32 64 Input: arr = [1, 2, 6, 18, 36, 54] Output: 2 6 18 1 6 36 6 18 54 The idea is to start from the second element and fix every element as the middle element, searching for the other two elements of a triplet (one smaller and one greater). For an element arr[j] to be the middle of a geometric progression, there must exist elements arr[i] and arr[k] such that arr[j] / arr[i] = r and arr[k] / arr[j] = r, where r is a positive integer and 0 <= i < j and j < k <= n - 1. Below is the implementation of the above idea –
k++ , i--; } // if arr[j] is multiple of arr[i] and arr[k] is // multiple of arr[j], then arr[j] / arr[i] != // arr[k] / arr[j]. We compare their values to // move to next k or previous i. if(arr[j] % arr[i] == 0 && arr[k] % arr[j] == 0) { if(arr[j] / arr[i] < arr[k] / arr[j]) i--; else k++; } // else if arr[j] is multiple of arr[i], then // try next k. Else, try previous i. else if (arr[j] % arr[i] == 0) k++; else i--; } } } // Driver code int main() { // int arr[] = {1, 2, 6, 10, 18, 54}; // int arr[] = {2, 8, 10, 15, 16, 30, 32, 64}; // int arr[] = {1, 2, 6, 18, 36, 54}; int arr[] = {1, 2, 4, 16}; // int arr[] = {1, 2, 3, 6, 18, 22}; int n = sizeof(arr) / sizeof(arr[0]); findGeometricTriplets(arr, n); return 0; } // Java program to find if there exist three elements in // Geometric Progression or not import java.util.*; class GFG { // The function prints three elements in GP if exists // Assumption: arr[0..n-1] is sorted. static void findGeometricTriplets(int arr[], int n) { // One by fix every element as middle element for (int j = 1; j < n - 1; j++) { // Initialize i and k for the current j int i = j - 1, k = j + 1; // Find all i and k such that (i, j, k) // forms a triplet of GP while (i >= 0 && k <= n - 1) { // if arr[j]/arr[i] = r and arr[k]/arr[j] = r // and r is an integer (i, j, k) forms Geometric // Progression while (i >= 0 && arr[j] % arr[i] == 0 && arr[k] % arr[j] == 0 && arr[j] / arr[i] == arr[k] / arr[j]) { // print the triplet System.out.println(arr[i] +" " + arr[j] + " " + arr[k]); // Since the array is sorted and elements // are distinct. k++ ; i--; } // if arr[j] is multiple of arr[i] and arr[k] is // multiple of arr[j], then arr[j] / arr[i] != // arr[k] / arr[j]. We compare their values to // move to next k or previous i. if(i >= 0 && arr[j] % arr[i] == 0 && arr[k] % arr[j] == 0) { if(i >= 0 && arr[j] / arr[i] < arr[k] / arr[j]) i--; else k++; } // else if arr[j] is multiple of arr[i], then // try next k. Else, try previous i. 
                else if (i >= 0 && arr[j] % arr[i] == 0)
                    k++;
                else
                    i--;
            }
        }
    }

    // Driver code
    public static void main(String[] args)
    {
        // int arr[] = {1, 2, 6, 10, 18, 54};
        // int arr[] = {2, 8, 10, 15, 16, 30, 32, 64};
        // int arr[] = {1, 2, 6, 18, 36, 54};
        int arr[] = { 1, 2, 4, 16 };
        // int arr[] = {1, 2, 3, 6, 18, 22};
        int n = arr.length;

        findGeometricTriplets(arr, n);
    }
}

// This code is contributed by Rajput-Ji

# Python3 program to find if
# there exist three elements in
# Geometric Progression or not

# The function prints three elements
# in GP if exists.
# Assumption: arr[0..n-1] is sorted.
def findGeometricTriplets(arr, n):

    # One by one fix every element
    # as middle element
    for j in range(1, n - 1):

        # Initialize i and k for
        # the current j
        i = j - 1
        k = j + 1

        # Find all i and k such that
        # (i, j, k) forms a triplet of GP
        while (i >= 0 and k <= n - 1):

            # if arr[j]/arr[i] = r and
            # arr[k]/arr[j] = r and r
            # is an integer, (i, j, k) forms
            # a Geometric Progression
            while (i >= 0 and arr[j] % arr[i] == 0 and
                   arr[k] % arr[j] == 0 and
                   arr[j] // arr[i] == arr[k] // arr[j]):

                # print the triplet
                print(arr[i], " ", arr[j], " ", arr[k])

                # Since the array is sorted and
                # elements are distinct.
                k += 1
                i -= 1

            # if arr[j] is multiple of arr[i]
            # and arr[k] is multiple of arr[j],
            # then arr[j] / arr[i] != arr[k] / arr[j].
            # We compare their values to
            # move to next k or previous i.
            if (i >= 0 and arr[j] % arr[i] == 0 and
                    arr[k] % arr[j] == 0):
                if (arr[j] // arr[i] < arr[k] // arr[j]):
                    i -= 1
                else:
                    k += 1

            # else if arr[j] is multiple of
            # arr[i], then try next k. Else,
            # try previous i.
            elif (i >= 0 and arr[j] % arr[i] == 0):
                k += 1
            else:
                i -= 1

# Driver code
if __name__ == "__main__":

    arr = [1, 2, 4, 16]
    n = len(arr)

    findGeometricTriplets(arr, n)

# This code is contributed
# by ChitraNayal

// C# program to find if there exist three elements
// in Geometric Progression or not
using System;

class GFG {

    // The function prints three elements in GP if exists
    // Assumption: arr[0..n-1] is sorted.
    static void findGeometricTriplets(int[] arr, int n)
    {
        // One by one fix every element as middle element
        for (int j = 1; j < n - 1; j++) {
            // Initialize i and k for the current j
            int i = j - 1, k = j + 1;

            // Find all i and k such that (i, j, k)
            // forms a triplet of GP
            while (i >= 0 && k <= n - 1) {
                // if arr[j]/arr[i] = r and arr[k]/arr[j] = r
                // and r is an integer, (i, j, k) forms a
                // Geometric Progression
                while (i >= 0 && arr[j] % arr[i] == 0
                       && arr[k] % arr[j] == 0
                       && arr[j] / arr[i] == arr[k] / arr[j]) {
                    // print the triplet
                    Console.WriteLine(arr[i] + " " + arr[j]
                                      + " " + arr[k]);

                    // Since the array is sorted and elements
                    // are distinct.
                    k++;
                    i--;
                }

                // if arr[j] is multiple of arr[i] and arr[k] is
                // multiple of arr[j], then arr[j] / arr[i] !=
                // arr[k] / arr[j]. We compare their values to
                // move to next k or previous i.
                if (i >= 0 && arr[j] % arr[i] == 0
                    && arr[k] % arr[j] == 0) {
                    if (arr[j] / arr[i] < arr[k] / arr[j])
                        i--;
                    else
                        k++;
                }

                // else if arr[j] is multiple of arr[i], then
                // try next k. Else, try previous i.
                else if (i >= 0 && arr[j] % arr[i] == 0)
                    k++;
                else
                    i--;
            }
        }
    }

    // Driver code
    static public void Main()
    {
        // int[] arr = {1, 2, 6, 10, 18, 54};
        // int[] arr = {2, 8, 10, 15, 16, 30, 32, 64};
        // int[] arr = {1, 2, 6, 18, 36, 54};
        int[] arr = { 1, 2, 4, 16 };
        // int[] arr = {1, 2, 3, 6, 18, 22};
        int n = arr.Length;

        findGeometricTriplets(arr, n);
    }
}

// This code is contributed by ajit.

<script>
// Javascript program to find if there exist three elements in
// Geometric Progression or not

// The function prints three elements in GP if exists
// Assumption: arr[0..n-1] is sorted.
function findGeometricTriplets(arr, n)
{
    // One by one fix every element as middle element
    for (let j = 1; j < n - 1; j++) {
        // Initialize i and k for the current j
        let i = j - 1, k = j + 1;

        // Find all i and k such that (i, j, k)
        // forms a triplet of GP
        while (i >= 0 && k <= n - 1) {
            // if arr[j]/arr[i] = r and arr[k]/arr[j] = r
            // and r is an integer, (i, j, k) forms a
            // Geometric Progression
            while (i >= 0 && arr[j] % arr[i] == 0
                   && arr[k] % arr[j] == 0
                   && arr[j] / arr[i] == arr[k] / arr[j]) {
                // print the triplet
                document.write(arr[i] + " " + arr[j]
                               + " " + arr[k] + "<br>");

                // Since the array is sorted and elements
                // are distinct.
                k++;
                i--;
            }

            // if arr[j] is multiple of arr[i] and arr[k] is
            // multiple of arr[j], then arr[j] / arr[i] !=
            // arr[k] / arr[j]. We compare their values to
            // move to next k or previous i.
            if (i >= 0 && arr[j] % arr[i] == 0
                && arr[k] % arr[j] == 0) {
                if (arr[j] / arr[i] < arr[k] / arr[j])
                    i--;
                else
                    k++;
            }

            // else if arr[j] is multiple of arr[i], then
            // try next k. Else, try previous i.
            else if (i >= 0 && arr[j] % arr[i] == 0)
                k++;
            else
                i--;
        }
    }
}

// Driver code
// let arr = [1, 2, 6, 10, 18, 54];
// let arr = [2, 8, 10, 15, 16, 30, 32, 64];
// let arr = [1, 2, 6, 18, 36, 54];
let arr = [1, 2, 4, 16];
// let arr = [1, 2, 3, 6, 18, 22];
let n = arr.length;

findGeometricTriplets(arr, n);

// This code is contributed by avanitrachhadiya2155
</script>

Output:

1 2 4
1 4 16

The time complexity of the above solution is O(n^2), since for every j we find the matching i and k in linear time.

This article is contributed by Aditya Goel.
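The pointer-based solution can be cross-checked against a straightforward O(n^3) brute force that simply tries every ascending index triple. The helper below is an illustrative sketch (its name and structure are not from the original article); it relies only on the same divisibility test used above.

```python
from itertools import combinations

def gp_triplets_bruteforce(arr):
    """Return all value triplets (a, b, c), taken in array order from a
    sorted array of distinct positive integers, that form a GP with an
    integral common ratio."""
    result = []
    for a, b, c in combinations(arr, 3):
        # b/a and c/b must be equal integers (the common ratio r)
        if b % a == 0 and c % b == 0 and b // a == c // b:
            result.append((a, b, c))
    return result

print(gp_triplets_bruteforce([1, 2, 4, 16]))         # [(1, 2, 4), (1, 4, 16)]
print(gp_triplets_bruteforce([1, 2, 6, 10, 18, 54])) # [(2, 6, 18), (6, 18, 54)]
```

On the driver array {1, 2, 4, 16} this reproduces the output shown above; it is only useful as a test oracle, since it does three nested passes over the data.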
[ { "code": null, "e": 24507, "s": 24476, "text": " \n08 Apr, 2021\n" }, { "code": null, "e": 24915, "s": 24507, "text": "Given a sorted array of distinct positive integers, print all triplets that forms Geometric Progression with integral common ratio.A geometric progression is a sequence of numbers where each term after the first is found by multiplying the previous one by a fixed, non-zero number called the common ratio. For example, the sequence 2, 6, 18, 54,... is a geometric progression with common ratio 3.Examples: " }, { "code": null, "e": 25123, "s": 24915, "text": "Input: \narr = [1, 2, 6, 10, 18, 54]\nOutput: \n2 6 18\n6 18 54\n\nInput: \narr = [2, 8, 10, 15, 16, 30, 32, 64]\nOutput: \n2 8 32\n8 16 32\n16 32 64\n\nInput: \narr = [ 1, 2, 6, 18, 36, 54]\nOutput: \n2 6 18\n1 6 36\n6 18 54" }, { "code": null, "e": 25411, "s": 25125, "text": "The idea is to start from the second element and fix every element as middle element and search for the other two elements in a triplet (one smaller and one greater). 
For an element arr[j] to be middle of geometric progression, there must exist elements arr[i] and arr[k] such that – " }, { "code": null, "e": 25520, "s": 25411, "text": "arr[j] / arr[i] = r and arr[k] / arr[j] = r\nwhere r is an positive integer and 0 <= i < j and j < k <= n - 1" }, { "code": null, "e": 25565, "s": 25520, "text": "Below is the implementation of above idea – " }, { "code": null, "e": 25569, "s": 25565, "text": "C++" }, { "code": null, "e": 25574, "s": 25569, "text": "Java" }, { "code": null, "e": 25583, "s": 25574, "text": "Python 3" }, { "code": null, "e": 25586, "s": 25583, "text": "C#" }, { "code": null, "e": 25597, "s": 25586, "text": "Javascript" }, { "code": "\n\n\n\n\n\n\n// C++ program to find if there exist three elements in\n// Geometric Progression or not\n#include <iostream>\nusing namespace std;\n \n// The function prints three elements in GP if exists\n// Assumption: arr[0..n-1] is sorted.\nvoid findGeometricTriplets(int arr[], int n)\n{\n // One by fix every element as middle element\n for (int j = 1; j < n - 1; j++)\n {\n // Initialize i and k for the current j\n int i = j - 1, k = j + 1;\n \n // Find all i and k such that (i, j, k)\n // forms a triplet of GP\n while (i >= 0 && k <= n - 1)\n {\n // if arr[j]/arr[i] = r and arr[k]/arr[j] = r\n // and r is an integer (i, j, k) forms Geometric\n // Progression\n while (arr[j] % arr[i] == 0 &&\n arr[k] % arr[j] == 0 &&\n arr[j] / arr[i] == arr[k] / arr[j])\n {\n // print the triplet\n cout << arr[i] << \" \" << arr[j]\n << \" \" << arr[k] << endl;\n \n // Since the array is sorted and elements\n // are distinct.\n k++ , i--;\n }\n \n // if arr[j] is multiple of arr[i] and arr[k] is\n // multiple of arr[j], then arr[j] / arr[i] !=\n // arr[k] / arr[j]. 
We compare their values to\n // move to next k or previous i.\n if(arr[j] % arr[i] == 0 &&\n arr[k] % arr[j] == 0)\n {\n if(arr[j] / arr[i] < arr[k] / arr[j])\n i--;\n else k++;\n }\n \n // else if arr[j] is multiple of arr[i], then\n // try next k. Else, try previous i.\n else if (arr[j] % arr[i] == 0)\n k++;\n else i--;\n }\n }\n}\n \n// Driver code\nint main()\n{\n // int arr[] = {1, 2, 6, 10, 18, 54};\n // int arr[] = {2, 8, 10, 15, 16, 30, 32, 64};\n // int arr[] = {1, 2, 6, 18, 36, 54};\n int arr[] = {1, 2, 4, 16};\n // int arr[] = {1, 2, 3, 6, 18, 22};\n int n = sizeof(arr) / sizeof(arr[0]);\n \n findGeometricTriplets(arr, n);\n \n return 0;\n}\n\n\n\n\n\n", "e": 27750, "s": 25607, "text": null }, { "code": "\n\n\n\n\n\n\n// Java program to find if there exist three elements in\n// Geometric Progression or not\nimport java.util.*;\n \nclass GFG \n{\n \n// The function prints three elements in GP if exists\n// Assumption: arr[0..n-1] is sorted.\nstatic void findGeometricTriplets(int arr[], int n)\n{\n // One by fix every element as middle element\n for (int j = 1; j < n - 1; j++)\n {\n // Initialize i and k for the current j\n int i = j - 1, k = j + 1;\n \n // Find all i and k such that (i, j, k)\n // forms a triplet of GP\n while (i >= 0 && k <= n - 1)\n {\n // if arr[j]/arr[i] = r and arr[k]/arr[j] = r\n // and r is an integer (i, j, k) forms Geometric\n // Progression\n while (i >= 0 && arr[j] % arr[i] == 0 &&\n arr[k] % arr[j] == 0 &&\n arr[j] / arr[i] == arr[k] / arr[j])\n {\n // print the triplet\n System.out.println(arr[i] +\" \" + arr[j]\n + \" \" + arr[k]);\n \n // Since the array is sorted and elements\n // are distinct.\n k++ ; i--;\n }\n \n // if arr[j] is multiple of arr[i] and arr[k] is\n // multiple of arr[j], then arr[j] / arr[i] !=\n // arr[k] / arr[j]. 
We compare their values to\n // move to next k or previous i.\n if(i >= 0 && arr[j] % arr[i] == 0 &&\n arr[k] % arr[j] == 0)\n {\n if(i >= 0 && arr[j] / arr[i] < arr[k] / arr[j])\n i--;\n else k++;\n }\n \n // else if arr[j] is multiple of arr[i], then\n // try next k. Else, try previous i.\n else if (i >= 0 && arr[j] % arr[i] == 0)\n k++;\n else i--;\n }\n }\n}\n \n// Driver code\npublic static void main(String[] args) \n{\n // int arr[] = {1, 2, 6, 10, 18, 54};\n // int arr[] = {2, 8, 10, 15, 16, 30, 32, 64};\n // int arr[] = {1, 2, 6, 18, 36, 54};\n int arr[] = {1, 2, 4, 16};\n // int arr[] = {1, 2, 3, 6, 18, 22};\n int n = arr.length;\n \n findGeometricTriplets(arr, n);\n}\n}\n \n// This code is contributed by Rajput-Ji\n\n\n\n\n\n", "e": 29983, "s": 27760, "text": null }, { "code": "\n\n\n\n\n\n\n# Python 3 program to find if \n# there exist three elements in\n# Geometric Progression or not\n \n# The function prints three elements \n# in GP if exists.\n# Assumption: arr[0..n-1] is sorted.\ndef findGeometricTriplets(arr, n):\n \n # One by fix every element \n # as middle element\n for j in range(1, n - 1):\n \n # Initialize i and k for \n # the current j\n i = j - 1\n k = j + 1\n \n # Find all i and k such that \n # (i, j, k) forms a triplet of GP\n while (i >= 0 and k <= n - 1):\n \n # if arr[j]/arr[i] = r and \n # arr[k]/arr[j] = r and r \n # is an integer (i, j, k) forms \n # Geometric Progression\n while (arr[j] % arr[i] == 0 and\n arr[k] % arr[j] == 0 and\n arr[j] // arr[i] == arr[k] // arr[j]):\n \n # print the triplet\n print( arr[i] , \" \" , arr[j], \n \" \" , arr[k]) \n \n # Since the array is sorted and \n # elements are distinct.\n k += 1\n i -= 1\n \n # if arr[j] is multiple of arr[i]\n # and arr[k] is multiple of arr[j], \n # then arr[j] / arr[i] != arr[k] / arr[j].\n # We compare their values to\n # move to next k or previous i.\n if(arr[j] % arr[i] == 0 and\n arr[k] % arr[j] == 0):\n \n if(arr[j] // arr[i] < arr[k] // arr[j]):\n i -= 1\n else:\n 
k += 1\n \n # else if arr[j] is multiple of \n # arr[i], then try next k. Else, \n # try previous i.\n elif (arr[j] % arr[i] == 0):\n k += 1\n else:\n i -= 1\n \n# Driver code\nif __name__ ==\"__main__\":\n \n arr = [1, 2, 4, 16]\n n = len(arr)\n \n findGeometricTriplets(arr, n)\n \n# This code is contributed \n# by ChitraNayal\n\n\n\n\n\n", "e": 32009, "s": 29993, "text": null }, { "code": "\n\n\n\n\n\n\n// C# program to find if there exist three elements \n// in Geometric Progression or not\nusing System;\n \nclass GFG\n{\n \n// The function prints three elements in GP if exists\n// Assumption: arr[0..n-1] is sorted.\nstatic void findGeometricTriplets(int []arr, int n)\n{\n \n // One by fix every element as middle element\n for (int j = 1; j < n - 1; j++)\n {\n // Initialize i and k for the current j\n int i = j - 1, k = j + 1;\n \n // Find all i and k such that (i, j, k)\n // forms a triplet of GP\n while (i >= 0 && k <= n - 1)\n {\n // if arr[j]/arr[i] = r and arr[k]/arr[j] = r\n // and r is an integer (i, j, k) forms Geometric\n // Progression\n while (i >= 0 && arr[j] % arr[i] == 0 &&\n arr[k] % arr[j] == 0 &&\n arr[j] / arr[i] == arr[k] / arr[j])\n {\n // print the triplet\n Console.WriteLine(arr[i] +\" \" + \n arr[j] + \" \" + arr[k]);\n \n // Since the array is sorted and elements\n // are distinct.\n k++ ; i--;\n }\n \n // if arr[j] is multiple of arr[i] and arr[k] is\n // multiple of arr[j], then arr[j] / arr[i] !=\n // arr[k] / arr[j]. We compare their values to\n // move to next k or previous i.\n if(i >= 0 && arr[j] % arr[i] == 0 &&\n arr[k] % arr[j] == 0)\n {\n if(i >= 0 && arr[j] / arr[i] < \n arr[k] / arr[j])\n i--;\n else k++;\n }\n \n // else if arr[j] is multiple of arr[i], then\n // try next k. 
Else, try previous i.\n else if (i >= 0 && arr[j] % arr[i] == 0)\n k++;\n else i--;\n }\n }\n}\n \n// Driver code\nstatic public void Main ()\n{\n \n // int arr[] = {1, 2, 6, 10, 18, 54};\n // int arr[] = {2, 8, 10, 15, 16, 30, 32, 64};\n // int arr[] = {1, 2, 6, 18, 36, 54};\n int []arr = {1, 2, 4, 16};\n \n // int arr[] = {1, 2, 3, 6, 18, 22};\n int n = arr.Length;\n \n findGeometricTriplets(arr, n);\n}\n}\n \n// This code is contributed by ajit.\n\n\n\n\n\n", "e": 34293, "s": 32019, "text": null }, { "code": "\n\n\n\n\n\n\n<script>\n// Javascript program to find if there exist three elements in\n// Geometric Progression or not\n \n // The function prints three elements in GP if exists\n // Assumption: arr[0..n-1] is sorted.\n function findGeometricTriplets(arr,n)\n {\n \n // One by fix every element as middle element\n for (let j = 1; j < n - 1; j++)\n {\n \n // Initialize i and k for the current j\n let i = j - 1, k = j + 1;\n \n // Find all i and k such that (i, j, k)\n // forms a triplet of GP\n while (i >= 0 && k <= n - 1)\n {\n \n // if arr[j]/arr[i] = r and arr[k]/arr[j] = r\n // and r is an integer (i, j, k) forms Geometric\n // Progression\n while (i >= 0 && arr[j] % arr[i] == 0 &&\n arr[k] % arr[j] == 0 &&\n arr[j] / arr[i] == arr[k] / arr[j])\n {\n \n // print the triplet\n document.write(arr[i] +\" \" + arr[j]\n + \" \" + arr[k]+\"<br>\");\n \n // Since the array is sorted and elements\n // are distinct.\n k++ ; i--;\n }\n \n // if arr[j] is multiple of arr[i] and arr[k] is\n // multiple of arr[j], then arr[j] / arr[i] !=\n // arr[k] / arr[j]. We compare their values to\n // move to next k or previous i.\n if(i >= 0 && arr[j] % arr[i] == 0 &&\n arr[k] % arr[j] == 0)\n {\n if(i >= 0 && arr[j] / arr[i] < arr[k] / arr[j])\n i--;\n else k++;\n }\n \n // else if arr[j] is multiple of arr[i], then\n // try next k. 
Else, try previous i.\n else if (i >= 0 && arr[j] % arr[i] == 0)\n k++;\n else i--;\n }\n }\n }\n \n // Driver code\n // int arr[] = {1, 2, 6, 10, 18, 54};\n // int arr[] = {2, 8, 10, 15, 16, 30, 32, 64};\n // int arr[] = {1, 2, 6, 18, 36, 54};\n \n let arr = [1, 2, 4, 16];\n \n // int arr[] = {1, 2, 3, 6, 18, 22};\n let n = arr.length;\n findGeometricTriplets(arr, n);\n \n // This code is contributed by avanitrachhadiya2155\n</script>\n\n\n\n\n\n", "e": 36750, "s": 34303, "text": null }, { "code": null, "e": 36760, "s": 36750, "text": "Output: " }, { "code": null, "e": 36773, "s": 36760, "text": "1 2 4\n1 4 16" }, { "code": null, "e": 37294, "s": 36773, "text": "Time complexity of above solution is O(n2) as for every j, we are finding i and k in linear time.This article is contributed by Aditya Goel. If you like GeeksforGeeks and would like to contribute, you can also write an article using contribute.geeksforgeeks.org or mail your article to contribute@geeksforgeeks.org. See your article appearing on the GeeksforGeeks main page and help other Geeks.Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above. " }, { "code": null, "e": 37300, "s": 37294, "text": "ukasp" }, { "code": null, "e": 37310, "s": 37300, "text": "Rajput-Ji" }, { "code": null, "e": 37316, "s": 37310, "text": "jit_t" }, { "code": null, "e": 37337, "s": 37316, "text": "avanitrachhadiya2155" }, { "code": null, "e": 37346, "s": 37337, "text": "\nArrays\n" }, { "code": null, "e": 37551, "s": 37346, "text": "Writing code in comment? 
\n Please use ide.geeksforgeeks.org, \n generate link and share the link here.\n " }, { "code": null, "e": 37572, "s": 37551, "text": "Next Greater Element" }, { "code": null, "e": 37597, "s": 37572, "text": "Window Sliding Technique" }, { "code": null, "e": 37624, "s": 37597, "text": "Count pairs with given sum" }, { "code": null, "e": 37673, "s": 37624, "text": "Program to find sum of elements in a given array" }, { "code": null, "e": 37711, "s": 37673, "text": "Reversal algorithm for array rotation" }, { "code": null, "e": 37770, "s": 37711, "text": "Find subarray with given sum | Set 1 (Nonnegative Numbers)" }, { "code": null, "e": 37795, "s": 37770, "text": "Building Heap from Array" }, { "code": null, "e": 37831, "s": 37795, "text": "Remove duplicates from sorted array" }, { "code": null, "e": 37891, "s": 37831, "text": "Sliding Window Maximum (Maximum of all subarrays of size k)" } ]
Pascal's Triangle - GeeksforGeeks
21 Oct, 2021

Pascal’s triangle is a triangular array of the binomial coefficients. Write a function that takes an integer value n as input and prints the first n lines of Pascal’s triangle. Following are the first 6 rows of Pascal’s Triangle.

1
1 1
1 2 1
1 3 3 1
1 4 6 4 1
1 5 10 10 5 1

Method 1 ( O(n^3) time complexity )

The number of entries in every line is equal to the line number. For example, the first line has “1”, the second line has “1 1”, the third line has “1 2 1”, and so on. Every entry in a line is the value of a Binomial Coefficient. The value of the ith entry in line number line is C(line, i). The value can be calculated using the following formula.

C(line, i) = line! / ( (line-i)! * i! )

A simple method is to run two loops and calculate the value of the Binomial Coefficient in the inner loop.

// C++ code for Pascal's Triangle
#include <iostream>
using namespace std;

// See https://www.geeksforgeeks.org/space-and-time-efficient-binomial-coefficient/
// for details of this function
int binomialCoeff(int n, int k);

// Function to print first
// n lines of Pascal's
// Triangle
void printPascal(int n)
{
    // Iterate through every line and
    // print entries in it
    for (int line = 0; line < n; line++) {
        // Every line has number of
        // integers equal to line
        // number
        for (int i = 0; i <= line; i++)
            cout << " " << binomialCoeff(line, i);
        cout << "\n";
    }
}

// See https://www.geeksforgeeks.org/space-and-time-efficient-binomial-coefficient/
// for details of this function
int binomialCoeff(int n, int k)
{
    int res = 1;
    if (k > n - k)
        k = n - k;
    for (int i = 0; i < k; ++i) {
        res *= (n - i);
        res /= (i + 1);
    }
    return res;
}

// Driver program
int main()
{
    int n = 7;
    printPascal(n);
    return 0;
}

// This code is contributed by shivanisinghss2110

// C code for Pascal's Triangle
#include <stdio.h>

// See https://www.geeksforgeeks.org/space-and-time-efficient-binomial-coefficient/
// for details of this function
int binomialCoeff(int n, int k);

// Function to print first
// n lines of Pascal's
// Triangle
void printPascal(int n)
{
    // Iterate through every line and
    // print entries in it
    for (int line = 0; line < n; line++) {
        // Every line has number of
        // integers equal to line
        // number
        for (int i = 0; i <= line; i++)
            printf("%d ", binomialCoeff(line, i));
        printf("\n");
    }
}

// See https://www.geeksforgeeks.org/space-and-time-efficient-binomial-coefficient/
// for details of this function
int binomialCoeff(int n, int k)
{
    int res = 1;
    if (k > n - k)
        k = n - k;
    for (int i = 0; i < k; ++i) {
        res *= (n - i);
        res /= (i + 1);
    }
    return res;
}

// Driver program
int main()
{
    int n = 7;
    printPascal(n);
    return 0;
}

// Java code for Pascal's Triangle
import java.io.*;

class GFG {
    // Function to print first
    // n lines of Pascal's Triangle
    static void printPascal(int n)
    {
        // Iterate through every line
        // and print entries in it
        for (int line = 0; line < n; line++) {
            // Every line has number of
            // integers equal to line number
            for (int i = 0; i <= line; i++)
                System.out.print(binomialCoeff(line, i) + " ");
            System.out.println();
        }
    }

    // Link for details of this function
    // https://www.geeksforgeeks.org/space-and-time-efficient-binomial-coefficient/
    static int binomialCoeff(int n, int k)
    {
        int res = 1;
        if (k > n - k)
            k = n - k;
        for (int i = 0; i < k; ++i) {
            res *= (n - i);
            res /= (i + 1);
        }
        return res;
    }

    // Driver code
    public static void main(String args[])
    {
        int n = 7;
        printPascal(n);
    }
}

/* This code is contributed by Nikita Tiwari. */

# Python3 code for Pascal's Triangle
# A simple O(n^3) program for
# Pascal's Triangle

# Function to print
# first n lines of
# Pascal's Triangle
def printPascal(n):

    # Iterate through every line
    # and print entries in it
    for line in range(0, n):

        # Every line has number of
        # integers equal to line
        # number
        for i in range(0, line + 1):
            print(binomialCoeff(line, i), " ", end="")
        print()

# See https://www.geeksforgeeks.org/space-and-time-efficient-binomial-coefficient/
# for details of this function
def binomialCoeff(n, k):
    res = 1
    if (k > n - k):
        k = n - k
    for i in range(0, k):
        res = res * (n - i)
        res = res // (i + 1)
    return res

# Driver program
n = 7
printPascal(n)

# This code is contributed by Nikita Tiwari.

// C# code for Pascal's Triangle
using System;

class GFG {
    // Function to print first
    // n lines of Pascal's Triangle
    static void printPascal(int n)
    {
        // Iterate through every line
        // and print entries in it
        for (int line = 0; line < n; line++) {
            // Every line has number of
            // integers equal to line number
            for (int i = 0; i <= line; i++)
                Console.Write(binomialCoeff(line, i) + " ");
            Console.WriteLine();
        }
    }

    // Link for details of this function
    // https://www.geeksforgeeks.org/space-and-time-efficient-binomial-coefficient/
    static int binomialCoeff(int n, int k)
    {
        int res = 1;
        if (k > n - k)
            k = n - k;
        for (int i = 0; i < k; ++i) {
            res *= (n - i);
            res /= (i + 1);
        }
        return res;
    }

    // Driver code
    public static void Main()
    {
        int n = 7;
        printPascal(n);
    }
}

/* This code is contributed by vt_m. */

<?php
// PHP implementation for
// Pascal's Triangle

// for details of this function
function binomialCoeff($n, $k)
{
    $res = 1;
    if ($k > $n - $k)
        $k = $n - $k;
    for ($i = 0; $i < $k; ++$i) {
        $res *= ($n - $i);
        $res /= ($i + 1);
    }
    return $res;
}

// Function to print first
// n lines of Pascal's
// Triangle
function printPascal($n)
{
    // Iterate through every line and
    // print entries in it
    for ($line = 0; $line < $n; $line++) {
        // Every line has number of
        // integers equal to line
        // number
        for ($i = 0; $i <= $line; $i++)
            echo binomialCoeff($line, $i) . " ";
        echo "\n";
    }
}

// Driver Code
$n = 7;
printPascal($n);

// This code is contributed by Mithun Kumar
?>

<script>
// Javascript code for Pascal's Triangle

// Function to print first
// n lines of Pascal's Triangle
function printPascal(n)
{
    // Iterate through every line
    // and print entries in it
    for (let line = 0; line < n; line++) {
        // Every line has number of
        // integers equal to line number
        for (let i = 0; i <= line; i++)
            document.write(binomialCoeff(line, i) + " ");
        document.write("<br />");
    }
}

// Link for details of this function
// https://www.geeksforgeeks.org/space-and-time-efficient-binomial-coefficient/
function binomialCoeff(n, k)
{
    let res = 1;
    if (k > n - k)
        k = n - k;
    for (let i = 0; i < k; ++i) {
        res *= (n - i);
        res /= (i + 1);
    }
    return res;
}

// Driver Code
let n = 7;
printPascal(n);
</script>

Output:

1
1 1
1 2 1
1 3 3 1
1 4 6 4 1
1 5 10 10 5 1
1 6 15 20 15 6 1

Auxiliary Space: O(1)

The time complexity of this method is O(n^3). Following are optimized methods.

Method 2 ( O(n^2) time and O(n^2) extra space )

If we take a closer look at the triangle, we observe that every entry is the sum of the two values above it. So we can create a 2D array that stores previously generated values. To generate a value in a line, we can use the previously stored values from the array.
// C++ program for Pascal's Triangle
// A O(n^2) time and O(n^2) extra space
// method for Pascal's Triangle
#include <bits/stdc++.h>
using namespace std;

void printPascal(int n)
{
    // An auxiliary array to store
    // generated pascal triangle values
    int arr[n][n];

    // Iterate through every line and
    // print integer(s) in it
    for (int line = 0; line < n; line++) {
        // Every line has number of integers
        // equal to line number
        for (int i = 0; i <= line; i++) {
            // First and last values in every row are 1
            if (line == i || i == 0)
                arr[line][i] = 1;

            // Other values are sum of values just
            // above and left of above
            else
                arr[line][i] = arr[line - 1][i - 1]
                               + arr[line - 1][i];
            cout << arr[line][i] << " ";
        }
        cout << "\n";
    }
}

// Driver code
int main()
{
    int n = 5;
    printPascal(n);
    return 0;
}

// This code is contributed by Code_Mech.

// C program for Pascal's Triangle
// A O(n^2) time and O(n^2) extra space
// method for Pascal's Triangle
#include <stdio.h>

void printPascal(int n)
{
    // An auxiliary array to store
    // generated pascal triangle values
    int arr[n][n];

    // Iterate through every line and print integer(s) in it
    for (int line = 0; line < n; line++) {
        // Every line has number of integers
        // equal to line number
        for (int i = 0; i <= line; i++) {
            // First and last values in every row are 1
            if (line == i || i == 0)
                arr[line][i] = 1;

            // Other values are sum of values just
            // above and left of above
            else
                arr[line][i] = arr[line - 1][i - 1]
                               + arr[line - 1][i];
            printf("%d ", arr[line][i]);
        }
        printf("\n");
    }
}

// Driver code
int main()
{
    int n = 5;
    printPascal(n);
    return 0;
}

// Java program for Pascal's Triangle
// A O(n^2) time and O(n^2) extra
// space method for Pascal's Triangle
import java.io.*;

class GFG {
    public static void main(String[] args)
    {
        int n = 5;
        printPascal(n);
    }

    public static void printPascal(int n)
    {
        // An auxiliary array to store generated
        // pascal triangle values
        int[][] arr = new int[n][n];

        // Iterate through every line and print integer(s) in it
        for (int line = 0; line < n; line++) {
            // Every line has number of integers equal to line number
            for (int i = 0; i <= line; i++) {
                // First and last values in every row are 1
                if (line == i || i == 0)
                    arr[line][i] = 1;
                else
                    // Other values are sum of values just
                    // above and left of above
                    arr[line][i] = arr[line - 1][i - 1]
                                   + arr[line - 1][i];
                System.out.print(arr[line][i] + " ");
            }
            System.out.println("");
        }
    }
}

# Python3 program for Pascal's Triangle
# A O(n^2) time and O(n^2) extra
# space method for Pascal's Triangle

def printPascal(n: int):

    # An auxiliary array to store
    # generated pascal triangle values
    arr = [[0 for x in range(n)]
           for y in range(n)]

    # Iterate through every line
    # and print integer(s) in it
    for line in range(0, n):

        # Every line has number of
        # integers equal to line number
        for i in range(0, line + 1):

            # First and last values
            # in every row are 1
            if (i == 0 or i == line):
                arr[line][i] = 1

            # Other values are sum of values
            # just above and left of above
            else:
                arr[line][i] = (arr[line - 1][i - 1] +
                                arr[line - 1][i])
            print(arr[line][i], end=" ")
        print("\n", end="")

# Driver Code
n = 5
printPascal(n)

# This code is contributed
# by Sanju Maderna

// C# program for Pascal's Triangle
// A O(n^2) time and O(n^2) extra
// space method for Pascal's Triangle
using System;

class GFG {
    public static void printPascal(int n)
    {
        // An auxiliary array to store
        // generated pascal triangle values
        int[, ] arr = new int[n, n];

        // Iterate through every line
        // and print integer(s) in it
        for (int line = 0; line < n; line++) {
            // Every line has number of
            // integers equal to line number
            for (int i = 0; i <= line; i++) {
                // First and last values
                // in every row are 1
                if (line == i || i == 0)
                    arr[line, i] = 1;
                else
                    // Other values are sum of values
                    // just above and left of above
                    arr[line, i] = arr[line - 1, i - 1]
                                   + arr[line - 1, i];
                Console.Write(arr[line, i] + " ");
            }
            Console.WriteLine("");
        }
    }

    // Driver Code
    public static void Main()
    {
        int n = 5;
        printPascal(n);
    }
}

// This code is contributed
// by Akanksha Rai(Abby_akku)

<?php
// PHP program for Pascal's Triangle
// A O(n^2) time and O(n^2) extra space
// method for Pascal's Triangle
function printPascal($n)
{
    // An auxiliary array to store
    // generated pascal triangle values
    $arr = array(array());

    // Iterate through every line and
    // print integer(s) in it
    for ($line = 0; $line < $n; $line++) {
        // Every line has number of integers
        // equal to line number
        for ($i = 0; $i <= $line; $i++) {
            // First and last values in every row are 1
            if ($line == $i || $i == 0)
                $arr[$line][$i] = 1;

            // Other values are sum of values just
            // above and left of above
            else
                $arr[$line][$i] = $arr[$line - 1][$i - 1] +
                                  $arr[$line - 1][$i];
            echo $arr[$line][$i] . " ";
        }
        echo "\n";
    }
}

// Driver code
$n = 5;
printPascal($n);

// This code is contributed
// by Akanksha Rai
?>

<script>
// Javascript program for Pascal's Triangle
// A O(n^2) time and O(n^2) extra
// space method for Pascal's Triangle
var n = 5;
printPascal(n);

function printPascal(n)
{
    // An auxiliary array to store generated
    // pascal triangle values
    var arr = Array(n).fill(0).map(x => Array(n).fill(0));

    // Iterate through every line and print integer(s) in it
    for (var line = 0; line < n; line++) {
        // Every line has number of integers equal to line number
        for (var i = 0; i <= line; i++) {
            // First and last values in every row are 1
            if (line == i || i == 0)
                arr[line][i] = 1;
            else
                // Other values are sum of values just
                // above and left of above
                arr[line][i] = arr[line - 1][i - 1]
                               + arr[line - 1][i];
            document.write(arr[line][i] + " ");
        }
        document.write("<br>");
    }
}

// This code is contributed by 29AjayKumar
</script>

Output:

1
1 1
1 2 1
1 3 3 1
1 4 6 4 1

This method can be optimized to use O(n) extra space, as we need values only from the previous row. So we can create an auxiliary array of size n and overwrite values. Following is another method that uses only O(1) extra space.

Method 3 ( O(n^2) time and O(1) extra space )

This method is based on method 1.
We know that the ith entry in line number line is the Binomial Coefficient C(line, i), and all lines start with the value 1. The idea is to calculate C(line, i) using C(line, i-1). It can be calculated in O(1) time using the following:

C(line, i)   = line! / ( (line-i)! * i! )
C(line, i-1) = line! / ( (line - i + 1)! * (i-1)! )

We can derive the following expression from the above two expressions:

C(line, i) = C(line, i-1) * (line - i + 1) / i

So C(line, i) can be calculated from C(line, i-1) in O(1) time.

C++ C Java Python3 C# PHP Javascript

// C++ program for Pascal's Triangle
// A O(n^2) time and O(1) extra space
// function for Pascal's Triangle
#include <bits/stdc++.h>
using namespace std;

void printPascal(int n)
{
    for (int line = 1; line <= n; line++)
    {
        int C = 1; // used to represent C(line, i)
        for (int i = 1; i <= line; i++)
        {
            // The first value in a line is always 1
            cout << C << " ";
            C = C * (line - i) / i;
        }
        cout << "\n";
    }
}

// Driver code
int main()
{
    int n = 5;
    printPascal(n);
    return 0;
}

// This code is contributed by Code_Mech

// C program for Pascal's Triangle
// A O(n^2) time and O(1) extra space
// function for Pascal's Triangle
#include <stdio.h>

void printPascal(int n)
{
    for (int line = 1; line <= n; line++)
    {
        int C = 1; // used to represent C(line, i)
        for (int i = 1; i <= line; i++)
        {
            printf("%d ", C); // The first value in a line is always 1
            C = C * (line - i) / i;
        }
        printf("\n");
    }
}

// Driver code
int main()
{
    int n = 5;
    printPascal(n);
    return 0;
}

// Java program for Pascal's Triangle
// A O(n^2) time and O(1) extra
// space method for Pascal's Triangle
import java.io.*;

class GFG {

// Pascal function
public static void printPascal(int n)
{
    for (int line = 1; line <= n; line++)
    {
        int C = 1; // used to represent C(line, i)
        for (int i = 1; i <= line; i++)
        {
            // The first value in a line is always 1
            System.out.print(C + " ");
            C = C * (line - i) / i;
        }
        System.out.println();
    }
}

// Driver code
public static void main (String[] args)
{
    int n = 5;
    printPascal(n);
}
}
// This code is contributed
// by Archit Puri

# Python3 program for Pascal's Triangle
# A O(n^2) time and O(1) extra
# space method for Pascal's Triangle

# Pascal function
def printPascal(n):
    for line in range(1, n + 1):
        C = 1 # used to represent C(line, i)
        for i in range(1, line + 1):
            # The first value in a
            # line is always 1
            print(C, end = " ")
            C = C * (line - i) // i
        print("")

# Driver code
n = 5
printPascal(n)

# This code is contributed by mits

// C# program for Pascal's Triangle
// A O(n^2) time and O(1) extra
// space method for Pascal's Triangle
using System;

class GFG
{
// Pascal function
public static void printPascal(int n)
{
    for (int line = 1; line <= n; line++)
    {
        int C = 1; // used to represent C(line, i)
        for (int i = 1; i <= line; i++)
        {
            // The first value in a
            // line is always 1
            Console.Write(C + " ");
            C = C * (line - i) / i;
        }
        Console.Write("\n");
    }
}

// Driver code
public static void Main ()
{
    int n = 5;
    printPascal(n);
}
}

// This code is contributed
// by ChitraNayal

<?php
// PHP program for Pascal's Triangle
// A O(n^2) time and O(1) extra
// space method for Pascal's Triangle

// Pascal function
function printPascal($n)
{
    for ($line = 1; $line <= $n; $line++)
    {
        $C = 1; // used to represent C(line, i)
        for ($i = 1; $i <= $line; $i++)
        {
            // The first value in a
            // line is always 1
            print($C . " ");
            $C = $C * ($line - $i) / $i;
        }
        print("\n");
    }
}

// Driver code
$n = 5;
printPascal($n);

// This code is contributed by mits
?>

<script>
// JavaScript program for Pascal's Triangle
// A O(n^2) time and O(1) extra
// space method for Pascal's Triangle

// Pascal function
function printPascal(n)
{
    for (var line = 1; line <= n; line++)
    {
        var C = 1; // used to represent C(line, i)
        for (var i = 1; i <= line; i++)
        {
            // The first value in a line is always 1
            document.write(C + " ");
            C = C * (line - i) / i;
        }
        document.write("<br>");
    }
}

// Driver code
var n = 5;
printPascal(n);

// This code is contributed by 29AjayKumar
</script>

Output:

1
1 1
1 2 1
1 3 3 1
1 4 6 4 1

So method 3 is the best method among all, but it may cause integer overflow for large values of n, as it multiplies two integers to obtain values.

This article is compiled by Rahul and reviewed by GeeksforGeeks team. Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above.
[ { "code": null, "e": 24291, "s": 24263, "text": "\n21 Oct, 2021" }, { "code": null, "e": 24522, "s": 24291, "text": "Pascal’s triangle is a triangular array of the binomial coefficients. Write a function that takes an integer value n as input and prints first n lines of the Pascal’s triangle. Following are the first 6 rows of Pascal’s Triangle. " }, { "code": null, "e": 24573, "s": 24522, "text": "1 \n1 1 \n1 2 1 \n1 3 3 1 \n1 4 6 4 1 \n1 5 10 10 5 1 " }, { "code": null, "e": 24943, "s": 24575, "text": "Method 1 ( O(n^3) time complexity ) Number of entries in every line is equal to line number. For example, the first line has “1”, the second line has “1 1”, the third line has “1 2 1”,.. and so on. Every entry in a line is value of a Binomial Coefficient. The value of ith entry in line number line is C(line, i). The value can be calculated using following formula. " }, { "code": null, "e": 24986, "s": 24943, "text": "C(line, i) = line! / ( (line-i)! * i! ) " }, { "code": null, "e": 25087, "s": 24986, "text": "A simple method is to run two loops and calculate the value of Binomial Coefficient in inner loop. 
" }, { "code": null, "e": 25091, "s": 25087, "text": "C++" }, { "code": null, "e": 25093, "s": 25091, "text": "C" }, { "code": null, "e": 25098, "s": 25093, "text": "Java" }, { "code": null, "e": 25106, "s": 25098, "text": "Python3" }, { "code": null, "e": 25109, "s": 25106, "text": "C#" }, { "code": null, "e": 25113, "s": 25109, "text": "PHP" }, { "code": null, "e": 25124, "s": 25113, "text": "Javascript" }, { "code": "// C++ code for Pascal's Triangle#include <iostream>using namespace std; // See https://www.geeksforgeeks.org/space-and-time-efficient-binomial-coefficient/// for details of this functionint binomialCoeff(int n, int k); // Function to print first// n lines of Pascal's// Trianglevoid printPascal(int n){ // Iterate through every line and // print entries in it for (int line = 0; line < n; line++) { // Every line has number of // integers equal to line // number for (int i = 0; i <= line; i++) cout <<\" \"<< binomialCoeff(line, i); cout <<\"\\n\"; }} // See https://www.geeksforgeeks.org/space-and-time-efficient-binomial-coefficient/// for details of this functionint binomialCoeff(int n, int k){ int res = 1; if (k > n - k) k = n - k; for (int i = 0; i < k; ++i) { res *= (n - i); res /= (i + 1); } return res;} // Driver programint main(){ int n = 7; printPascal(n); return 0;} // This code is contributed by shivanisinghss2110", "e": 26164, "s": 25124, "text": null }, { "code": "// C++ code for Pascal's Triangle#include <stdio.h> // See https://www.geeksforgeeks.org/space-and-time-efficient-binomial-coefficient/// for details of this functionint binomialCoeff(int n, int k); // Function to print first// n lines of Pascal's// Trianglevoid printPascal(int n){ // Iterate through every line and // print entries in it for (int line = 0; line < n; line++) { // Every line has number of // integers equal to line // number for (int i = 0; i <= line; i++) printf(\"%d \", binomialCoeff(line, i)); printf(\"\\n\"); }} // See 
https://www.geeksforgeeks.org/space-and-time-efficient-binomial-coefficient/// for details of this functionint binomialCoeff(int n, int k){ int res = 1; if (k > n - k) k = n - k; for (int i = 0; i < k; ++i) { res *= (n - i); res /= (i + 1); } return res;} // Driver programint main(){ int n = 7; printPascal(n); return 0;}", "e": 27155, "s": 26164, "text": null }, { "code": "// Java code for Pascal's Triangleimport java.io.*; class GFG { // Function to print first // n lines of Pascal's Triangle static void printPascal(int n) { // Iterate through every line // and print entries in it for (int line = 0; line < n; line++) { // Every line has number of // integers equal to line number for (int i = 0; i <= line; i++) System.out.print(binomialCoeff (line, i)+\" \"); System.out.println(); } } // Link for details of this function // https://www.geeksforgeeks.org/space-and-time-efficient-binomial-coefficient/ static int binomialCoeff(int n, int k) { int res = 1; if (k > n - k) k = n - k; for (int i = 0; i < k; ++i) { res *= (n - i); res /= (i + 1); } return res; } // Driver code public static void main(String args[]) { int n = 7; printPascal(n); }} /*This code is contributed by Nikita Tiwari.*/", "e": 28244, "s": 27155, "text": null }, { "code": "# Python 3 code for Pascal's Triangle# A simple O(n^3)# program for# Pascal's Triangle # Function to print# first n lines of# Pascal's Triangledef printPascal(n) : # Iterate through every line # and print entries in it for line in range(0, n) : # Every line has number of # integers equal to line # number for i in range(0, line + 1) : print(binomialCoeff(line, i), \" \", end = \"\") print() # See https://www.geeksforgeeks.org/space-and-time-efficient-binomial-coefficient/# for details of this functiondef binomialCoeff(n, k) : res = 1 if (k > n - k) : k = n - k for i in range(0 , k) : res = res * (n - i) res = res // (i + 1) return res # Driver programn = 7printPascal(n) # This code is contributed by Nikita Tiwari.", "e": 29091, "s": 
28244, "text": null }, { "code": "// C# code for Pascal's Triangleusing System; class GFG { // Function to print first // n lines of Pascal's Triangle static void printPascal(int n) { // Iterate through every line // and print entries in it for (int line = 0; line < n; line++) { // Every line has number of // integers equal to line number for (int i = 0; i <= line; i++) Console.Write(binomialCoeff (line, i)+\" \"); Console.WriteLine(); } } // Link for details of this function // https://www.geeksforgeeks.org/space-and-time-efficient-binomial-coefficient/ static int binomialCoeff(int n, int k) { int res = 1; if (k > n - k) k = n - k; for (int i = 0; i < k; ++i) { res *= (n - i); res /= (i + 1); } return res; } // Driver code public static void Main() { int n = 7; printPascal(n); }} /*This code is contributed by vt_m.*/", "e": 30148, "s": 29091, "text": null }, { "code": "<?php// PHP implementation for// Pascal's Triangle // for details of this functionfunction binomialCoeff($n, $k){ $res = 1; if ($k > $n - $k) $k = $n - $k; for ($i = 0; $i < $k; ++$i) { $res *= ($n - $i); $res /= ($i + 1); }return $res;} // Function to print first// n lines of Pascal's// Trianglefunction printPascal($n){ // Iterate through every line and // print entries in it for ($line = 0; $line < $n; $line++) { // Every line has number of // integers equal to line // number for ($i = 0; $i <= $line; $i++) echo \"\".binomialCoeff($line, $i).\" \"; echo \"\\n\"; }} // Driver Code$n=7;printPascal($n); // This code is contributed by Mithun Kumar?>", "e": 30911, "s": 30148, "text": null }, { "code": "<script> // Javascript code for Pascal's Triangle // Function to print first // n lines of Pascal's Triangle function printPascal(n) { // Iterate through every line // and print entries in it for (let line = 0; line < n; line++) { // Every line has number of // integers equal to line number for (let i = 0; i <= line; i++) document.write(binomialCoeff (line, i)+\" \"); document.write(\"<br />\"); } } // 
Link for details of this function // https://www.geeksforgeeks.org/space-and-time-efficient-binomial-coefficient/ function binomialCoeff(n, k) { let res = 1; if (k > n - k) k = n - k; for (let i = 0; i < k; ++i) { res *= (n - i); res /= (i + 1); } return res; } // Driver Code let n = 7; printPascal(n); </script>", "e": 31871, "s": 30911, "text": null }, { "code": null, "e": 31882, "s": 31871, "text": "Output : " }, { "code": null, "e": 31950, "s": 31882, "text": "1 \n1 1 \n1 2 1 \n1 3 3 1 \n1 4 6 4 1 \n1 5 10 10 5 1 \n1 6 15 20 15 6 1 " }, { "code": null, "e": 31972, "s": 31950, "text": "Auxiliary Space: O(1)" }, { "code": null, "e": 32347, "s": 31972, "text": "Time complexity of this method is O(n^3). Following are optimized methods.Method 2( O(n^2) time and O(n^2) extra space ) If we take a closer at the triangle, we observe that every entry is sum of the two values above it. So we can create a 2D array that stores previously generated values. To generate a value in a line, we can use the previously stored values from array. 
" }, { "code": null, "e": 32353, "s": 32349, "text": "C++" }, { "code": null, "e": 32355, "s": 32353, "text": "C" }, { "code": null, "e": 32360, "s": 32355, "text": "Java" }, { "code": null, "e": 32368, "s": 32360, "text": "Python3" }, { "code": null, "e": 32371, "s": 32368, "text": "C#" }, { "code": null, "e": 32375, "s": 32371, "text": "PHP" }, { "code": null, "e": 32386, "s": 32375, "text": "Javascript" }, { "code": "// C++ program for Pascal’s Triangle// A O(n^2) time and O(n^2) extra space// method for Pascal's Triangle#include <bits/stdc++.h>using namespace std; void printPascal(int n){ // An auxiliary array to store // generated pascal triangle values int arr[n][n]; // Iterate through every line and // print integer(s) in it for (int line = 0; line < n; line++) { // Every line has number of integers // equal to line number for (int i = 0; i <= line; i++) { // First and last values in every row are 1 if (line == i || i == 0) arr[line][i] = 1; // Other values are sum of values just // above and left of above else arr[line][i] = arr[line - 1][i - 1] + arr[line - 1][i]; cout << arr[line][i] << \" \"; } cout << \"\\n\"; }} // Driver codeint main(){ int n = 5; printPascal(n); return 0;} // This code is Contributed by Code_Mech.", "e": 33378, "s": 32386, "text": null }, { "code": "// C program for Pascal’s Triangle// A O(n^2) time and O(n^2) extra space// method for Pascal's Trianglevoid printPascal(int n){// An auxiliary array to store// generated pascal triangle valuesint arr[n][n]; // Iterate through every line and print integer(s) in itfor (int line = 0; line < n; line++){ // Every line has number of integers // equal to line number for (int i = 0; i <= line; i++) { // First and last values in every row are 1 if (line == i || i == 0) arr[line][i] = 1; // Other values are sum of values just // above and left of above else arr[line][i] = arr[line-1][i-1] + arr[line-1][i]; printf(\"%d \", arr[line][i]); } printf(\"\\n\");}}// Driver codeint main(){int n = 5; 
printPascal(n); return 0;}", "e": 34148, "s": 33378, "text": null }, { "code": "// java program for Pascal's Triangle// A O(n^2) time and O(n^2) extra// space method for Pascal's Triangleimport java.io.*; class GFG { public static void main (String[] args) { int n = 5; printPascal(n); } public static void printPascal(int n){// An auxiliary array to store generated pascal triangle valuesint[][] arr = new int[n][n]; // Iterate through every line and print integer(s) in itfor (int line = 0; line < n; line++){ // Every line has number of integers equal to line number for (int i = 0; i <= line; i++) { // First and last values in every row are 1 if (line == i || i == 0) arr[line][i] = 1; else // Other values are sum of values just above and left of above arr[line][i] = arr[line-1][i-1] + arr[line-1][i]; System.out.print(arr[line][i]); } System.out.println(\"\");}}}", "e": 34999, "s": 34148, "text": null }, { "code": "# Python3 program for Pascal's Triangle # A O(n^2) time and O(n^2) extra# space method for Pascal's Triangledef printPascal(n:int): # An auxiliary array to store # generated pascal triangle values arr = [[0 for x in range(n)] for y in range(n)] # Iterate through every line # and print integer(s) in it for line in range (0, n): # Every line has number of # integers equal to line number for i in range (0, line + 1): # First and last values # in every row are 1 if(i is 0 or i is line): arr[line][i] = 1 print(arr[line][i], end = \" \") # Other values are sum of values # just above and left of above else: arr[line][i] = (arr[line - 1][i - 1] + arr[line - 1][i]) print(arr[line][i], end = \" \") print(\"\\n\", end = \"\") # Driver Coden = 5printPascal(n) # This code is contributed# by Sanju Maderna", "e": 36026, "s": 34999, "text": null }, { "code": "// C# program for Pascal's Triangle// A O(n^2) time and O(n^2) extra// space method for Pascal's Triangleusing System; class GFG{public static void printPascal(int n){ // An auxiliary array to store// generated pascal 
triangle valuesint[,] arr = new int[n, n]; // Iterate through every line// and print integer(s) in itfor (int line = 0; line < n; line++){ // Every line has number of // integers equal to line number for (int i = 0; i <= line; i++) { // First and last values // in every row are 1 if (line == i || i == 0) arr[line, i] = 1; else // Other values are sum of values // just above and left of above arr[line, i] = arr[line - 1, i - 1] + arr[line - 1, i]; Console.Write(arr[line, i]); }Console.WriteLine(\"\");}} // Driver Codepublic static void Main (){ int n = 5; printPascal(n);}} // This code is contributed// by Akanksha Rai(Abby_akku)", "e": 36971, "s": 36026, "text": null }, { "code": "<?php// PHP program for Pascal’s Triangle// A O(n^2) time and O(n^2) extra space// method for Pascal's Trianglefunction printPascal($n){ // An auxiliary array to store // generated pascal triangle values $arr = array(array()); // Iterate through every line and // print integer(s) in it for ($line = 0; $line < $n; $line++) { // Every line has number of integers // equal to line number for ($i = 0; $i <= $line; $i++) { // First and last values in every row are 1 if ($line == $i || $i == 0) $arr[$line][$i] = 1; // Other values are sum of values just // above and left of above else $arr[$line][$i] = $arr[$line - 1][$i - 1] + $arr[$line - 1][$i]; echo $arr[$line][$i] . 
\" \"; } echo \"\\n\"; }} // Driver code$n = 5;printPascal($n); // This code is contributed// by Akanksha Rai?>", "e": 37966, "s": 36971, "text": null }, { "code": "<script> // javascript program for Pascal's Triangle// A O(n^2) time and O(n^2) extra// space method for Pascal's Trianglevar n = 5;printPascal(n); function printPascal(n){// An auxiliary array to store generated pascal triangle valuesarr = a = Array(n).fill(0).map(x => Array(n).fill(0)); // Iterate through every line and print integer(s) in itfor (line = 0; line < n; line++){ // Every line has number of integers equal to line number for (i = 0; i <= line; i++) { // First and last values in every row are 1 if (line == i || i == 0) arr[line][i] = 1; else // Other values are sum of values just above and left of above arr[line][i] = arr[line-1][i-1] + arr[line-1][i]; document.write(arr[line][i]); } document.write(\"<br>\");}} // This code is contributed by 29AjayKumar </script>", "e": 38794, "s": 37966, "text": null }, { "code": null, "e": 38804, "s": 38794, "text": "Output: " }, { "code": null, "e": 38839, "s": 38804, "text": "1 \n1 1 \n1 2 1 \n1 3 3 1 \n1 4 6 4 1 " }, { "code": null, "e": 39364, "s": 38839, "text": "This method can be optimized to use O(n) extra space as we need values only from previous row. So we can create an auxiliary array of size n and overwrite values. Following is another method uses only O(1) extra space.Method 3 ( O(n^2) time and O(1) extra space ) This method is based on method 1. We know that ith entry in a line number line is Binomial Coefficient C(line, i) and all lines start with value 1. The idea is to calculate C(line, i) using C(line, i-1). It can be calculated in O(1) time using the following. " }, { "code": null, "e": 39632, "s": 39364, "text": "C(line, i) = line! / ( (line-i)! * i! )\nC(line, i-1) = line! / ( (line - i + 1)! * (i-1)! 
)\nWe can derive following expression from above two expressions.\nC(line, i) = C(line, i-1) * (line - i + 1) / i\n\nSo C(line, i) can be calculated from C(line, i-1) in O(1) time" }, { "code": null, "e": 39638, "s": 39634, "text": "C++" }, { "code": null, "e": 39640, "s": 39638, "text": "C" }, { "code": null, "e": 39645, "s": 39640, "text": "Java" }, { "code": null, "e": 39653, "s": 39645, "text": "Python3" }, { "code": null, "e": 39656, "s": 39653, "text": "C#" }, { "code": null, "e": 39660, "s": 39656, "text": "PHP" }, { "code": null, "e": 39671, "s": 39660, "text": "Javascript" }, { "code": "// C++ program for Pascal’s Triangle// A O(n^2) time and O(1) extra space// function for Pascal's Triangle#include <bits/stdc++.h> using namespace std;void printPascal(int n){ for (int line = 1; line <= n; line++){ int C = 1; // used to represent C(line, i) for (int i = 1; i <= line; i++) { // The first value in a line is always 1 cout<< C<<\" \"; C = C * (line - i) / i; } cout<<\"\\n\";}} // Driver codeint main(){ int n = 5; printPascal(n); return 0;} // This code is contributed by Code_Mech", "e": 40222, "s": 39671, "text": null }, { "code": "// C program for Pascal’s Triangle// A O(n^2) time and O(1) extra space// function for Pascal's Trianglevoid printPascal(int n){for (int line = 1; line <= n; line++){ int C = 1; // used to represent C(line, i) for (int i = 1; i <= line; i++) { printf(\"%d \", C); // The first value in a line is always 1 C = C * (line - i) / i; } printf(\"\\n\");}}// Driver codeint main(){int n = 5; printPascal(n); return 0;}", "e": 40656, "s": 40222, "text": null }, { "code": "// Java program for Pascal's Triangle// A O(n^2) time and O(1) extra// space method for Pascal's Triangleimport java.io.*;class GFG { //Pascal functionpublic static void printPascal(int n){ for(int line = 1; line <= n; line++) { int C=1;// used to represent C(line, i) for(int i = 1; i <= line; i++) { // The first value in a line is always 1 System.out.print(C+\" \"); C = C * 
(line - i) / i; } System.out.println(); }} // Driver codepublic static void main (String[] args) { int n = 5; printPascal(n);}}// This code is contributed// by Archit Puri", "e": 41264, "s": 40656, "text": null }, { "code": "# Python3 program for Pascal's Triangle# A O(n^2) time and O(1) extra# space method for Pascal's Triangle # Pascal functiondef printPascal(n): for line in range(1, n + 1): C = 1; # used to represent C(line, i) for i in range(1, line + 1): # The first value in a # line is always 1 print(C, end = \" \"); C = int(C * (line - i) / i); print(\"\"); # Driver coden = 5;printPascal(n); # This code is contributed by mits", "e": 41758, "s": 41264, "text": null }, { "code": "// C# program for Pascal's Triangle// A O(n^2) time and O(1) extra// space method for Pascal's Triangleusing System;class GFG{ // Pascal functionpublic static void printPascal(int n){ for(int line = 1; line <= n; line++) { int C = 1;// used to represent C(line, i) for(int i = 1; i <= line; i++) { // The first value in a // line is always 1 Console.Write(C + \" \"); C = C * (line - i) / i; } Console.Write(\"\\n\") ; }} // Driver codepublic static void Main (){ int n = 5; printPascal(n);}} // This code is contributed// by ChitraNayal", "e": 42369, "s": 41758, "text": null }, { "code": "<?php// PHP program for Pascal's Triangle// A O(n^2) time and O(1) extra// space method for Pascal's Triangle // Pascal functionfunction printPascal($n){ for($line = 1; $line <= $n; $line++) { $C = 1;// used to represent C(line, i) for($i = 1; $i <= $line; $i++) { // The first value in a // line is always 1 print($C . 
\" \"); $C = $C * ($line - $i) / $i; } print(\"\\n\"); }} // Driver code$n = 5;printPascal($n); // This code is contributed by mits?>", "e": 42906, "s": 42369, "text": null }, { "code": "<script> // JavaScript program for Pascal's Triangle// A O(n^2) time and O(1) extra// space method for Pascal's Triangle //Pascal functionfunction printPascal(n){ for(line = 1; line <= n; line++) { var C=1;// used to represent C(line, i) for(i = 1; i <= line; i++) { // The first value in a line is always 1 document.write(C+\" \"); C = C * (line - i) / i; } document.write(\"<br>\"); }} // Driver codevar n = 5;printPascal(n); // This code is contributed by 29AjayKumar </script>", "e": 43437, "s": 42906, "text": null }, { "code": null, "e": 43446, "s": 43437, "text": "Output: " }, { "code": null, "e": 43481, "s": 43446, "text": "1 \n1 1 \n1 2 1 \n1 3 3 1 \n1 4 6 4 1 " }, { "code": null, "e": 43629, "s": 43481, "text": "So method 3 is the best method among all, but it may cause integer overflow for large values of n as it multiplies two integers to obtain values. " }, { "code": null, "e": 44444, "s": 43629, "text": "YouTubeGeeksforGeeks500K subscribersPascal Triangle | GeeksforGeeksWatch laterShareCopy linkInfoShoppingTap to unmuteIf playback doesn't begin shortly, try restarting your device.You're signed outVideos you watch may be added to the TV's watch history and influence TV recommendations. To avoid this, cancel and sign in to YouTube on your computer.CancelConfirmMore videosMore videosSwitch cameraShareInclude playlistAn error occurred while retrieving sharing information. 
Please try again later.Watch on0:000:000:00 / 14:17•Live•<div class=\"player-unavailable\"><h1 class=\"message\">An error occurred.</h1><div class=\"submessage\"><a href=\"https://www.youtube.com/watch?v=GJ06cufKwm8\" target=\"_blank\">Try watching this video on www.youtube.com</a>, or enable JavaScript if it is disabled in your browser.</div></div>" }, { "code": null, "e": 44640, "s": 44444, "text": "This article is compiled by Rahul and reviewed by GeeksforGeeks team. Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above. " }, { "code": null, "e": 44653, "s": 44640, "text": "Mithun Kumar" }, { "code": null, "e": 44665, "s": 44653, "text": "Ravi_Maurya" }, { "code": null, "e": 44678, "s": 44665, "text": "Akanksha_Rai" }, { "code": null, "e": 44690, "s": 44678, "text": "ARCHIT PURI" }, { "code": null, "e": 44711, "s": 44690, "text": "Smitha Dinesh Semwal" }, { "code": null, "e": 44721, "s": 44711, "text": "Code_Mech" }, { "code": null, "e": 44729, "s": 44721, "text": "sanju88" }, { "code": null, "e": 44740, "s": 44729, "text": "nidhi_biet" }, { "code": null, "e": 44756, "s": 44740, "text": "souravghosh0416" }, { "code": null, "e": 44768, "s": 44756, "text": "29AjayKumar" }, { "code": null, "e": 44787, "s": 44768, "text": "surindertarika1234" }, { "code": null, "e": 44806, "s": 44787, "text": "shivanisinghss2110" }, { "code": null, "e": 44822, "s": 44806, "text": "subhammahato348" }, { "code": null, "e": 44828, "s": 44822, "text": "Adobe" }, { "code": null, "e": 44835, "s": 44828, "text": "Amazon" }, { "code": null, "e": 44856, "s": 44835, "text": "binomial coefficient" }, { "code": null, "e": 44873, "s": 44856, "text": "pattern-printing" }, { "code": null, "e": 44880, "s": 44873, "text": "Arrays" }, { "code": null, "e": 44893, "s": 44880, "text": "Mathematical" }, { "code": null, "e": 44900, "s": 44893, "text": "Amazon" }, { "code": null, "e": 44906, "s": 44900, "text": "Adobe" }, { "code": null, 
"e": 44913, "s": 44906, "text": "Arrays" }, { "code": null, "e": 44926, "s": 44913, "text": "Mathematical" }, { "code": null, "e": 44943, "s": 44926, "text": "pattern-printing" }, { "code": null, "e": 45041, "s": 44943, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 45050, "s": 45041, "text": "Comments" }, { "code": null, "e": 45063, "s": 45050, "text": "Old Comments" }, { "code": null, "e": 45086, "s": 45063, "text": "Introduction to Arrays" }, { "code": null, "e": 45118, "s": 45086, "text": "Multidimensional Arrays in Java" }, { "code": null, "e": 45203, "s": 45118, "text": "Given an array A[] and a number x, check for pair in A[] with sum as x (aka Two Sum)" }, { "code": null, "e": 45224, "s": 45203, "text": "Linked List vs Array" }, { "code": null, "e": 45269, "s": 45224, "text": "Python | Using 2D arrays/lists the right way" }, { "code": null, "e": 45329, "s": 45269, "text": "Write a program to print all permutations of a given string" }, { "code": null, "e": 45344, "s": 45329, "text": "C++ Data Types" }, { "code": null, "e": 45387, "s": 45344, "text": "Set in C++ Standard Template Library (STL)" }, { "code": null, "e": 45411, "s": 45387, "text": "Merge two sorted arrays" } ]
First neural network for beginners explained (with code) | by Arthur Arnx | Towards Data Science
So you want to create your first artificial neural network, or simply discover this subject, but have no idea where to begin? Follow this quick guide to understand all the steps!

Based on nature, neural networks are the usual representation we make of the brain: neurons interconnected with other neurons, which form a network. A single piece of information transits through a lot of them before becoming an actual thing, like "move the hand to pick up this pencil".

The operation of a complete neural network is straightforward: one enters variables as inputs (for example an image, if the neural network is supposed to tell what is on an image), and after some calculations, an output is returned (following the first example, giving an image of a cat should return the word "cat").

Now, you should know that artificial neural networks are usually arranged in columns, so that a neuron of column n can only be connected to neurons from columns n-1 and n+1. There are a few types of networks that use a different architecture, but we will focus on the simplest for now. So, we can represent an artificial neural network like that:

Neural networks can usually be read from left to right. Here, the first layer is the layer in which inputs are entered. There are 2 internal layers (called hidden layers) that do some math, and one last layer that contains all the possible outputs. Don't bother with the "+1"s at the bottom of every column. It is something called "bias" and we'll talk about that later.

By the way, the term "deep learning" comes from neural networks that contain several hidden layers, also called "deep neural networks". The network in Figure 1 can be considered as one.

The operations done by each neuron are pretty simple:

First, it adds up the values of every neuron from the previous column it is connected to. In Figure 2, there are 3 inputs (x1, x2, x3) coming to the neuron, so 3 neurons of the previous column are connected to our neuron.
Each of these values is multiplied, before being added, by another variable called a "weight" (w1, w2, w3), which determines the connection between the two neurons. Each connection of neurons has its own weight, and those are the only values that will be modified during the learning process. Moreover, a bias value may be added to the total value calculated. It is not a value coming from a specific neuron, and it is chosen before the learning phase, but it can be useful for the network.

After all those summations, the neuron finally applies a function called the "activation function" to the obtained value. The so-called activation function usually serves to turn the total value calculated before into a number between 0 and 1 (done for example by a sigmoid function, shown in Figure 3). Other functions exist and may change the limits of our function, but they keep the same aim of limiting the value.

That's all a neuron does! Take all values from connected neurons multiplied by their respective weights, add them, and apply an activation function. Then, the neuron is ready to send its new value to other neurons. After every neuron of a column has done this, the neural network passes to the next column. In the end, the last values obtained should be usable to determine the desired output.

Now that we understand what a neuron does, we could possibly create any network we want. However, there are other operations to implement to make a neural network learn.

Yep, creating variables and making them interact with each other is great, but that is not enough to make the whole neural network learn by itself. We need to prepare a lot of data to give to our network. Those data include the inputs and the outputs expected from the neural network.

Let's take a look at how the learning process works:

First of all, remember that when an input is given to the neural network, it returns an output.
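For a single neuron, that returned output is just the weighted sum of its inputs plus the bias, passed through the activation function. A toy sketch in Python (the function names and the numbers are illustrative, not from the article):

```python
import math

def sigmoid(x):
    # squashes any value into the (0, 1) range, as described for Figure 3
    return 1 / (1 + math.exp(-x))

def neuron_output(inputs, neuron_weights, neuron_bias):
    # weighted sum of the connected neurons' values, plus the bias,
    # passed through the activation function
    total = sum(x * w for x, w in zip(inputs, neuron_weights)) + neuron_bias
    return sigmoid(total)

# 3 inputs (x1, x2, x3), one weight per connection, one bias value
out = neuron_output([0.5, 0.9, 0.2], [0.4, -0.6, 1.1], 0.3)
print(out)  # a single value between 0 and 1
```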
On the first try, it can’t get the right output on its own (except by luck), and that is why, during the learning phase, every input comes with its label, explaining what output the neural network should have guessed. If the choice is the good one, the current parameters are kept and the next input is given. However, if the obtained output doesn’t match the label, the weights are changed. Those are the only variables that can be changed during the learning phase. This process may be imagined as multiple buttons that are turned in different directions every time an input isn’t guessed correctly. To determine which weight is best to modify, a particular process called “backpropagation” is done. We won’t linger too much on that, since the neural network we will build doesn’t use this exact process, but it consists of going back through the neural network and inspecting every connection to check how the output would behave according to a change in the weight. Finally, there is one last parameter to know about in order to control the way the neural network learns: the “learning rate”. The name says it all; this value determines at what speed the neural network will learn, or more specifically, how much it will modify a weight at each step, little by little or by bigger steps. 1 is generally a good value for this parameter. Okay, we know the basics, so let’s talk about the neural network we will create. The one explained here is called a Perceptron and is the first neural network ever created. It consists of 2 neurons in the input column and 1 neuron in the output column. This configuration allows the creation of a simple classifier to distinguish 2 groups. To better understand the possibilities and the limitations, let’s see a quick example (which doesn’t have much interest except for understanding): Let’s say you want your neural network to be able to return outputs according to the rules of the “inclusive or”. Reminder: if A is true and B is true, then A or B is true.
if A is true and B is false, then A or B is true. if A is false and B is true, then A or B is true. if A is false and B is false, then A or B is false. If you replace the “true”s by 1 and the “false”s by 0 and put the 4 possibilities as points with coordinates on a plane, then you realize the two final groups, “false” and “true”, may be separated by a single line. This is what a Perceptron can do. On the other hand, if we check the case of the “exclusive or” (in which the case “true or true” (the point (1,1)) is false), then we can see that a single line cannot separate the two groups, and a Perceptron isn’t able to deal with this problem. So, the Perceptron is indeed not a very efficient neural network, but it is simple to create and may still be useful as a classifier. Let’s create a neural network from scratch with Python (3.x in the example below).

import numpy, random, os

lr = 1  # learning rate
bias = 1  # value of bias
# weights generated in a list (3 weights in total for 2 neurons and the bias)
weights = [random.random(), random.random(), random.random()]

The beginning of the program just imports libraries, defines the values of the parameters, and creates a list which contains the values of the weights that will be modified (those are generated randomly).

def Perceptron(input1, input2, output):
    outputP = input1 * weights[0] + input2 * weights[1] + bias * weights[2]
    if outputP > 0:  # activation function (here Heaviside)
        outputP = 1
    else:
        outputP = 0
    error = output - outputP
    weights[0] += error * input1 * lr
    weights[1] += error * input2 * lr
    weights[2] += error * bias * lr

Here we create a function which defines the work of the output neuron. It takes 3 parameters (the 2 values of the input neurons and the expected output). “outputP” is the variable corresponding to the output given by the Perceptron. Then we calculate the error, used to modify the weights of every connection to the output neuron right after.
for i in range(50):
    Perceptron(1, 1, 1)  # True or true
    Perceptron(1, 0, 1)  # True or false
    Perceptron(0, 1, 1)  # False or true
    Perceptron(0, 0, 0)  # False or false

We create a loop that makes the neural network repeat every situation several times. This part is the learning phase. The number of iterations is chosen according to the precision we want. However, be aware that too many iterations could lead the network to over-fitting, which causes it to focus too much on the treated examples, so it couldn’t get a right output on a case it didn’t see during its learning phase. However, our case here is a bit special, since there are only 4 possibilities, and we give the neural network all of them during its learning phase. A Perceptron is normally supposed to give a correct output without ever having seen the case it is treating.

x = int(input())
y = int(input())
outputP = x * weights[0] + y * weights[1] + bias * weights[2]
if outputP > 0:  # activation function
    outputP = 1
else:
    outputP = 0
print(x, "or", y, "is : ", outputP)

Finally, we can ask the user to enter the values themselves to check if the Perceptron is working. This is the testing phase. The Heaviside activation function is interesting to use in this case, since it maps all values to exactly 0 or 1, and we are looking for a false or true result. We could try with a sigmoid function and obtain a decimal number between 0 and 1, normally very close to one of those limits.

outputP = 1 / (1 + numpy.exp(-outputP))  # sigmoid function

We could also save the weights that the neural network just calculated in a file, to use them later without doing another learning phase. This is done for way bigger projects, in which that phase can last days or weeks. That’s it! You’ve built your own complete neural network. You created it, made it learn, and checked its capacities. Your Perceptron can now be modified to use it on another problem.
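Putting the fragments above together gives one self-contained script. This is a sketch of the same Perceptron, with the interactive input() test replaced by a loop over the four cases (the predict helper is our own addition):

```python
import random

lr = 1       # learning rate
bias = 1     # value of the bias input
weights = [random.random(), random.random(), random.random()]

def perceptron(input1, input2, output):
    # Weighted sum of the two inputs and the bias
    outputP = input1 * weights[0] + input2 * weights[1] + bias * weights[2]
    outputP = 1 if outputP > 0 else 0   # Heaviside activation
    error = output - outputP            # 0 when the guess is already right
    weights[0] += error * input1 * lr
    weights[1] += error * input2 * lr
    weights[2] += error * bias * lr

def predict(x, y):
    total = x * weights[0] + y * weights[1] + bias * weights[2]
    return 1 if total > 0 else 0

# Learning phase: repeat the four "inclusive or" cases
for _ in range(50):
    perceptron(1, 1, 1)
    perceptron(1, 0, 1)
    perceptron(0, 1, 1)
    perceptron(0, 0, 0)

# Testing phase: check every case once the weights have settled
for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, "or", b, "is:", predict(a, b))
```

Because “inclusive or” is linearly separable, the perceptron update rule is guaranteed to converge here, and 50 passes over the 4 cases is more than enough regardless of the random starting weights.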
Just change the points given during the iterations, adjust the number of loops if your case is more complex, and let your Perceptron do the classification. Do you want to list 2 types of trees in the nearest forest and be able to determine if a new tree is type A or B? Choose 2 features that can dissociate both types (for example, height and width), and create some points for the Perceptron to place on the plane. Let it deduce a way to separate the 2 groups, and enter any new tree’s point to know which type it is. You could later expand your knowledge and learn about bigger and deeper neural networks, which are very powerful! There are multiple aspects we didn’t treat, or treated just enough for you to get the basics, so don’t hesitate to go further. I would love to write about more complex neural networks, so stay tuned! Thanks for reading! I hope this little guide was useful; if you have any question and/or suggestion, let me know in the comments.
How to Get Current time in Golang? - GeeksforGeeks
21 Apr, 2020

With the help of the time.Now() function, we can get the current time in Golang by importing the time module.

Syntax: time.Now()
Return: Returns the current date and time.

Example #1: In this example, we can see that by using the time.Now() function, we are able to get the current date and time.

// Golang program to get the current time
package main

// Here "fmt" is formatted IO which
// is same as C’s printf and scanf.
import "fmt"

// importing time module
import "time"

// Main function
func main() {

    // Using time.Now() function.
    dt := time.Now()
    fmt.Println("Current date and time is: ", dt.String())
}

Output:

Current date and time is: 2009-11-10 23:00:00 +0000 UTC m=+0.000000001

Example #2:

// Golang program to get the current time
package main

// Here "fmt" is formatted IO which
// is same as C’s printf and scanf.
import "fmt"

// importing time module
import "time"

// Main function
func main() {

    // Using time.Now() function.
    dt := time.Now()
    fmt.Println(dt.Format("01-02-2006 15:04:05"))
}

Output:

11-10-2009 23:00:00
Structuring Your Dash App. Choosing the right structure makes it... | by Edward Krueger | Towards Data Science
By Edward Krueger and Erin Oefelein

Put simply, Dash is a Python package that allows for the creation of “dashboards” in pure Python without using HTML, CSS or JavaScript. Since it’s programmable, it’s far more powerful than other dashboarding options.

In recent years there has been a major push towards making Python scripts into web applications. To some extent, even Jupyter Notebooks push Python in this direction. However, new tools, including Dash and Streamlit, make it easier to distribute Python by allowing users to write web applications as scripts.

You’ve completed your project and created an app to showcase your results. Now, you’d like to share your project insights by deploying your Dash app. This article will walk you through how to structure your app into a form that is easy to deploy!

If this tooling seems like a lot, it is. However, you don’t need to write it all out if you don’t want to. Not only have we provided the code in the following repo, but we’ve also made it a template. So, you can just hit the button “Use this template,” follow the instructions and start editing the code in the app/ directory.

We will be using a sample app created to showcase a project. Let’s take a quick look at the app we are deploying. The project by Erin is the result of scraping tiny house listings, storing the results as a CSV and visualizing the results in a Dash app. Some of the plots require Mapbox and, therefore, a Mapbox token.

Before we get started, here is an overview of what we’ll cover.
We’ll cover:

Enough about Pipenv to get started
How to use Pipenv to install dependencies
How and why we install development dependencies
How and why we use pre-commit hooks
How to set environment variables
How to structure the repository
How to test all of our tooling

Managing your app’s virtual environment with the pipenv package is highly recommended, as pipenv allows additional functionality such as:

the ability to automatically source an environment (.env) file.
the ability to specify app requirements in the Pipfile, circumventing the need to use them in production.
the specification of the Python version used in the virtual environment.

The functionalities provided by pipenv make it much easier to deploy your app. Should you wish to install pipenv at this point, you can do so by running:

pip install pipenv

Within your project folder, run:

pipenv install <your app dependencies>

Our Tiny Home Dashboard app uses Pandas, Dash and Plotly. So, to create our pipenv environment, we run:

pipenv install pandas dash plotly gunicorn

Note the addition of gunicorn. It’s important to include gunicorn in your Pipfile so that, should you wish to deploy to the Google Cloud Platform App Engine service or to build a Docker container for your app, both will work as expected. The above pipenv command will create a Pipfile and a Pipfile.lock, containing your app’s dependencies.

Development dependencies are all of the dependencies that aren’t required for the app to run in production but enhance the developer experience or the code quality. We’ll use the following development dependencies:

black will format your code.
pylint will check your app’s code style and make recommendations on the length of a line of code, the presence of unused variables, whether imports are correctly placed at the top of the module, etc.
pipenv-to-requirements writes your app’s requirements to a requirements.txt file, required in this format by several Cloud providers to be correctly recognized.
pre-commit is what automates the checks you’d like to see on your code (here, black, pylint and pipenv-to-requirements) before your code is committed to your GitHub repo.

You can install them with pipenv install --dev --pre black pylint pipenv-to-requirements pre-commit. Alternatively, take the following snippet from our template, paste it in your Pipfile and run pipenv install --dev --pre.

[dev-packages]
black = "*"
pylint = "*"
pre-commit = "*"
pipenv-to-requirements = "*"

Next, you will add the .pre-commit-config.yaml file to your project by running:

touch .pre-commit-config.yaml

*Note: To be recognized correctly, this file must be named .pre-commit-config.yaml (with the leading dot)!

You can easily copy and paste our .pre-commit-config.yaml file’s text using the text found below:

repos:
  - repo: https://github.com/psf/black
    rev: 19.10b0
    hooks:
      - id: black
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v2.5.0
    hooks:
      - id: check-added-large-files
  - repo: local
    hooks:
      - id: pylint
        name: pylint
        entry: pylint app/
        language: system
        always_run: true
        pass_filenames: false
      - id: pipenv-to-requirements
        name: pipenv-to-requirements
        entry: pipenv_to_requirements
        language: system
        always_run: true
        pass_filenames: false

If your folder isn’t a git repo, you’ll need to run git init before creating the pre-commit hooks. Now, use the pre-commit dev dependency (installed during the creation of your virtual environment) to install and set up the git hooks by running:

pipenv run pre-commit install

The pipenv package is again used to run the pre-commit hooks. Are you starting to see why we recommend it? Now pre-commit will run automatically when running git commit! Here is how the file should look in your editor of choice:

Should you wish to read more on pre-commit hooks, check out our article below.
Environment variables are useful in making your app more secure. Rather than saving tokens within your script, you can reference them using an environment variable. Some examples of tokens used within your app may be connections to databases, third-party APIs, etc. In our case, we use Mapbox to render the interactive folium maps of the U.S. Rather than saving our token within our script, we reference our token’s secret key using an environment variable.

To create an environment variable, you will first create a .env file with the following command:

touch .env

Next, create a .gitignore file and add the .env file you’ve just created to your .gitignore file as shown:

Within the .env file, we can now create our environment variable. To do this, specify the name of the variable, here named MAPBOX_TOKEN, and specify its value. We’ve hidden our token’s secret key here for security. Now, when running pipenv, your environment variables will be sourced and loaded from the .env file automatically. And, when running:

pipenv run echo $MAPBOX_TOKEN

your token’s value will print to the console. If you did not originally set up your token in this way, you can easily set your env variable equal to your app’s original name for the token (here mapbox_access_token) to avoid changing the token name in every place it occurs in your app:

Now that you have all the files necessary to deploy your app, you’ll want to make sure your app is set up to be understood and easily deployed to the Cloud. Create a folder named app and save all the necessary files to the app folder so that your project is structured as shown:

It’s always a good idea to make sure everything is working as expected by running your app locally first! You don’t want to go through all the trouble of deploying your app to the Cloud only to find it doesn’t work there, and then debug on the premise that the issue is related to your Cloud deployment, only to find the issue is with the app itself!
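While running locally, it is worth confirming that the token described earlier is actually picked up from the environment. A minimal sketch in Python, assuming the variable is named MAPBOX_TOKEN as above (get_mapbox_token is just an illustrative helper, not part of the app):

```python
import os

def get_mapbox_token(default=None):
    # Fetch the Mapbox token from the environment instead of the source code.
    # When the app is started with `pipenv run`, pipenv loads the variable
    # from the .env file automatically.
    return os.getenv("MAPBOX_TOKEN", default)

# Fall back to an obvious placeholder so a missing variable is easy to spot.
token = get_mapbox_token(default="<MAPBOX_TOKEN not set>")
print(token)
```

Keeping the lookup in one helper also means that if you later rename the variable, there is only one place in the code to change.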
Once you’ve confirmed your app works on your machine, you are ready to commit your work to GitHub! If you haven’t already created a GitHub repo to back up your files, do so now. You can easily set up a GitHub repo by following these instructions. Stage and add your app files to the repo:

git add .pre-commit-config.yaml Pipfile Pipfile.lock app/

And commit them with:

git commit -m "Update package management"

Now that your app is structured correctly and all files have been staged, run the hooks against all of the app files with:

pipenv run pre-commit run --all-files

This will display and execute the hooks you’ve implemented. **Note: The pre-commit hooks require that files be staged to be discovered when running pipenv run pre-commit run --all-files

When using pre-commit hooks, you’ll need to add and stage your files twice, as black and pipenv-to-requirements will make changes to your code; namely, black will format your scripts and pipenv-to-requirements will create a requirements.txt file. This causes the version stored locally and the version you’ve staged to be committed to be out of sync. Add the requirements.txt file that was just created via the pipenv-to-requirements hook by running:

git add requirements*

Once you’ve done this, commit your files by running:

git commit -m "your-message"

Nice! Your app has been structured, cleaned up and saved to GitHub. If you’d like a closer look and a deeper understanding of the overall process so far, have a look at our article below. Note: We use some different tooling, but the concepts are the same.

Our next article will walk you through how to deploy your Dash app using Google’s App Engine service. Deploying on this platform is convenient as there is no need to Dockerize your app, as is necessary when deploying to Google’s Cloud Run and Compute Engine services. We hope you’ve enjoyed this article!
For more content on data science, machine learning, and development, check out Edward’s YouTube Channel and subscribe to my mailing list below to be the first to hear about new articles!
By Edward Krueger and Erin Oefelein

Put simply, Dash is a Python package that allows for the creation of “dashboards” in pure Python without using HTML, CSS or JavaScript. Since it’s programmable, it’s far more powerful than other dashboarding options.

In recent years there has been a major push towards making Python scripts into web applications. To some extent, even Jupyter Notebooks push Python in this direction. However, new tools, including Dash and Streamlit, make it easier to distribute Python by allowing users to write web applications as scripts.

You’ve completed your project and created an app to showcase your results. Now, you’d like to share your project insights by deploying your Dash app. This article will walk you through how to structure your app into a form that is easy to deploy!

If this tooling seems like a lot, it is. However, you don’t need to write it all out if you don’t want to. Not only have we provided the code in the following repo, but we’ve also made it a template. So, you can just hit the button “Use this template,” follow the instructions and start editing the code in the app/ directory. github.com

We will be using a sample app created to showcase a project. Let’s take a quick look at the app we are deploying. The project by Erin is the result of scraping tiny house listings, storing the results as a CSV and visualizing the results in a Dash app. Some of the plots require a Mapbox and, therefore, a Mapbox token.

Before we get started, here is an overview of what we’ll cover. We’ll cover:

Enough about Pipenv to get started
How to use Pipenv to install dependencies
How and why we install development dependencies
How and why we use Pre-commit hooks
How to set environmental variables
How to structure the repository
How to test all of our toolings

Managing your app’s virtual environment with the pipenv package is highly recommended as pipenv allows additional functionality such as:

the ability to automatically source an environment (.env) file.
the ability to specify app requirements in the Pipfile, circumventing the need to use them in production.
the specification of the Python version used in the virtual environment.

The functionalities provided by pipenv make it much easier to deploy your app. Should you wish to install pipenv at this point, you can do so by running:

pip install pipenv

towardsdatascience.com

Within your project folder, run:

pipenv install <your app dependencies>

Our Tiny Home Dashboard app uses Pandas, Dash and Plotly. So, to create our pipenv environment, we run:

pipenv install pandas dash plotly gunicorn

Note the addition of gunicorn. It’s important to include gunicorn in your Pipfile so that, should you wish to deploy to the Google Cloud Platform App Engine service or to build a Docker container for your app, both will work as expected.

The above pipenv command will create a Pipfile and a Pipfile.lock, containing your app’s dependencies.

Development dependencies are all of the dependencies that aren’t required for the app to run in production but enhance the developer experience or the code quality. We’ll use the following development dependencies:

black will format your code.
pylint will check your app’s code style and make recommendations on the length of a line of code, the presence of unused variables, whether imports are correctly placed at the top of the module, etc.
pipenv-to-requirements writes your app’s requirements to a requirements.txt file, required in this format by several Cloud providers to be correctly recognized.
pre-commit is what automates the checks you’d like to see on your code (here, black, pylint and pipenv-to-requirements) before your code is committed to your GitHub repo.

You can install them with pipenv install --dev --pre black pylint pipenv-to-requirements pre-commit. Alternatively, take the following snippet from our template, paste it in your Pipfile and run pipenv install --dev --pre.

[dev-packages]
black = "*"
pylint = "*"
pre-commit = "*"
pipenv-to-requirements = "*"

Next, you will add the pre-commit-config.yaml file to your project by running:

touch pre-commit-config.yaml

Note: To be recognized correctly, this file must be named pre-commit-config.yaml! You can easily copy and paste our pre-commit-config.yaml file’s text using the text found below:

repos:
  - repo: https://github.com/psf/black
    rev: 19.10b0
    hooks:
      - id: black
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v2.5.0
    hooks:
      - id: check-added-large-files
  - repo: local
    hooks:
      - id: pylint
        name: pylint
        entry: pylint app/
        language: system
        always_run: true
        pass_filenames: false
      - id: pipenv-to-requirements
        name: pipenv-to-requirements
        entry: pipenv_to_requirements
        language: system
        always_run: true
        pass_filenames: false

If your folder isn’t a git repo, you’ll need to run git init before creating the pre-commit hooks. Now, use the pre-commit dev dependency (installed during the creation of your virtual environment) to install and set up the git hooks by running:

pipenv run pre-commit install

The pipenv package is again used to run the pre-commit hooks. Are you starting to see why we recommend it? Now pre-commit will run automatically when running git commit! Here is how the file should look in your editor of choice. Should you wish to read more on pre-commit hooks, check out our article below. towardsdatascience.com
How I Redesigned over 100 ETL into ELT Data Pipelines | by Nicholas Leong | Towards Data Science
Everyone: What do Data Engineers do?
Me: We build pipelines.
Everyone: You mean like a plumber?

Something like that, but instead of water flowing through pipes, data flows through our pipelines. Data Scientists build models and Data Analysts communicate data to stakeholders. So, what do we need Data Engineers for? Little do they know, without Data Engineers, models won’t even exist. There won’t be any data to be communicated. Data Engineers build warehouses and pipelines to allow data to flow through the organization. We connect the dots. towardsdatascience.com Data Engineer is the fastest-growing job in 2019, growing by 50% YoY, which is higher than the job growth of Data Scientist, amounting to 32% YoY. Hence, I’m here to shed some light on some of the day-to-day tasks a Data Engineer handles. Data Pipelines are just one of them.

ETL — Extract, Transform, Load
ELT — Extract, Load, Transform

What do these mean and how are they different from each other? In the data pipeline world, there is a source and a destination. In the simplest form, the source is where Data Engineers get the data from and the destination is where they want the data to be loaded into. More often than not, there will need to be some processing of data somewhere in between. This can be due to numerous reasons which include but are not limited to —

The difference in types of Data Storage
Purpose of data
Data governance/quality

Data Engineers label the processing of data as transformations. This is where they perform their magic to transform all kinds of data into the form they intend it to be. In ETL Data Pipelines, Data Engineers perform transformations before loading data into the destination. If there are relational transformations between tables, these happen within the source itself. In my case, the source was a Postgres Database. Hence, we performed relational joins in the source to obtain the data required, then loaded it into the destination.
In ELT Data Pipelines, Data Engineers load data into the destination raw. They then perform any relational transformations within the destination itself. In this article, we will be talking about how I transformed over 100+ ETL Pipelines in my organization into ELT Pipelines, and we will also go through the reasons I did it. Initially, the pipelines were run using Linux cron jobs. Cron jobs are like your traditional task schedulers; they are initialized from the Linux terminal. They are the most basic way of scheduling programs, without any functionalities like —

Setting dependencies
Setting Dynamic Variables
Building Connections

This was the first thing to go, as it was causing way too many issues. We needed to scale. To do that, we had to set up a proper Workflow Management System. We chose Apache Airflow. I wrote all about it here. towardsdatascience.com Airflow was originally built by the guys at Airbnb and made open source. It is also used by popular companies like Twitter as their pipeline management system. You can read all about the benefits of Airflow above. After that was sorted out, we had to change the way we were extracting data. The team suggested redesigning our ETL pipelines into ELT pipelines. More on why we did it later. Here’s an example of the pipeline before it was redesigned. The source we were dealing with was a Postgres Database. Hence, to obtain data in the form intended, we had to perform joins in the source database.

Select a.user_id, b.country, a.revenue
from transactions a
left join users b on a.user_id = b.user_id

This is the query run in the source database. Of course, I’ve simplified the examples to their dumbest form; the actual queries were over 400 lines of SQL. The query results were saved in a CSV file and then uploaded to the destination, which is a Google Bigquery database in our case. Here’s how it looked in Apache Airflow — This is a simple example of an ETL pipeline.
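As a self-contained sketch of that extract step, the following example performs the join inside the source and writes the joined result to CSV, ready for upload to the destination. An in-memory SQLite database stands in for Postgres, and the sample rows are invented for illustration; only the shape of the flow matters.

```python
import csv
import io
import sqlite3

# An in-memory SQLite database stands in for the Postgres source.
source = sqlite3.connect(":memory:")
source.executescript("""
    create table users (user_id integer, country text);
    create table transactions (user_id integer, revenue real);
    insert into users values (1, 'US'), (2, 'SG');
    insert into transactions values (1, 9.99), (2, 4.50);
""")

# ETL: the relational join runs inside the source database...
joined = source.execute("""
    select a.user_id, b.country, a.revenue
    from transactions a left join users b on a.user_id = b.user_id
""").fetchall()

# ...and only the joined result is written out as CSV for the destination.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["user_id", "country", "revenue"])
writer.writerows(joined)
print(buf.getvalue())
```

Notice that the destination never sees the raw users and transactions tables, only the pre-joined result. That is exactly what changes in the ELT redesign.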
It was working as intended, but the team had realized the benefits of redesigning this into an ELT pipeline. More on that later. Here’s an example of the pipeline after it was redesigned. Observe how the tables are brought into the destination as they are. After all the tables have been successfully extracted, we perform relational transformations in the destination.

-- transactions
Select * from transactions

-- users
Select * from users

These are the queries run in the source database. Most of the extractions use ‘Select *’ statements without any joins. For appending jobs, we include where conditions to properly segregate the data. Similarly, the query results were saved in a CSV file and then uploaded into the Google Bigquery database. We then made a separate DAG for transformation jobs by setting dependencies within Apache Airflow. This is to ensure that all the extraction jobs have been completed before running transformation jobs. We set dependencies using Airflow Sensors. You can read about them here. towardsdatascience.com Now that you understand how I did it, we move onto the why — why exactly did we rewrite all our ETL into ELT pipelines? Running with our old pipeline had cost our team resources, specifically time, effort, and money. To understand the cost aspect of things, you have to understand that our source database (Postgres) was an ancient machine set up back in 2008. It was hosted on-prem. It was also running an old version of Postgres, which made things even more complicated. It wasn’t until recent years that the organization realized the need for a centralized data warehouse for Data Scientists and Analysts. This is when they started to build the old pipelines on cron jobs. As the number of jobs increased, they drained resources on the machine. The SQL joins written by the previous Data Analysts were also all over the place. There were over 20 joins in a single query in some pipelines, and we were approaching 100+ pipelines.
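The load-then-transform split described above can be sketched end to end with an in-memory SQLite database standing in for the destination (Google Bigquery in our case). The table contents are invented for illustration; the point is the ordering: raw ‘Select *’-style loads first, the join afterwards inside the destination.

```python
import sqlite3

# An in-memory SQLite database stands in for the destination warehouse.
destination = sqlite3.connect(":memory:")

# Load step: raw tables land in the destination unchanged, no joins yet.
# (Sample rows are invented for illustration.)
destination.executescript("""
    create table users (user_id integer, country text);
    create table transactions (user_id integer, revenue real);
    insert into users values (1, 'US'), (2, 'SG');
    insert into transactions values (1, 9.99), (2, 4.50);
""")

# Transform step: the relational work now runs inside the destination,
# which is the part Bigquery is built to do fast.
result = destination.execute("""
    select a.user_id, b.country, a.revenue
    from transactions a left join users b on a.user_id = b.user_id
""").fetchall()
print(sorted(result))
```

Because the raw tables already live in the destination, the expensive join no longer competes for resources on the aging source machine.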
Our tasks began running at midnight and usually finished around 1–2 p.m., a runtime of over 12 hours, which is absolutely unacceptable. For those of you who don’t know, SQL joins are among the most resource-intensive commands to run. A query’s runtime increases exponentially as the number of joins increases. Since we were moving onto Google Cloud, the team understood that Google Bigquery is lightning fast in computing SQL queries. You can read all about it here. cloud.google.com Hence, the whole point is to only run simple ‘Select *’ statements in the source and perform all the joins on Google Cloud. This more than doubled the efficiency and speed of our Data Pipelines. As businesses scale, so do their tools and technologies. By moving onto Google Cloud, we can easily scale our machines and pipelines without worrying much. Google Cloud utilizes Cloud Monitoring, a tool that collects metrics, events, and metadata of your Google Cloud technologies like Google Cloud Composer, Dataflow, Bigquery, and many more. You can monitor all sorts of data points, which include but are not limited to —

Cost of Virtual Machines
The cost of each query run in Google Bigquery
The size of each query run in Google Bigquery
Duration of Data Pipelines

This has made monitoring a breeze for us. Hence, by performing all transformations on Google Bigquery, we are able to accurately monitor our query size, duration, and cost as we scale. Even as we increase our machine sizes, data warehouses, data pipelines, etc., we completely understand the costs and benefits that come with them and have full control of turning them on and off if needed. This has saved and will continue to save us from a lot of headaches. If you’ve read until this point, you must really have a thing for data. You should! We’ve already made ETLs and ELTs. Who knows what kind of pipelines we will be building in the future? In this article, we talked about —

What are ELT/ETL Data Pipelines?
How I redesigned ETL to ELT Pipelines
Why I did it

As usual, I end with a quote.

Data is the new science. Big Data holds the answers — Pat Gelsinger

You can also support me by signing up for a Medium membership through my link. You will be able to read an unlimited number of stories from me and other incredible writers! I am working on more stories, writings, and guides in the data industry. You can absolutely expect more posts like this. In the meantime, feel free to check out my other articles to temporarily fill your hunger for data. Thanks for reading! If you want to get in touch with me, feel free to reach me at nickmydata@gmail.com or my LinkedIn Profile. You can also view the code for previous write-ups in my Github.
[ { "code": null, "e": 265, "s": 171, "text": "Everyone: What do Data Engineers do?Me: We build pipelines.Everyone: You mean like a plumber?" }, { "code": null, "e": 364, "s": 265, "text": "Something like that, but instead of water flowing through pipes, data flows through our pipelines." }, { "code": null, "e": 485, "s": 364, "text": "Data Scientists build models and Data Analysts communicate data to stakeholders. So, what do we need Data Engineers for?" }, { "code": null, "e": 714, "s": 485, "text": "Little do they know, without Data Engineers, models won’t even exist. There won’t be any data to be communicated. Data Engineers build warehouses and pipelines to allow data to flow through the organization. We connect the dots." }, { "code": null, "e": 737, "s": 714, "text": "towardsdatascience.com" }, { "code": null, "e": 884, "s": 737, "text": "Data Engineer is the fastest-growing job in 2019, growing by 50% YoY, which is higher than the job growth of Data Scientist, amounting to 32% YoY." }, { "code": null, "e": 1009, "s": 884, "text": "Hence, I’m here to shed some light on some of the day-to-day tasks a Data Engineer gets. Data Pipelines is just one of them." }, { "code": null, "e": 1070, "s": 1009, "text": "ETL — Extract, Transform, LoadELT — Extract, Load, Transform" }, { "code": null, "e": 1133, "s": 1070, "text": "What do these mean and how are they different from each other?" }, { "code": null, "e": 1340, "s": 1133, "text": "In the data pipeline world, there is a source and a destination. In the simplest form, the source is where Data Engineers get the data from and the destination is where they want the data to be loaded into." }, { "code": null, "e": 1504, "s": 1340, "text": "More often than not, there will need to be some processing of data somewhere in between. 
This can be due to numerous reasons which include but are not limited to —" }, { "code": null, "e": 1544, "s": 1504, "text": "The difference in types of Data Storage" }, { "code": null, "e": 1560, "s": 1544, "text": "Purpose of data" }, { "code": null, "e": 1584, "s": 1560, "text": "Data governance/quality" }, { "code": null, "e": 1754, "s": 1584, "text": "Data Engineers label the processing of data as transformations. This is where they perform their magic to transform all kinds of data into the form they intend it to be." }, { "code": null, "e": 2116, "s": 1754, "text": "In ETL Data Pipelines, Data Engineers perform transformations before loading data into the destination. If there are relational transformations between tables, these happen within the source itself. In my case, the source was a Postgres Database. Hence, we performed relational joins in the source to obtain the data required, then load it into the destination." }, { "code": null, "e": 2269, "s": 2116, "text": "In ELT Data Pipelines, Data Engineers load data into the destination raw.They then perform any relational transformations within the destination itself." }, { "code": null, "e": 2438, "s": 2269, "text": "In this article, we will be talking about how I transformed over 100+ ETL Pipelines in my organization into ELT Pipelines, we will also go through the reasons I did it." }, { "code": null, "e": 2676, "s": 2438, "text": "Initially, the pipelines were ran using Linux cron jobs. Cron jobs are like your traditional task schedulers, they initialize using the Linux terminal. They are the most basic way of scheduling programs without any functionalities like —" }, { "code": null, "e": 2697, "s": 2676, "text": "Setting dependencies" }, { "code": null, "e": 2723, "s": 2697, "text": "Setting Dynamic Variables" }, { "code": null, "e": 2744, "s": 2723, "text": "Building Connections" }, { "code": null, "e": 2900, "s": 2744, "text": "This was the first thing to go as it was causing way too many issues. 
We needed to scale. To do that, we had to set up a proper Workflow Management System." }, { "code": null, "e": 2952, "s": 2900, "text": "We chose Apache Airflow. I wrote all about it here." }, { "code": null, "e": 2975, "s": 2952, "text": "towardsdatascience.com" }, { "code": null, "e": 3186, "s": 2975, "text": "Airflow was originally built by the guys at Airbnb, made open source. It is also used by popular companies like Twitter as their Pipeline management system. You can read all about the benefits of Airflow above." }, { "code": null, "e": 3361, "s": 3186, "text": "After that’s sorted out, we had to change the way we are extracting data. The team suggested redesigning our ETL pipelines into ELT pipelines. More on why did we do it later." }, { "code": null, "e": 3570, "s": 3361, "text": "Here’s an example of the pipeline before it was redesigned. The source we were dealing with was a Postgres Database. Hence, to obtain data in the form intended, we had to perform joins in the source database." }, { "code": null, "e": 3668, "s": 3570, "text": "Select a.user_id,b.country,a.revenuefrom transactions a left join users b ona.user_id = b.user_id" }, { "code": null, "e": 3824, "s": 3668, "text": "This is the query ran in the source database. Of course, I’ve simplified the examples to their dumbest form, the actual queries were over 400 lines of SQL." }, { "code": null, "e": 4000, "s": 3824, "text": "The query results were saved in a CSV file and then uploaded to the destination, which is a Google Bigquery database in our case. Here’s how it looked like in Apache Airflow —" }, { "code": null, "e": 4174, "s": 4000, "text": "This is a simple example of an ETL pipeline. It was working as intended, but the team had realized the benefits of redesigning this into an ELT pipeline. More on that later." }, { "code": null, "e": 4413, "s": 4174, "text": "Here’s an example of the pipeline after it was redesigned. Observed how the tables are brought into the destination as it is. 
After all the tables have been successfully extracted, we perform relational transformations in the destination." }, { "code": null, "e": 4473, "s": 4413, "text": "--transactionsSelect *from transactions --Select*from users" }, { "code": null, "e": 4674, "s": 4473, "text": "This is the query ran in the source database. Most of the extractions are using ‘Select *’ statements without any joins. For appending jobs, we include where conditions to properly segregate the data." }, { "code": null, "e": 4983, "s": 4674, "text": "Similarly, the query results were saved in a CSV file and then uploaded into the Google Bigquery database. We then made a separate dag for transformation jobs by setting dependencies within Apache Airflow. This is to ensure that all the extraction jobs have been completed before running transformation jobs." }, { "code": null, "e": 5056, "s": 4983, "text": "We set dependencies using Airflow Sensors. You can read about them here." }, { "code": null, "e": 5079, "s": 5056, "text": "towardsdatascience.com" }, { "code": null, "e": 5200, "s": 5079, "text": "Now that you understand how I did it, we move onto the why — Why exactly did we re-wrote all our ETL into ELT pipelines?" }, { "code": null, "e": 5297, "s": 5200, "text": "Running with our old Pipeline had cost our team resources, specifically time, effort, and money." }, { "code": null, "e": 5548, "s": 5297, "text": "To understand the cost aspect of things, you have to understand that our source database (Postgres) was an ancient machine set up back in 2008. It was hosted on-prem. It was also running an old version of Postgres which makes things even complicated." }, { "code": null, "e": 5823, "s": 5548, "text": "It wasn’t until recent years when the organization realize the need for a centralized data warehouse for Data Scientists and Analysts. This is when they started to build the old pipelines on cron jobs. As the number of jobs increase, it had drained resources on the machine." 
}, { "code": null, "e": 6153, "s": 5823, "text": "The SQL joins written by the previous Data Analysts were also all over the place. There were over 20 joins in a single query in some pipelines, and we were approaching 100+ pipelines. Our tasks began running during midnight, it usually finished about 1–2 p.m., which amounted to about 12+ hours, which is absolutely unacceptable." }, { "code": null, "e": 6335, "s": 6153, "text": "For those of you who don’t know, SQL joins are one of the most resource-intensive commands to run. It’ll increase the query’s runtime exponentially as the number of joins increases." }, { "code": null, "e": 6492, "s": 6335, "text": "Since we were moving onto Google Cloud, the team understood that Google Bigquery is lightning fast in computing SQL queries. You can read all about it here." }, { "code": null, "e": 6509, "s": 6492, "text": "cloud.google.com" }, { "code": null, "e": 6633, "s": 6509, "text": "Hence, the whole point is to only run simple ‘Select *’ statements in the source and perform all the joins on Google Cloud." }, { "code": null, "e": 6708, "s": 6633, "text": "This had more than doubled the efficiency and speed of our Data Pipelines." }, { "code": null, "e": 6863, "s": 6708, "text": "As businesses scale, so do their tools and technologies.By moving onto Google Cloud, we can easily scale our machines and pipelines without worrying much." }, { "code": null, "e": 7140, "s": 6863, "text": "Google Cloud utilizes Cloud Monitoring which is a tool that collects metrics, events, and metadata of your Google Cloud Technologies like Google Cloud Composer, Dataflow, Bigquery, and many more. 
You can monitor all sorts of data points which includes but are not limited to —" }, { "code": null, "e": 7165, "s": 7140, "text": "Cost of Virtual Machines" }, { "code": null, "e": 7211, "s": 7165, "text": "The cost of each query ran in Google Bigquery" }, { "code": null, "e": 7257, "s": 7211, "text": "The size of each query ran in Google Bigquery" }, { "code": null, "e": 7284, "s": 7257, "text": "Duration of Data Pipelines" }, { "code": null, "e": 7469, "s": 7284, "text": "This had made monitoring a breeze for us. Hence, by performing all transformations on Google Bigquery, we are able to accurately monitor our query size, duration, and cost as we scale." }, { "code": null, "e": 7670, "s": 7469, "text": "Even as we increase our machine sizes, data warehouses, data pipelines, etc, we completely understand the costs and benefits that come with it and have full control of turning it on and off if needed." }, { "code": null, "e": 7721, "s": 7670, "text": "This had and will save us from a lot of headaches." }, { "code": null, "e": 7805, "s": 7721, "text": "If you’ve read until this point, you must really have a thing for data. You should!" }, { "code": null, "e": 7907, "s": 7805, "text": "We’ve already made ETLs and ELTs. Who knows what kind of pipelines we will be building in the future?" }, { "code": null, "e": 7942, "s": 7907, "text": "In this article, we talked about —" }, { "code": null, "e": 7975, "s": 7942, "text": "What are ELT/ETL Data Pipelines?" }, { "code": null, "e": 8013, "s": 7975, "text": "How I redesigned ETL to ELT Pipelines" }, { "code": null, "e": 8026, "s": 8013, "text": "Why I did it" }, { "code": null, "e": 8056, "s": 8026, "text": "As usual, I end with a quote." }, { "code": null, "e": 8124, "s": 8056, "text": "Data is the new science. Big Data holds the answers — Pet Gelsinger" }, { "code": null, "e": 8297, "s": 8124, "text": "You can also support me by signing up for a medium membership through my link. 
You will be able to read an unlimited number of stories from me and other incredible writers!" }, { "code": null, "e": 8518, "s": 8297, "text": "I am working on more stories, writings, and guides in the data industry. You can absolutely expect more posts like this. In the meantime, feel free to check out my other articles to temporarily fill your hunger for data." } ]
Getting Data into TensorFlow Estimator Models | by Robert Thas John | Towards Data Science
Machine Learning is all about the quantity and quality of your data. This data is usually made available in a variety of sources:

Text files (CSV, TSV, Excel)
Databases
Streaming Sources

Text files are made available by some person or persons who extract the data from another source, but wish to save you the stress of extracting the data yourself. The data could be in one or more files, with or without headers.

TensorFlow estimators work with input functions. The signature of an input function returns a tuple of features and labels. Features are a dictionary of feature names and numeric value arrays. Labels are an array of values. Some management needs to happen, such as shuffling the data, and returning it in batches. The approach you take determines how much effort you need to put in.

Let’s start with the simple option. If you have your data in one file, which you are able to read completely into memory (so-called toy examples), and the file is in text-delimited format (CSV, TSV, etc), the amount of effort required is minimal. You can read your files in with numpy or pandas, as is commonly the case.

As a reminder, when you work with the tf.estimator API, you need to pass in an input function during training. This is the function signature for training:

train(
    input_fn,
    hooks=None,
    steps=None,
    max_steps=None,
    saving_listeners=None
)

Our focus is on input_fn! We will work with the popular Boston Housing data which is hosted here.

If you have your data in numpy format, you can use tf.estimator.inputs.numpy_input_fn to get your data in. 
First you need to define a dictionary for your features:

# extract numpy data from a DataFrame
crim = train_df['crim'].values
zn = train_df['zn'].values
indus = train_df['indus'].values
chas = train_df['chas'].values
nox = train_df['nox'].values
rm = train_df['rm'].values
age = train_df['age'].values
dis = train_df['dis'].values
rad = train_df['rad'].values
tax = train_df['tax'].values
ptratio = train_df['ptratio'].values
black = train_df['black'].values
lstat = train_df['lstat'].values
medv = train_df['medv'].values

# create a dictionary
x_dict = {
    'crim': crim,
    'zn': zn,
    'indus': indus,
    'chas': chas,
    'nox': nox,
    'rm': rm,
    'age': age,
    'dis': dis,
    'rad': rad,
    'tax': tax,
    'ptratio': ptratio,
    'black': black,
    'lstat': lstat
}

With our dictionary in place, we may proceed to define our input function.

def np_training_input_fn(x, y):
    return tf.estimator.inputs.numpy_input_fn(
        x=x,
        y=y,
        batch_size=32,
        num_epochs=5,  # this way you can leave out steps from training
        shuffle=True,
        queue_capacity=5000
    )

In our function, we pass in x, which is our dictionary, and y, which is our label. We can also pass in our batch size, number of epochs, and whether or not to shuffle the data. Please note that you always want to shuffle your data. The batch size is a hyperparameter that you should find empirically. The number of epochs is how many times you would like to go over your data. For training, set any number. For test, set this to 1.

Before creating your estimator, you will need feature columns.

feature_cols = [tf.feature_column.numeric_column(k) for k in x_dict.keys()]

lin_model = tf.estimator.LinearRegressor(feature_columns=feature_cols)
lin_model.train(np_training_input_fn(x_dict, medv), steps=10)

You can leave out steps, so the training uses the epochs specified in your training input function, or specify the number of steps to use for training. That’s all for numpy input. 
For a DataFrame, you would proceed to define the input function as follows:

def pd_input_fn(df, y_label):
    return tf.estimator.inputs.pandas_input_fn(
        x=df,
        y=df[y_label],
        batch_size=32,
        num_epochs=5,
        shuffle=True,
        queue_capacity=1000,
        num_threads=1
    )

Note that in the above method, we proceed to pass in our DataFrame, complete with the label in it. If the label is not in what you pass to x, you will get an error. You pass a series to y. The other parameters are the same as when you deal with numpy.

The model is treated the same going forward. You create the model and specify the feature columns. You then proceed to train the model.

lin_model = tf.estimator.LinearRegressor(feature_columns=feature_cols)
lin_model.train(pd_input_fn(train_df, 'medv'), steps=10)

It’s all well and good when you can read your data into memory. But what happens when you can’t? What happens when your training dataset is 100GB? The good news is such a dataset will normally be produced by a distributed system, so your files will be sharded. That means the data will be stored in different files with names like data-0001-of-1000.

If you have never dealt with Big Data, your first thought might be to use glob. Do not do that unless you know that you are dealing with a toy example. You will exhaust your memory and training will stop.

These types of files normally do not have headers, and that is a good thing. You will start by defining a list of column names which should be in the order in which your columns exist in the files. Secondly, define a label column. Finally, define a list of defaults so you can handle missing values when you encounter them during reading. 
CSV_COLUMNS = ['medv', 'crim', 'zn', 'lstat', 'tax', 'rad', 'chas', 'nox', 'indus', 'ptratio', 'age', 'black', 'rm', 'dis']
LABEL_COLUMN = 'medv'
DEFAULTS = [[0.0], [0.0], [0.0], [0.0], [0.0], [0.0], [0.0], [0.0], [0.0], [0.0], [0.0], [0.0], [0.0], [0.0]]

Next, we define a function to read in text data and return our format in the same way that our earlier functions were handling them. One advantage of the way the function is created is that it can handle wildcards, such as data-*.

def read_dataset(filename, mode, batch_size = 512):
  def _input_fn():
    def decode_csv(value_column):
      columns = tf.decode_csv(value_column, record_defaults = DEFAULTS)
      features = dict(zip(CSV_COLUMNS, columns))
      label = features.pop(LABEL_COLUMN)
      return features, label

    # Create list of files that match pattern
    file_list = tf.gfile.Glob(filename)

    # Create dataset from file list
    dataset = tf.data.TextLineDataset(file_list).map(decode_csv)

    if mode == tf.estimator.ModeKeys.TRAIN:
      num_epochs = None  # indefinitely
      dataset = dataset.shuffle(buffer_size = 10 * batch_size)
    else:
      num_epochs = 1  # end-of-input after this

    dataset = dataset.repeat(num_epochs).batch(batch_size)
    return dataset.make_one_shot_iterator().get_next()
  return _input_fn

The function takes in three parameters: a pattern so we can match multiple files, a mode (training or evaluation), and a batch size. Notice that read_dataset returns a function. We have called that function _input_fn. Inside this function, we have a function called decode_csv that will create a dictionary, extract a series, and return both in the tuple format we mentioned at the beginning of this article.

Secondly, our function creates a list of file names using glob. Yes, glob is still used, but we don’t pass the result to a pandas.read_csv(). Instead, meet tf.data.TextLineDataset(). It takes three parameters: a list of file names, the compression format (none, ZLIB, or GZIP), and a buffer size. 
The primary difference between read_csv and TextLineDataset is that the former reads the contents into memory (we can read in batches), while the latter returns an Iterator.

So, our function creates a dataset using TextLineDataset by calling the map function, passing in decode_csv. The next thing it does is check whether or not we are in training mode. If we are not, our number of epochs is set to 1. If we are, it is set to however many epochs we would like. Our training dataset is also shuffled. Our dataset is then set to repeat the number of epochs we would like, and configured for our batch size.

Finally, we return a one-shot iterator, and call get_next(). All of this work is handled behind the scenes by the functions we saw earlier. We can create our training, evaluation, and test input functions using the following approach:

def get_train():
  return read_dataset('./train-.*', mode = tf.estimator.ModeKeys.TRAIN)

def get_valid():
  return read_dataset('./valid.csv', mode = tf.estimator.ModeKeys.EVAL)

def get_test():
  return read_dataset('./test.csv', mode = tf.estimator.ModeKeys.EVAL)

The rest of the process is exactly the same as we have seen. We can create our estimator and train it as usual. For real projects, you will start by reading in one of your training files using pandas and tf.estimator.inputs. However, to use all of your files in training, you will want to use tf.data.TextLineDataset.
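To build intuition for what read_dataset delegates to tf.decode_csv and tf.data, here is a plain-Python sketch of the same decode-and-batch logic with no TensorFlow dependency. The column names and label come from the example above; the two sample rows and the naive float parsing are made up for illustration (the real tf.decode_csv handles types and defaults per column).

```python
# Plain-Python sketch of the decode/batch steps that read_dataset
# hands off to tf.decode_csv and tf.data. Illustrative only.
CSV_COLUMNS = ['medv', 'crim', 'zn', 'lstat', 'tax', 'rad', 'chas',
               'nox', 'indus', 'ptratio', 'age', 'black', 'rm', 'dis']
LABEL_COLUMN = 'medv'
DEFAULTS = [0.0] * len(CSV_COLUMNS)

def decode_csv(line):
    # Split the line, falling back to the default when a field is empty
    fields = line.split(',')
    values = [float(f) if f else d for f, d in zip(fields, DEFAULTS)]
    features = dict(zip(CSV_COLUMNS, values))
    label = features.pop(LABEL_COLUMN)
    return features, label

def batch(examples, batch_size):
    # Yield successive batches, mirroring dataset.batch(batch_size)
    for i in range(0, len(examples), batch_size):
        yield examples[i:i + batch_size]

# Two made-up rows in the column order above; the second has a missing 'nox'
lines = ['24.0,0.006,18.0,4.98,296,1,0,0.54,2.31,15.3,65.2,396.9,6.58,4.09',
         '21.6,0.027,0.0,9.14,242,2,0,,7.07,17.8,78.9,396.9,6.42,4.97']
examples = [decode_csv(l) for l in lines]
batches = list(batch(examples, batch_size=2))

print(examples[0][1])         # label of the first example: 24.0
print(examples[1][0]['nox'])  # missing field replaced by its default: 0.0
```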
[ { "code": null, "e": 181, "s": 47, "text": "Machine Learning is all about the quantity and quality of your data. The said data is usually made available in a variety of sources:" }, { "code": null, "e": 210, "s": 181, "text": "Text files (CSV, TSV, Excel)" }, { "code": null, "e": 220, "s": 210, "text": "Databases" }, { "code": null, "e": 238, "s": 220, "text": "Streaming Sources" }, { "code": null, "e": 466, "s": 238, "text": "Text files are made available by some person or persons who extract the data from another source, but wish to save you the stress of extracting the data yourself. The data could be in one or more files, with or without headers." }, { "code": null, "e": 849, "s": 466, "text": "TensorFlow estimators work with input functions. The signature of an input function returns a tuple of features and labels. Features are a dictionary of feature names and numeric value arrays. Labels are an array of values. Some management needs to happen, such as shuffling the data, and returning it in batches. The approach you take determines how much effort you need to put in." }, { "code": null, "e": 1170, "s": 849, "text": "Let’s start with the simple option. If you have your data in one file, which you are able to read completely into memory (so-called toy examples), and the file is in text-delimited format (CSV, TSV, etc), the amount of effort required is minimal. You can read your files in with numpy or pandas, as is commonly the case." }, { "code": null, "e": 1322, "s": 1170, "text": "As a reminder, when you work with tf.estimator API, you need to pass in an input function during training. This is the function signature for training:" }, { "code": null, "e": 1417, "s": 1322, "text": "train( input_fn, hooks=None, steps=None, max_steps=None, saving_listeners=None)" }, { "code": null, "e": 1515, "s": 1417, "text": "Our focus is on input_fn! We will work with the popular Boston Housing data which is hosted here." 
}, { "code": null, "e": 1685, "s": 1515, "text": "If you have your data in numpy format, you can use tf.estimator.inputs.numpy_input_function to get your data in. First you need to define a dictionary for your features:" }, { "code": null, "e": 2383, "s": 1685, "text": "# extract numpy data from a DataFramecrim = train_df['crim'].valueszn = train_df['zn'].valuesindus = train_df['indus'].valueschas = train_df['chas'].valuesnox = train_df['nox'].valuesrm = train_df['rm'].valuesage = train_df['age'].valuesdis = train_df['dis'].valuesrad = train_df['rad'].valuestax = train_df['tax'].valuesptratio = train_df['ptratio'].valuesblack = train_df['black'].valueslstat = train_df['lstat'].valuesmedv = train_df['medv'].values# create a dictionaryx_dict = { 'crim': crim, 'zn': zn, 'indus': indus, 'chas': chas, 'nox': nox, 'rm': rm, 'age': age, 'dis': dis, 'rad': rad, 'tax': tax, 'ptratio': ptratio, 'black': black, 'lstat': lstat}" }, { "code": null, "e": 2458, "s": 2383, "text": "With our dictionary in place, we may proceed to define our input function." }, { "code": null, "e": 2695, "s": 2458, "text": "def np_training_input_fn(x, y): return tf.estimator.inputs.numpy_input_fn( x= x, y= y, batch_size= 32, num_epochs= 5, # this way you can leave out steps from training shuffle= True, queue_capacity= 5000 )" }, { "code": null, "e": 3128, "s": 2695, "text": "In our function, we pass in x, which is our dictionary, and y, which is our label. We can also pass in our batch size, number of epochs, and whether or not to shuffle the data. Please note that you always want to shuffle your data. The batch size is a hyper parameter that you should file empirically. The number of epochs is how many times you would like to go over your data. For training, set any number. For test, set this to 1." }, { "code": null, "e": 3191, "s": 3128, "text": "Before creating your estimator, you will need feature columns." 
}, { "code": null, "e": 3398, "s": 3191, "text": "feature_cols = [tf.feature_column.numeric_column(k) for k in x_dict.keys()]lin_model = tf.estimator.LinearRegressor(feature_columns=feature_cols)lin_model.train(np_training_input_fn(x_dict, medv), steps=10)" }, { "code": null, "e": 3578, "s": 3398, "text": "You can leave out steps, so the training uses the epochs specified in your training input function, or specify the number of steps to use for training. That’s all for numpy input." }, { "code": null, "e": 3654, "s": 3578, "text": "For a DataFrame, you would proceed to define the input function as follows:" }, { "code": null, "e": 3876, "s": 3654, "text": "def pd_input_fn(df, y_label): return tf.estimator.inputs.pandas_input_fn( x=df, y=df[y_label], batch_size = 32, num_epochs = 5, shuffle = True, queue_capacity = 1000, num_threads = 1 )" }, { "code": null, "e": 4128, "s": 3876, "text": "Note that in the above method, we proceed to pass in our DataFrame, complete with the label in it. If the label is not in what you pass to x, you will get an error. You pass a series to y. The other parameters are the same as when you deal with numpy." }, { "code": null, "e": 4263, "s": 4128, "text": "The model is treated the same going forward. You create the model and specify the feature columns. You then proceed to train the mode." }, { "code": null, "e": 4390, "s": 4263, "text": "lin_model = tf.estimator.LinearRegressor(feature_columns=feature_cols)lin_model.train(pd_input_fn(train_df, 'medv'), steps=10)" }, { "code": null, "e": 4538, "s": 4390, "text": "It’s all well and good when you can read your data into memory. But, what happens when you can’t. What happens when your training dataset is 100GB?" }, { "code": null, "e": 4741, "s": 4538, "text": "The good news is such a dataset will normally be produced by a distributed system, so your files will be sharded. That means the data will be stored in different files with names like data-0001-of-1000." 
}, { "code": null, "e": 4947, "s": 4741, "text": "If you have never dealt with Big Data, your first thought might be to use glob . Do not do that unless you know that you are dealing with a toy example. You will exhaust your memory and training will stop." }, { "code": null, "e": 5286, "s": 4947, "text": "These types of files normally do not have headers, and that is a good thing. You will start by defining a list of column names which should be in the order in which your columns exist in the files. Secondly, define a label column. Finally, define a list of defaults so you can handle missing values when you encounter them during reading." }, { "code": null, "e": 5540, "s": 5286, "text": "CSV_COLUMNS = ['medv', 'crim', 'zn', 'lstat', 'tax', 'rad', 'chas', 'nox', 'indus', 'ptratio', 'age', 'black', 'rm', 'dis']LABEL_COLUMN = 'medv'DEFAULTS = [[0.0], [0.0], [0.0], [0.0], [0.0], [0.0], [0.0], [0.0], [0.0], [0.0], [0.0], [0.0], [0.0], [0.0]]" }, { "code": null, "e": 5772, "s": 5540, "text": "Next, we define a function to read in text data and return our format in the same way that our earlier functions were handling them. One advantage of the way the function is created is that it can handle wildcards, such as data-* ." 
}, { "code": null, "e": 6579, "s": 5772, "text": "def read_dataset(filename, mode, batch_size = 512): def _input_fn(): def decode_csv(value_column): columns = tf.decode_csv(value_column, record_defaults = DEFAULTS) features = dict(zip(CSV_COLUMNS, columns)) label = features.pop(LABEL_COLUMN) return features, label # Create list of files that match pattern file_list = tf.gfile.Glob(filename) # Create dataset from file list dataset = tf.data.TextLineDataset(file_list).map(decode_csv) if mode == tf.estimator.ModeKeys.TRAIN: num_epochs = None # indefinitely dataset = dataset.shuffle(buffer_size = 10 * batch_size) else: num_epochs = 1 # end-of-input after this dataset = dataset.repeat(num_epochs).batch(batch_size) return dataset.make_one_shot_iterator().get_next() return _input_fn" }, { "code": null, "e": 6988, "s": 6579, "text": "The function takes in three parameters: a pattern so we can match multiple files, a mode (training or evaluation), and a batch size. Notice that read_dataset returns a function. We have called that function _input_fn. Inside this function, we have a function called decode_csv that will create a dictionary, extract a series, and return both in the tuple format we mentioned at the beginning of this article." }, { "code": null, "e": 7459, "s": 6988, "text": "Secondly, our function creates a list of file names using glob. Yes, glob is still used, but we don’t pass the result to a pandas.read_csv(). Instead, meet tf.data.TextLineDataset(). It takes three parameters: a list of file names, the compression format (none, ZLIB, or GZIP), and a buffer size. The primary difference between read_csv and TextLineDataset is that the former reads the contents into memory (we can read in batches), while the latter returns an Iterator." }, { "code": null, "e": 7892, "s": 7459, "text": "So, our function creates a dataset using TextLineDataset by calling the map function, passing in decode_csv. 
The next thing it does is check whether or not we are in training mode. If we are not, our number of epochs is set to 1. If we are, it is set to however many epochs we would like. Our training dataset is also shuffled. Our dataset is then set to repeat the number of epochs we would like, and configured for our batch size." }, { "code": null, "e": 8127, "s": 7892, "text": "Finally, we return a one-shot iterator, and call get_next(). All of this work is handled behind the scenes by the functions we saw earlier. We can create our training, evaluation, and test input functions using the following approach:" }, { "code": null, "e": 8387, "s": 8127, "text": "def get_train(): return read_dataset('./train-.*', mode = tf.estimator.ModeKeys.TRAIN)def get_valid(): return read_dataset('./valid.csv', mode = tf.estimator.ModeKeys.EVAL)def get_test(): return read_dataset('./test.csv', mode = tf.estimator.ModeKeys.EVAL)" }, { "code": null, "e": 8499, "s": 8387, "text": "The rest of the process is exactly the same as we have seen. We can create our estimator and train it as usual." } ]
Bayesian AB Testing — Part IV — Choosing a Prior | by Kaushik Sureshkumar | Towards Data Science
This post is the 4th part of a series of blog posts on applying Bayesian AB Testing methods to real life product scenarios. It uses some of the concepts discussed in the 1st and 2nd parts of the series.

1. Modelling and analysis of conversion based test metrics (rate metrics)
2. Modelling and analysis of revenue based test metrics (continuous metrics)
3. Calculating test duration
4. Choosing an appropriate prior
5. Running tests with multiple variants

In Bayesian Inference a prior distribution is a probability distribution used to indicate our beliefs about an unknown variable prior to drawing samples from the underlying population. We then use this data to update our beliefs about the variable using Bayes’ Rule, resulting in a posterior distribution for the variable.

Within the context of an AB test, the prior distribution is a set of values we believe the test metric to take with a probability assigned to each value. We then draw samples in the form of a randomised experiment which we use to calculate our posterior distributions. These posterior distributions are, in turn, used to calculate the results of the AB test.

So why does the choice of prior matter? Bayes’ rule tells us the following

P(θ | data) = P(data | θ) · P(θ) / P(data)

which, in words, can be written as

posterior = (likelihood × prior) / evidence

where the denominator is a normalising constant. So the rule can be simplified to

posterior ∝ likelihood × prior

Since the results of the test are calculated on the posterior, and the prior is a factor of the posterior, the choice of prior has an impact on the test, but we need to be careful just how much of an impact it has. If we choose too strong a prior, the prior will be the dominant factor and the likelihood of drawing the samples wouldn’t make much of an effect, rendering the experiment useless. 
It could result in both posteriors, control and variant, converging quickly and in the test being inconclusive. However, if we choose a very weak prior the posterior will be predominantly dependent on the likelihood, so we’d need more samples to reach a conclusive result, resulting in a longer test and slower iteration on our product.

In order to make our posterior distributions easier to calculate, we can also use conjugate priors. A conjugate prior is a prior distribution we can use with the likelihood function such that the posterior distribution we calculate is of a similar form to the prior distribution. Using conjugate priors simplifies our calculations while still providing a good statistical model for the test metric. We’ve seen how the simplified calculations and choice of conjugate priors have worked in our favour in the first and second posts of this series.

Before we dive into how to go about choosing a prior, let’s take a quick look into the three main types of priors. [1]

Subjective
- Based on the experimenter’s knowledge of the field
- In our case this would be based on the product and data team’s prior experience with this test metric

Objective and Informative
- Based on historical data of the value
- In our case this would be based on any historical data we have about our test metric
- It could also be a posterior distribution from a previous experiment

Non-informative
- Priors which don’t convey any information about the value
- In our case this would be a uniform distribution over the test metric space

Let us assume that we’re new to the company and product so we don’t have sufficient information to use a subjective prior. We also don’t want to use a non-informative prior because we believe it will result in a longer test and thus hinder the progression of our product. Let’s look at a couple of techniques we can use to choose an objective and informative prior. 
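Before looking at those techniques, the conjugate-prior update described above can be made concrete with the Beta-Bernoulli pair used throughout this series: a Beta(a, b) prior on a conversion rate, updated with k conversions out of n samples, yields a Beta(a + k, b + n − k) posterior. The prior parameters below echo ones used later in this post, while the sample counts are made up for illustration.

```python
# Conjugate Beta-Bernoulli update: Beta(a, b) prior + Bernoulli likelihood
# -> Beta(a + k, b + n - k) posterior, where k of n users converted.
def beta_posterior(a, b, k, n):
    return a + k, b + (n - k)

def beta_mean(a, b):
    return a / (a + b)

# Made-up experiment data: 330 conversions out of 1000 users
k, n = 330, 1000

weak = beta_posterior(4, 8, k, n)      # weak prior
strong = beta_posterior(33, 69, k, n)  # stronger prior

# With this much data, both posteriors end up dominated by the
# likelihood, with means near the observed rate k/n = 0.33
print(beta_mean(*weak))
print(beta_mean(*strong))
```

This also shows the trade-off discussed above: the stronger prior pulls the posterior mean slightly further toward the prior, but with enough samples the data wins either way.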
The simplest method to choose a prior distribution is by plotting and inspecting historical data of the relevant test metric. In order to understand this further, let us consider the experiment from the 1st post of this series. Let’s assume we’ve recently changed the messaging on an upsell screen and want to AB test it before releasing to our wider user base. We hypothesise that the changes we’ve made will result in a significantly better conversion rate.

Before we set up the test, we want to use historical data to choose a prior. Let’s have a look at how we can plot the data to help us choose. We’re going to split the data into 100 partitions, work out the conversion rate for each one and plot the conversion rates as a histogram.

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

prior_data = pd.read_csv('prior_data_conversions.csv')

x = np.linspace(0,1,1000)
partitions = np.array_split(prior_data, 100)
rates = []
for partition in partitions:
    rates.append(partition['converted'].mean())

_, ax = plt.subplots()
sns.histplot(rates, kde=True, label='CR')
ax.legend()
ax.set_xlabel('Conversion Rate')
ax.set_ylabel('Density')
ax.set_title('Histogram of Prior Conversion Rates')

We can now choose a prior distribution that is similar to the distribution above, but a bit weaker. We don’t want to choose too strong a prior since we want the likelihood to be the dominant factor for calculating the posterior. We do, however, want to choose a strong enough prior such that the test duration will be shorter.

We’ll be using the beta distribution to model our conversion rate since it’s a flexible distribution over [0,1] and is also a good conjugate prior. So let’s go ahead and plot some potential priors of varying strength for our exercise. 
import numpy as np
from scipy.stats import beta
import matplotlib.pyplot as plt

_, ax = plt.subplots(1, 1)
x = np.linspace(0,1,1000)

beta_weak = beta(4, 8)
beta_mid = beta(16, 36)
beta_strong = beta(33, 69)

ax.plot(x, beta_weak.pdf(x), label=f'weak Beta({4}, {8})')
ax.plot(x, beta_mid.pdf(x), label=f'mid Beta({16}, {36})')
ax.plot(x, beta_strong.pdf(x), label=f'strong Beta({33}, {69})')
ax.set_xlabel('Conversion Probability')
ax.set_ylabel('Density')
ax.set_title('Choice of Priors')
ax.legend()

We see that even the strongest prior that we’ve plotted is weaker than the historical distribution of the conversion rate. So we can go ahead and choose Beta(33,69) as our prior distribution. We can now run our experiment, calculate posteriors and results of the test. To find out more about how to do this, in particular for the outlined experiment, check out this post.

A more complicated but very interesting method for choosing a prior distribution is using Markov Chain Monte Carlo (MCMC) methods. This method is particularly useful for models where our unknown variable is determined by other random variables, each of which has its own distribution. So it’s a good technique to use for AB tests where the test metric is revenue based (like average revenue per user). Before we jump into how to use this method, let me introduce how it works — MCMC deserves a post of its own, so this introduction will be very brief. [2]

MCMC methods allow us to sample from an unknown distribution by running simulations (hence the Monte Carlo part of the name) in which we create a Markov Chain which has our unknown distribution as its stationary distribution. But what do these terms actually mean? Well, a Markov Chain is a process which jumps between a set of states, and each jump follows the Markov Property. Put simply, this means that the probability of jumping to a particular state is only dependent on the current state of the process and not the previous states which the process has jumped from. 
Due to this memoryless property, and the notion of jumping between different states, this process is often referred to as a random walk. Let us assume we perform this random walk for an infinite number of steps; then the stationary distribution is the proportion of steps in which we visited each state.

Now that we have a bit of background on MCMC methods, let’s get stuck into using them to choose a prior for our AB test. Let us consider the experiment from the 2nd post of this series. We’ve recently made UX changes to a store feature in our app. We believe these changes make it easier for our users to make bigger in-app purchases and we want to AB test this before releasing to our wider user base. We hypothesise that the changes we’ve made will result in a significantly higher Average Revenue per User.

We model the revenue generated by each user as a random variable R=X∗Y, where:
- X is a Bernoulli random variable which refers to whether the user made a purchase, with conversion probability λ — X∼Ber(λ)
- Y is an Exponential random variable which refers to the size of the purchase if it is made, with rate parameter θ — Y∼Exp(θ)

We can use conjugate priors for λ and θ to make our calculations easier. We now need to choose priors for our parameters, which can be non-informative. 
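As an aside, the generative model R = X∗Y described above is easy to simulate directly, which can help sanity-check intuitions about the data it produces before fitting anything. This pure-Python sketch uses made-up values for the conversion probability and the exponential rate; it is not part of the fitting procedure.

```python
import random

# Simulate R = X * Y: X ~ Ber(lam) is whether the user purchases,
# Y ~ Exp(rate) is the purchase size if they do.
# lam and rate below are made-up illustrative values.
def simulate_revenue(n_users, lam, rate, seed=0):
    rng = random.Random(seed)
    revenues = []
    for _ in range(n_users):
        converted = rng.random() < lam
        revenues.append(rng.expovariate(rate) if converted else 0.0)
    return revenues

revenues = simulate_revenue(n_users=10000, lam=0.3, rate=0.25)

conversion_rate = sum(r > 0 for r in revenues) / len(revenues)
arpu = sum(revenues) / len(revenues)  # average revenue per user

# Expect conversion near lam = 0.3 and ARPU near lam / rate = 1.2
print(conversion_rate, arpu)
```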
import arviz as az
import pymc3 as pm

prior_revenue = pd.read_csv('prior_data_revenue.csv')

rev_observed = prior_revenue[prior_revenue['converted'] == 1]['revenue'].values
conv_observed = prior_revenue['converted'].values

model = pm.Model()

with model:
    alpha = pm.Uniform("alpha", lower=0, upper=100)
    beta = pm.Uniform("beta", lower=0, upper=100)
    k = pm.Uniform("k", lower=0, upper=5)
    theta = pm.Uniform("theta", lower=0, upper=5)
    cr = pm.Beta('cr', alpha=alpha, beta=beta)
    rr = pm.Gamma('rr', alpha=k, beta=(1/theta))
    conversion = pm.Bernoulli('conversion', p=cr, observed=conv_observed)
    revenue_per_sale = pm.Exponential('revenue_per_sale', lam=rr, observed=rev_observed)
    trace = pm.sample(10000, return_inferencedata=False)

Once we’ve fit the model, we can now plot the distribution of each parameter, and print out some summary stats.

with model:
    az.plot_trace(trace, compact=False)

with model:
    display(az.summary(trace, kind='stats', round_to=2))

map_estimate = pm.find_MAP(model=model)
print(map_estimate)

The two main stats we’re going to use are the mean of each parameter and the MAP estimate of each parameter. Put simply, the latter is an estimate of the points of each parameter’s distribution which result in the modes of the conversion and revenue rate distributions. Since our parameter priors are uniform, these estimates are also the MLEs of the prior distributions of λ and θ. [3] Let’s go ahead and plot priors using each of these stats.

from scipy.stats import beta

cr_prior_mean = beta(33, 67)
cr_prior_map = beta(47, 100)

x = np.linspace(0,1,1000)

_, ax = plt.subplots()
sns.lineplot(x=x, y=cr_prior_mean.pdf(x), label='mean Beta(33,67)')
sns.lineplot(x=x, y=cr_prior_map.pdf(x), label='map Beta(47,100)')
ax.set_xlabel('Conversion Probability')
ax.set_ylabel('Density')
ax.set_title('Conversion Probability Prior')
ax.legend()

In the case of the conversion probability λ, both the distributions are pretty similar. 
We’ll go ahead and choose the weaker one for good measure, so our prior is given by

λ∼Beta(33,67)

from scipy.stats import gamma

rr_prior_mean = gamma(a=2.3, scale=2.0)
rr_prior_map = gamma(a=5, scale=0.4)

x = list(range(20))
rr_mean = [rr_prior_mean.pdf(i) for i in x]
rr_map = [rr_prior_map.pdf(i) for i in x]

_, ax = plt.subplots()
sns.lineplot(x=x, y=rr_mean, label='mean Gamma(2.3,2.0)')
sns.lineplot(x=x, y=rr_map, label='map Gamma(5,0.4)')
ax.set_xlabel('Revenue Rate')
ax.set_ylabel('Density')
ax.set_title('Revenue Rate Prior')
ax.legend()

Similarly, in the case of the rate of revenue θ, let’s go ahead and choose the weaker prior which uses the mean of the k and Θ distributions from our MCMC algorithm. So we have

θ∼Gamma(2.3,2.0)

Now that we have our priors, we can run our experiment, calculate posteriors and results of the test. To find out more about how to do this, in particular for the outlined experiment, check out this post.

I hope you found this exploration of techniques used for choosing a prior helpful. Watch this space for the next part of the series!

[1] http://www.stats.org.uk/priors/Bayes6.pdf
[2] MCMC Intuition for Everyone by Rahul Agarwal — I found this really helpful to understand MCMC algorithms
[3] Maximum Likelihood Estimation VS Maximum A Posterior by Yang S

My code from this post can be found here.
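As a closing sanity check on the priors chosen above, their means are available in closed form: a Beta(a, b) distribution has mean a/(a+b), and a Gamma distribution with shape k and scale θ has mean kθ. A quick stdlib-only check that they sit near the historical estimates discussed in this post:

```python
# Closed-form means of the chosen priors, as a quick sanity check.
def beta_mean(a, b):
    return a / (a + b)

def gamma_mean(shape, scale):
    return shape * scale

# Conversion probability prior: Beta(33, 67) -> mean 0.33
cr_mean = beta_mean(33, 67)

# Revenue rate prior: Gamma(shape=2.3, scale=2.0) -> mean 4.6
rr_mean = gamma_mean(2.3, 2.0)

print(cr_mean, rr_mean)
```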
Pheatmap Draws Pretty Heatmaps. A tutorial of how to generate pretty... | by Yufeng | Towards Data Science
Heatmap is one of the must-have data visualization toolkits for data scientists.

In R, there are many packages to generate heatmaps, such as heatmap(), heatmap.2(), and heatmaply(). However, my favorite one is pheatmap(). I am very positive that you will agree with my choice after reading this post.

In this post, I will go over this powerful data visualization package, pheatmap, by applying it to the NBA players’ basic stats in the 2019–2020 season.

The raw data is from Basketball Reference. You can either download the dataset manually or scrape the data by following one of my previous posts.

Ready to begin? Let’s go.

Language: R.
Package name: pheatmap.

install.packages("pheatmap")
library(pheatmap)

Data: 2019–2020 NBA players’ stats per game.

df = read.csv("../2019_2020_player_stats_pergame.csv")
head(df)

Above is the head of the data frame we are working on. It’s okay if you don’t understand what the column names mean, because they are all basketball stats; it doesn’t affect our exploration of heatmap plotting.

Data cleaning: filter out players who played less than 30 minutes per game, remove duplicates of players who got traded during the season, and fill NA values with 0.

df_filt = df[df$MP >= 30 ,]
TOT_players = df_filt[df_filt$Tm == "TOT","Player"]
df_used = df_filt[((df_filt$Player %in% TOT_players) & (df_filt$Tm == "TOT")) | (!(df_filt$Player %in% TOT_players)),]
df_used[is.na(df_used)] = 0

First, pheatmap only takes a numeric matrix object as input. So, we need to transfer the numeric part of the data frame to a matrix by removing the first 5 columns of categorical data.

df_num = as.matrix(df_used[,6:30])

Since the row names of the matrix are the default row labels in the heatmap, we’d better make them meaningful by avoiding a numeric index.

rownames(df_num) = sapply(df_used$Player,function(x) strsplit(as.character(x),split = "\\\\")[[1]][1])

The different columns of the players’ data vary greatly in range, so we need to scale them to keep the heatmap from being dominated by the large values.

df_num_scale = scale(df_num)

The scale function in R performs standard scaling on the columns of the input data: it first subtracts the column means from the columns (center step) and then divides the centered columns by the column standard deviations (scale step). This scales the data to a distribution with mean 0 and standard deviation 1. Its equation is shown below, where x is the data, u is the column mean and s is the column standard deviation.

z = (x - u) / s

You can turn off the center step or the scale step in R by setting center = FALSE or scale = FALSE, respectively. Let’s visualize the effect of scaling by plotting the density of players’ points per game before and after scaling.

plot(density(df$PTS),xlab = "Points Per Game",ylab="Density",main="Comparison between scaling data and raw data",col="red",lwd=3,ylim=c(0,0.45))
lines(density(df_num_scale[,"PTS"]),col="blue",lwd=3)
legend("topright",legend = c("raw","scaled"),col = c("red","blue"),lty = "solid",lwd=3)

After scaling, the data is ready to be fed into the function. Let’s look at the default pheatmap.

pheatmap(df_num_scale,main = "pheatmap default")

The default behavior of the function includes hierarchical clustering of both rows and columns, in which we can observe similar players and stat types in close positions. For example, there’s a super warm area in the middle part of the heatmap. It corresponds to a bunch of superstars, which includes James Harden, Luka Doncic, LeBron James, and Damian Lillard.

If you want to turn off the clustering, you can set either cluster_cols or cluster_rows to FALSE. The code below cancels the column clustering.

pheatmap(df_num_scale,cluster_cols = F,main = "pheatmap row cluster")

Actually, the function itself can do both row and column scaling in the heatmap. It mainly serves a visualization purpose, for comparison across rows or columns. The following code shows the row-scaling heatmap.

pheatmap(df_num_scale,scale = "row",main = "pheatmap row scaling")

The annotation function is one of the most powerful features of pheatmap. Specifically, you can input an independent data frame with annotations for the rows or columns of the heatmap matrix.

For example, I annotated each player with their position, made it a data frame object and input it to the pheatmap function. One thing to note: the row names of the annotation data frame have to match the row names or column names of the heatmap matrix, depending on your annotation target.

pos_df = data.frame("Pos" = df_used$Pos)
rownames(pos_df) = rownames(df_num) # name matching
pheatmap(df_num_scale,cluster_cols = F,annotation_row = pos_df,main = "pheatmap row annotation")

You can see from the heatmap that there is another column of colors indicating the position of the players. We see the players are not clustered by their positions, which suggests the relationship between players’ positions and their playing styles is becoming vague with the evolution of basketball.

Also, we can add the column annotation as well. I named the stats with their categories, which include Offence, Defence, and others.

cat_df = data.frame("category" = c(rep("other",3),rep("Off",13),rep("Def",3),"Off",rep("Def",2),rep("other",2),"Off"))
rownames(cat_df) = colnames(df_num)

Then, I plot the heatmap with column annotation only. This time I only turn on the column clustering.

pheatmap(df_num_scale,cluster_rows = F, annotation_col = cat_df,main = "pheatmap column annotation")

We can see from the heatmap that the offense-related stats tend to be clustered together.

The last feature I would like to introduce is the heatmap cutting feature. Sometimes, it gives a clearer visualization if we cut the heatmap by the clustering. By cutting a heatmap apart, each stand-alone block represents its own population. Let’s see the row-wise cutting in the following example.

pheatmap(df_num_scale,cutree_rows = 4,main = "pheatmap row cut")

In the code, I input cutree_rows = 4, which means cutting the heatmap row-wise into 4 clusters. The aforementioned group of superstars is present in the third block of the cut heatmap.

We can do a similar thing to the columns as below.

pheatmap(df_num_scale,cutree_cols = 4,main = "pheatmap column cut")

In this way, similar stats are shown close to each other.

Up until now, I have gone through all the major features of pheatmap. Of course, there are a lot more details in the package, such as the color palette, clustering distance metrics, and so on. For those who are interested, please refer to the function manual.

I hope this tutorial can help you strengthen your visualization toolkit. If you have enjoyed reading this post, you can also find interesting stuff in my other posts.
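One arithmetic detail from the scaling step earlier is worth pinning down: with centering on, R's scale() divides each centered column by its sample standard deviation (the n - 1 denominator). Here is a small hand-check of that formula, z = (x - u) / s, sketched in Python with made-up numbers rather than the NBA data:

```python
# Hand-check of column standardization as R's scale() computes it
# (sample standard deviation, n - 1 denominator).
def scale_column(values):
    n = len(values)
    u = sum(values) / n                                   # center step
    centered = [v - u for v in values]
    s = (sum(c * c for c in centered) / (n - 1)) ** 0.5   # scale step
    return [c / s for c in centered]

pts = [10.0, 20.0, 30.0]        # a made-up points-per-game column
print(scale_column(pts))        # -> [-1.0, 0.0, 1.0]
```

Running scale() on the same three-value column in R should reproduce the same -1, 0, 1 pattern.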
Default Value in MongoDB using Node.js - GeeksforGeeks
28 Mar, 2022

The Mongoose module is one of the most powerful external modules of NodeJS. Mongoose is a MongoDB ODM, i.e. an Object Data Modeling library, used to translate between the code on the NodeJS server and its representation in MongoDB. The Mongoose module provides several functions to manipulate the documents of the collections of a MongoDB database. (Refer this Link).

Default Value: This value is entered when no value is supplied for the field in the collection.

Installing Module:

npm install mongoose

Project Structure:

Running the server on Local IP: Data is the directory where the MongoDB server is present.

mongod --dbpath=data --bind_ip 127.0.0.1

Filename: index.js

Javascript

// Importing mongoose module
const mongoose = require("mongoose");

// Database Address
const url = "mongodb://localhost:27017/GFG";

// Connecting to database
mongoose
  .connect(url)
  .then((ans) => {
    console.log("Connected Successful");
  })
  .catch((err) => {
    console.log("Error in the Connection");
  });

// Schema class
const Schema = mongoose.Schema;

// Creating Structure of the collection
const collection_structure = new Schema({
  name: {
    type: String, // String type
    required: true,
  },
  marks: {
    type: Number, // Number type
    default: 100,
  },
});

// Creating collection
const collections = mongoose.model("GFG2", collection_structure);

// Inserting one document
collections
  .create({
    // Inserting value of only one key
    name: "aayush",
  })
  .then((ans) => {
    console.log(ans);
  })
  .catch((err) => {
    console.log(err.message);
  });

Run the index.js file using the below command:

node index.js

Output: Console output - the default value is inserted.
Java.net.InetSocketAddress class in Java - GeeksforGeeks
05 Oct, 2021

This class implements an IP socket address (a combination of IP address and port number). The objects of this class are immutable and can be used for binding and connecting purposes.

Constructors :

1. InetSocketAddress(InetAddress addr, int port) : This constructor follows the general structure of a socket address, with attributes for the Inet address and the port number.

Syntax :
public InetSocketAddress(InetAddress addr, int port)
Parameters :
addr : IP address
port : port number

2. InetSocketAddress(int port) : Creates a socket address object with the specified port number and a wildcard IP address. A wildcard IP address has the value 0.0.0.0 and it binds your socket to all network cards.

Syntax :
public InetSocketAddress(int port)
Parameters :
port : port number

3. InetSocketAddress(String hostname, int port) : Creates a socket address object and binds it to the specified port and host. Hostname resolution is performed to find the IP address, and that address is used for binding, not the host name. If the resolution returns null, the address will be flagged as unresolved.

Syntax :
public InetSocketAddress(String hostname, int port)
Parameters :
hostname : host name
port : port number

Methods :

1. createUnresolved() : Creates a socket address with the given host and port number where no attempt is made to resolve the host name and the address is marked as unresolved.

Syntax :
public static InetSocketAddress createUnresolved(String host, int port)
Parameters :
host : host name
port : port number

2. getPort() : Returns the port number for this socket address.

Syntax : public final int getPort()

3. getAddress() : Returns the IP address of this socket address.

Syntax : public final InetAddress getAddress()

4. getHostName() : Returns the host name, using reverse lookup if it was created using an IP address.

Syntax : public final String getHostName()

5.
getHostString() : Returns the host name if it was created with a hostname, or the string representation of the address literal used for creation.

Syntax : public final String getHostString()

6. isUnresolved() : Returns a boolean value indicating whether this address is resolved or not.

Syntax : public final boolean isUnresolved()

7. toString() : Returns the string representation of this InetSocketAddress object. First the toString() method is called on the InetAddress part, and then the port number is appended after a colon.

Syntax : public String toString()

8. equals() : Compares whether this socket address object is equal to the specified object. The two are equal if they represent the same InetAddress and port number, or the same hostname and port number in the case of an unresolved address.

Syntax : public final boolean equals(Object obj)
Parameters :
obj : object to compare with

9. hashCode() : Returns the hashcode for this InetSocketAddress object.

Syntax : public final int hashCode()

Java Implementation :

Java

// Java program to illustrate various
// InetSocketAddress class methods
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.UnknownHostException;

public class InetsockAddress
{
    public static void main(String[] args) throws UnknownHostException
    {
        // Following constructors can be used to create InetSocketAddress
        // objects.
        InetSocketAddress isa1 = new InetSocketAddress(5500);
        InetSocketAddress isa2 = new InetSocketAddress("localhost", 5050);
        InetAddress ip = InetAddress.getByName("localhost");
        InetSocketAddress isa3 = new InetSocketAddress(ip, 8800);

        // createUnresolved() does not attempt to resolve the hostname.
        InetSocketAddress isa4 = InetSocketAddress.createUnresolved("abc", 5055);

        // These InetSocketAddress objects can be used to create sockets
        // in socket programming, in place of specifying individually the IP
        // address and port number. Please refer to TCP articles for their
        // further use.
        // These can also be used to retrieve information about the
        // socketAddress objects.

        // getHostName() method
        System.out.println("Hostname : " + isa1.getHostName());

        // getHostString() method
        System.out.println("Host string : " + isa1.getHostString());

        // getAddress() method
        System.out.println("Inet address : " + isa1.getAddress());

        // getPort() method
        System.out.println("Port : " + isa1.getPort());

        // isUnresolved() method
        System.out.println("isUnresolved : " + isa1.isUnresolved());

        // equals() method
        System.out.println("isa1==isa2 : " + isa1.equals(isa2));

        // toString() method
        System.out.println("toString : " + isa1.toString());

        // hashCode() method
        System.out.println("hashCode : " + isa1.hashCode());
    }
}

Output :

Hostname : 0.0.0.0
Host string : 0.0.0.0
Inet address : 0.0.0.0/0.0.0.0
Port : 5500
isUnresolved : false
isa1==isa2 : false
toString : 0.0.0.0/0.0.0.0:5500
hashCode : 5500

References : Official Java Documentation

This article is contributed by Rishabh Mahrsee.
Python Pandas - Create a datetime with DateTimeIndex
To create a datetime, we will use the date_range(). The periods and the time zone will also be set with the frequency. At first, import the required libraries − import pandas as pd DatetimeIndex with period 8 and frequency as M i.e. months. The timezone is Australia/Sydney − datetime = pd.date_range('2021-09-24 02:35:55', periods=8, tz='Australia/Sydney', freq='M') Display the datetime − print("DateTime...\n", datetime) Following is the code − import pandas as pd # DatetimeIndex with period 8 and frequency as M i.e. months # timezone is Australia/Sydney datetime = pd.date_range('2021-09-24 02:35:55', periods=8, tz='Australia/Sydney', freq='M') # display print("DateTime...\n", datetime) # get the day name print("\nGetting the day name..\n",datetime.day_name()) # get the month name print("\nGetting the month name..\n",datetime.month_name()) # get the year print("\nGetting the year name..\n",datetime.year) # get the hour print("\nGetting the hour..\n",datetime.hour) # get the minutes print("\nGetting the minutes..\n",datetime.minute) # get the seconds print("\nGetting the seconds..\n",datetime.second) This will produce the following output − DateTime... DatetimeIndex(['2021-09-30 02:35:55+10:00', '2021-10-31 02:35:55+11:00', '2021-11-30 02:35:55+11:00', '2021-12-31 02:35:55+11:00', '2022-01-31 02:35:55+11:00', '2022-02-28 02:35:55+11:00', '2022-03-31 02:35:55+11:00', '2022-04-30 02:35:55+10:00'], dtype='datetime64[ns, Australia/Sydney]', freq='M') Getting the day name.. Index(['Thursday', 'Sunday', 'Tuesday', 'Friday', 'Monday', 'Monday','Thursday', 'Saturday'], dtype='object') Getting the month name.. Index(['September', 'October', 'November', 'December', 'January', 'February','March', 'April'], dtype='object') Getting the year name.. Int64Index([2021, 2021, 2021, 2021, 2022, 2022, 2022, 2022], dtype='int64') Getting the hour.. Int64Index([2, 2, 2, 2, 2, 2, 2, 2], dtype='int64') Getting the minutes.. 
Int64Index([35, 35, 35, 35, 35, 35, 35, 35], dtype='int64') Getting the seconds.. Int64Index([55, 55, 55, 55, 55, 55, 55, 55], dtype='int64')
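Because the index above is timezone-aware, it can also be converted between zones. A minimal sketch (the start timestamp and zone mirror the example above; a daily frequency is used here for brevity):

```python
import pandas as pd

# Build a small timezone-aware index like the one above,
# then convert it to UTC to see the offset applied.
idx = pd.date_range('2021-09-24 02:35:55', periods=3,
                    tz='Australia/Sydney', freq='D')
utc_idx = idx.tz_convert('UTC')

# Sydney is UTC+10 (AEST) on these dates, so 02:35 local time
# corresponds to 16:35 on the previous day in UTC.
print(utc_idx)
```

The wall-clock instants are unchanged by tz_convert; only the displayed zone and offset differ.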
How to use Tkinter in Python to edit the title bar?
Tkinter creates a window or frame that appears after executing the program. Since all the functions and modules in Tkinter are independent, we can specifically use a particular function to customize the window attributes. Tkinter creates a default root window for every application. To customize or edit the default title of Tkinter window, we can use the following method, title(text= “your title”) Let us create a window by initiating an object of Tkinter frame and edit the title of the window or frame. #Import the library from tkinter import * #Create an instance of window win= Tk() #Set the geometry of the window win.geometry("700x400") #Set the title of the window win.title("tutorialspoint.com") #Create a label if needed Label(win, text= "The Title is tutorialspoint.com", font=('Helvetica bold',20), fg= "green").pack(pady=20) #Keep running the window or frame win.mainloop() The above Python code will set the title as tutorialspoint.com.
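Calling title() with no arguments reads the current title back, which is handy when changing it at runtime. A minimal sketch (the window creation is wrapped in a function and guarded so it degrades gracefully on headless systems without a display):

```python
import tkinter as tk

def make_titled_window(text):
    """Create a Tk window, set its title, and return the window."""
    win = tk.Tk()
    win.title(text)
    return win

try:
    win = make_titled_window("tutorialspoint.com")
    # title() with no arguments returns the current title string.
    print("Current title:", win.title())
    win.destroy()
except tk.TclError:
    # No display available (e.g. a headless server) -- skip the demo.
    print("No display; skipping window creation")
```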
AWS CodePipeline Serverless Deployment | Towards Data Science
Serverless applications are a natural fit for machine learning and data science. They are lightweight, pay-as-you-go functions that reduce the time spent setting up servers or instances for your machine learning infrastructure. AWS has made it easy to construct a CI/CD pipeline with CodeCommit, CodeBuild, CodeDeploy, and CodePipeline. Data scientists can spend less time on cloud architecture and DevOps, and more time fine-tuning their models and analyzing data. Plus, the pay-as-you-go model is cheaper than paying for cloud servers/EC2 instances to run 24/7 just to host the infrastructure.

In this article, we'll go over using AWS native tools to construct a CI/CD pipeline for deploying a simple machine learning model as a serverless endpoint. This pipeline (using CodePipeline) reads all code from a CodeCommit repository and triggers a CodeBuild. The CodeBuild uses the Serverless Framework (a software package for YAML + CLI development) to deploy the code in CodeCommit to an AWS Lambda function. The goal is to trigger an AWS Lambda deployment whenever we make changes to the CodeCommit repository.

This article will cover a lot of concepts that may be unfamiliar to many of you. Please see the Terminology and Suggested Reading below for more details.

DISCLAIMER: Because AWS Lambda is lightweight, it may not be appropriate for huge, deep learning models. See Why We Don't Use Lambda for Serverless Machine Learning for more information.

AWS Lambda and Serverless are often used interchangeably. To avoid confusion, I'll refer to AWS Lambda when talking about the pay-as-you-go architecture, and to the Serverless Framework when talking about the CLI tool used to deploy to Lambda.

AWS SQS — Simple Queue Service. A fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications.
AWS SageMaker — helps data scientists and developers prepare, build, train, and deploy high-quality machine learning (ML) models quickly by bringing together a broad set of capabilities purpose-built for ML. Used here to deploy the ML model separately from AWS Lambda.

AWS Lambda — a serverless compute service that lets you run code without provisioning or managing servers, creating workload-aware cluster scaling logic, maintaining event integrations, or managing runtimes.

Serverless Framework — a software package for YAML + CLI development and deployment to AWS Lambda.

AWS CloudWatch — a tool for monitoring and observability. Keeps track of Lambda output in logs.

To understand more about AWS Lambda, I highly recommend reading Serverless Functions and Using AWS Lambda with S3 Buckets. To understand more about deploying with the Serverless Framework, see the Serverless Framework Documentation. To understand CI/CD and CodePipeline development in AWS, I highly recommend reading CI/CD- Logical and Practical Approach to Build Four Step pipeline on AWS. To understand more about SageMaker and SQS, see the documentation on AWS.

The workflow is as follows:

1. The developer deploys a model on SageMaker.
2. The developer updates, via git, the CodeCommit repository that contains the logic for the architecture.
3. CodePipeline notices the change in the CodeCommit repository via CloudWatch. It then triggers CodeBuild to create/update, link, and deploy the services (the SQS queues and the Lambda function) in AWS.
4. The developer then sends a message to TriggerLambdaSQS. TriggerLambdaSQS then triggers the ModelPredictLambda function with the input data. The Lambda will use the model deployed from SageMaker. If there is an error in the message format or the Lambda, the failed message will be routed to the DeadletterSQS. If the Lambda function processes the message successfully, it will output the SageMaker result to the CloudWatch Logs.

So why not just use CodeDeploy? It is correct that CodeDeploy allows Lambda deployment.
For every commit to CodeCommit, you can manually call a CodeBuild and then a CodeDeploy. We want to automate the build and deployment stages via CodePipeline, and not worry about manual triggers. In CodeDeploy, deploying packages on lambda functions is different than deploying packages on EC2 instances. In deploying on a lambda function, CodeBuild takes only the YAML file that lists lambda deployment configurations and stores it in an S3 Bucket. CodeDeploy downloads the YAML file and deploys it on a lambda function. In deploying on an EC2, CodeBuild takes all the files from CodeCommit, zips them, and stores the zip file in an S3 Bucket. CodeDeploy downloads and unzips the file on an EC2. CodePipeline specifies that artifacts generated from CodeBuild MUST be in zip format for them to be passed to CodeDeploy. This makes automated lambda deployment tricky, as it depends on a YAML file for deployment. This is a known problem in AWS discussion forums and Amazon did promise to deliver a fix for this bug. As of 2021, there still hasn’t been an update. The workaround is to create a CodePipeline that uses both CodeCommit and CodeBuild. In CodeBuild, you install all packages (including the serverless framework) and run the serverless framework to deploy the application to AWS Lambda. Since you’re relying on the serverless framework package instead of CodeDeploy for deployment, you’ll need to create a different YAML file called serverless.yml (no need for appspec.yml, a default YAML file for CodeDeployment). We’ll discuss more later on. For this tutorial, we’ll use scikit-learn’s Boston home prices dataset. We’ll train an XGBoost model to predict the median home price in Boston. We’ll deploy the model on SageMaker, and use it for predictions in the AWS Lambda architecture. We’ll first want to create a folder called repo. This will contain a folder (lambda_deployment) and a Python script (boston_sagemaker_deployment.py) on the same level. 
lambda_deployment will contain all the relevant files needed to successfully deploy a lambda function in CodePipeline. Below is a folder structure of the files. buildspec.yml — a collection of build commands and related settings, in YAML format, that CodeBuild uses to run a build and deploy using the serverless framework model_predict_lambda.py — the lambda function to call the SageMaker model for predictions requirements.txt — list of python packages to download from pip serverless.yml — configurations in YAML to define a serverless service. This also includes additional components (SQS) that are needed to be deployed along with the lambda boston_sagemaker_deployment.py — script to build and deploy a model on AWS using SageMaker First thing is to deploy a simple XGBoost Model to AWS. Note: we’ll want to create an IAM role called sagemaker-role, with full permissions to access SageMaker. I won’t go into any detail about XGBoost, as it is outside the scope of this tutorial. After that, we’ll execute the script. python boston_sagemaker_deployment.py We only have to do this once. It’ll take a few minutes, but it’ll give a SUCCESS or FAILED message. If SUCCESS, it’ll return an endpoint for our model. Check on the SageMaker AWS console that the endpoint is there. CodeCommit is Amazon’s source control service that hosts secure Git-based repositories. We’ll upload the lambda_deployment folder there. We can either use git to add the files or add them manually via the console. UPDATE: CodeBuild only recognizes files from the head of repo, not from the head of repo/lambda_deployment. To work around this, move all 4 files in lambda_deployment up to the root of repo. Once those are moved, delete the rest of the files/folders. CodeBuild allows us to build and test code with continuous scaling. As mentioned before, we’ll also use the serverless framework to deploy the lambda function and services in CodeBuild. 
We'll add in buildspec.yml, which contains the configurations for CodeBuild. Since we're using a clean environment/image, we need to specify the commands for downloading the required packages. This assumes a clean Ubuntu environment:

version: 0.2
phases:
  pre_build:
    commands:
      - echo "Running pre build commands"
      - apt-get update
      - apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 7EA0A9C3F273FCD8
      - add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"
      - apt-get -y install docker-ce docker-ce-cli containerd.io
      - npm install -g serverless --unsafe
      - npm i -D serverless-dotenv-plugin
      - npm install serverless-plugin-aws-alerts --save-dev
      - sls plugin install -n serverless-python-requirements
      - apt-get -y install python3-pip
      - pip3 install awscli --upgrade --user
      - pip3 install -r requirements.txt
      - echo "Reducing size of SageMaker to run on lambda"
      - pip3 install sagemaker --target sagemaker-installation
      - cd sagemaker-installation
      - find . -type d -name "tests" -exec rm -rfv {} +
      - find . -type d -name "__pycache__" -exec rm -rfv {} +
      - zip -r sagemaker_lambda_light.zip .
      - cd ..
  build:
    commands:
      - echo "Running build commands"
      - sls deploy --verbose
      - echo "Finished deploying to Lambda and SQS"

Because it is a clean image, we are responsible for installing pip, awscli, serverless, docker, etc. Note: Lambda has a limit on how much we can install alongside the function. SageMaker has a lot of dependencies, so we're removing the components of SageMaker that are not relevant for deployment. The pre_build commands focus on installing the packages, including the pip3 packages in requirements.txt (shown below):

boto3

Now that we have the CodeBuild commands, we next want to set up serverless.yml. Serverless.yml contains configurations to deploy the lambda function and associated services to trigger the function, store the function outputs, send alarms on threshold limits, log function behavior, etc. The file will look like this.
service: model-predict

package:
  include:
    - ./model_predict_lambda.py

provider:
  name: aws
  runtime: python3.6
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - "lambda:InvokeFunction"
        - "iam:GetRole"
        - "sqs:CreateQueue"
        - "sqs:GetQueueUrl"
        - "sqs:SendMessage"
        - "ecr:DescribeRepositories"
        - "cloudformation:DescribeStacks"
        - "sagemaker:InvokeEndpoint"
      Resource: "*"

functions:
  get_model_predictions:
    handler: model_predict_lambda.model_predict_handler
    provisionedConcurrency: 1 # optional, count of provisioned lambda instances
    reservedConcurrency: 1 # optional, reserved concurrency limit for this function (by default, AWS uses the account concurrency limit)
    events:
      - sqs:
          arn:
            Fn::GetAtt:
              - TriggerLambdaSQS
              - Arn
          batchSize: 1

resources:
  Resources:
    TriggerLambdaSQS:
      Type: "AWS::SQS::Queue"
      Properties:
        QueueName: "TriggerLambdaSQS"
        VisibilityTimeout: 30
        MessageRetentionPeriod: 60
        RedrivePolicy:
          deadLetterTargetArn:
            "Fn::GetAtt":
              - DeadletterSQS
              - Arn
          maxReceiveCount: 1
    DeadletterSQS:
      Type: "AWS::SQS::Queue"
      Properties:
        QueueName: "DeadletterSQS"
        VisibilityTimeout: 30
        MessageRetentionPeriod: 60

plugins:
  - serverless-python-requirements

Here’s a brief overview of what the serverless framework is creating:

a service called model-predict, which will encapsulate the lambda function and its resources
a package of all python files required for the lambda function (in this case, model_predict_lambda.py)
an AWS provider and permissions assigned to the IAM role created for this function
a lambda function called get_model_predictions, which includes a handler that points to the model_predict_handler function in model_predict_lambda.py. It also contains concurrency limits and a reference to the SQS queue listed in resources
a resource section that contains TriggerLambdaSQS and DeadletterSQS. TriggerLambdaSQS is where we send the messages to the lambda function. DeadletterSQS contains all the messages that failed when being processed in lambda.
A dead letter queue is added to make sure that TriggerLambdaSQS doesn’t hold onto the failed messages (which can be resent to the lambda function and trigger a continuous loop of failed messages, resulting in a lot of failed lambda invocations and a surge in prices).

Now, we want to create the python file to store the lambda function. Because we’re using an SQS to trigger a Lambda function, we’re dealing with a queue of asynchronous events. Hence, we traverse the “Records” parameter to process the list of events in the queue at the current moment. For now, we’re going to print out the result and return a successful message (the message will be deleted immediately, as we set the batchSize in serverless.yml to 1).

Now that we uploaded repo to CodeCommit, let’s create a CodeBuild. Navigate to Developer Tools -> CodeBuild -> Build Projects. From there, click on the orange button, Create build project.

Let’s name the project serverless-build. We then scroll down to initiate the source of the CodeBuild. Select the appropriate branch where the code is.

Next, we’ll configure the environment for the build. We have the choice of using a custom Docker image on Amazon ECS or creating a new image. I recommend using Ubuntu, as it is compatible with a lot of machine learning libraries. Because we haven’t created a service role for the CodeBuild earlier, we’ll create a new service role for the account.

We’ll then specify the buildspec file, batch configurations, and artifacts. For the buildspec, we already defined buildspec.yml in repo, so we just point to that file. Artifacts are zip files stored in S3 that are outputted from CodeBuild and sent to other stages in the pipeline. Since we are just deploying using serverless in the build step, we can leave this empty.

Next, we’ll instantiate CloudWatch logs to monitor the CodeBuild process. We’ll leave everything blank, and check the CloudWatch logs checkbox. Finally, we click on Create build project.
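Going back to model_predict_lambda.py: the article describes the handler but the code itself was not shown above, so here is a minimal sketch of what it could look like. The endpoint name, the CSV response parsing, and the optional client argument (added so the function can be exercised without AWS credentials) are assumptions, not the article’s exact code:

```python
import json

# Assumption: the name the SageMaker endpoint was deployed under
ENDPOINT_NAME = "boston-xgboost-endpoint"

def extract_payloads(event):
    """SQS delivers a batch under "Records"; each record body is the JSON we sent."""
    return [json.loads(record["body"])["payload"]
            for record in event.get("Records", [])]

def model_predict_handler(event, context, client=None):
    if client is None:
        # Real SageMaker runtime client, only created when none is injected
        import boto3
        client = boto3.client("sagemaker-runtime")
    predictions = []
    for payload in extract_payloads(event):
        response = client.invoke_endpoint(
            EndpointName=ENDPOINT_NAME,
            ContentType="text/csv",  # the payload is one CSV row of features
            Body=payload,
        )
        prediction = response["Body"].read().decode("utf-8")
        print(prediction)  # ends up in the function's CloudWatch log group
        predictions.append(prediction)
    return {"statusCode": 200, "body": json.dumps(predictions)}
```

With batchSize set to 1 in serverless.yml, each invocation normally carries a single record; a record that raises an exception is routed to DeadletterSQS by the redrive policy.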
So our build project is created. We can click on the Start Build orange button, but we can let CodePipeline handle that.

Remember how we created a new service role in CodeBuild titled codebuild-serverless-build-service-role? While this creates a new service role with that name, we still need to add permissions to that role so that it can access other AWS components in our architecture (Lambda, SQS, CloudFormation, CloudWatch logs, etc). Before creating the CodePipeline, please check that the following permissions are added to the service role:

AmazonSQSFullAccess
IAMFullAccess
AmazonS3FullAccess
CloudWatchLogsFullAccess
AWSLambdaSQSQueueExecutionRole
AWSCloudFormationFullAccess
AWSLambda_FullAccess

CodePipeline watches repo in CodeCommit. It triggers a pipeline whenever we add a commit to repo. This is convenient because it automates a deployment whenever we make changes. Navigate to Developer Tools -> CodePipeline -> Pipeline. Click on the orange button Create pipeline.

We now can configure settings for the pipeline. In Step 1, name the pipeline serverless-pipeline. Because we haven’t created a service role for the CodePipeline earlier, we’ll create a new service role for the account. Use default values for everything else. Use the default values for Advanced Settings and click Next.

We now add the source stage of the pipeline, which points to repo in CodeCommit. Use default values and the branch of your choosing, and click on Next.

We now add the build stage of the pipeline. We set the build provider to CodeBuild and the project name to serverless-build (which is what we created earlier). We do a single build and click Next.

Because we didn’t add a CodeDeploy stage (and because we’re doing serverless deployment in CodeBuild), we can skip this step. Click on Skip deploy stage.

This is just to review everything before creating the pipeline. It looks good, so let’s click on Create pipeline.

Our CodePipeline is now completed. This has the Source and Build stage.
The Source stage is blue because it is in progress. It will be green if it is successful, and red if it failed. Once both stages are green, we can begin testing our architecture.

If everything builds correctly, we should see Lambda -> Functions -> model-predict-dev-get_model_predictions created. We should also see two queues created (DeadletterSQS and TriggerLambdaSQS).

If we click on TriggerLambdaSQS -> Send and receive messages, we’ll see where we can send a message to the lambda function. Let’s send a JSON format to pass into the model. This is what we send in the Message Body:

{ "payload": "14.0507,0,18.1,0,0.597,6.657,100,1.5275,24,666,20.2,35.05,21.22"}

After clicking on Send Message, we navigate to CloudWatch -> Log Groups -> /aws/lambda/model-predict-dev-get_model_predictions. We get the latest time stamp and inspect the logs. We can see the output 75.875, which represents the predicted Boston Housing median home price, based on the input we sent.

We went over how to construct a CI/CD pipeline in AWS to deploy a lambda architecture. The architecture contains SQS queues for trigger/failed messages and CloudWatch to output the results to logs. We went over a workaround for creating a serverless pipeline by utilizing CodeBuild and the serverless framework for deployment.

We also created a script to utilize SageMaker for hosting the XGBoost model. We referred to the SageMaker endpoint in our lambda function and got the prediction of the Boston Housing median price with input fed from SQS. We are able to see the prediction in CloudWatch logs.

AWS CodePipeline has made it easier for data scientists to perform MLOps. Hopefully, Amazon can fix the bug in CodePipeline to allow YAML files to be passed between CodeBuild and CodeDeploy stages within the pipeline.

Note: Make sure to delete the SageMaker endpoint when you’re done.

github.com

Thanks for reading! If you want to read more of my work, view my Table of Contents.
If you’re not a Medium paid member, but are interested in subscribing to Towards Data Science just to read tutorials and articles like this, click here to enroll in a membership. Enrolling in this link means I get paid for referring you to Medium.
Deep Learning: Introduction to Tensors & TensorFlow | by Victor Roman | Towards Data Science
The goal of this article is to cover the following topics:

Introduction to Tensors
Graphs, variables & operations
Problem resolution with TensorFlow

TensorFlow is a framework developed and maintained by Google that enables mathematical operations to be performed in an optimized way on a CPU or GPU. We are going to focus on the GPU, since it is the fastest way we have to train a deep neural network.

Why TensorFlow?

Because of its flexibility and scalability
Because of its popularity

The key features that make TensorFlow the most popular Deep Learning library are:

TensorFlow uses tensors to perform the operations.
In TensorFlow, you first define the activities to be performed (build the graph), and then execute them (execute the graph). This allows the process to be optimized for the task at hand, greatly reducing the computation time.
TensorFlow enables code to be run in parallel or on one or more GPUs.

Okay, but what’s a tensor?

Although tensors were invented by physicists to be able to describe interactions, in the field of Artificial Intelligence they can be understood simply as containers of numbers.

Let’s now put all this into practice. We will code some tensors in python in order to better understand what they are and how they work.

Let’s imagine that we want to store the average grade of a student. We will use a 0D tensor, which is just a simple number or scalar.

import numpy as np

tensor_0D = np.array(5)
print("Average grade: \n{}".format(tensor_0D))
print("Tensor dimensions: \n{}".format(tensor_0D.ndim))

Let’s try now to store the grade of every subject that this student courses. We can use a 1D tensor to do so:

tensor_1D = np.array([4, 6, 8])
print("Subject grades: \n{}".format(tensor_1D))
print("Tensor dimensions: \n{}".format(tensor_1D.ndim))

But wait... What if we want to store each grade of every exam the student took on each subject? How could we do this if each subject had 3 exams?

By using a 2D tensor! Which, as we have seen before, is a matrix.
# 2D Tensor (matrix)
tensor_2D = np.array([[0, 1, 1],   # Subject 1
                      [2, 3, 3],   # Subject 2
                      [1, 3, 2]])  # Subject 3
print("Exam grades are:\n{}".format(tensor_2D))
print("Subject 1:\n{}".format(tensor_2D[0]))
print("Subject 2:\n{}".format(tensor_2D[1]))
print("Subject 3:\n{}".format(tensor_2D[2]))
print("Tensor dimensions: \n{}".format(tensor_2D.ndim))

We now want to store the grades of the subjects (which are annual) for four quarters, so it will be easier to access them if necessary in the future. How do you think we could organize them?

What if we add a dimension to our 2D tensor that indicates the quarter? We would get a 3D tensor (a 3D matrix, or cube).

tensor_3D = np.array([[[0, 1, 1],   # First quarter
                       [2, 3, 3],
                       [1, 3, 2]],
                      [[1, 3, 2],   # Second quarter
                       [2, 4, 2],
                       [0, 1, 1]]])
print("Exam grades per quarter are:\n{}".format(tensor_3D))
print("First quarter:\n{}".format(tensor_3D[0]))
print("Second quarter:\n{}".format(tensor_3D[1]))
print("Tensor dimensions: \n{}".format(tensor_3D.ndim))

What if we add a dimension to our tensor, so that we can have the grades per quarter of each subject for each student? It will be a 4D tensor (a vector of 3D matrices, or vector of cubes).

tensor_4D = np.array([[[[0, 1, 1],   # Jacob
                        [2, 3, 3],
                        [1, 3, 2]],
                       [[1, 3, 2],
                        [2, 4, 2],
                        [0, 1, 1]]],
                      [[[0, 3, 1],   # Christian
                        [2, 4, 1],
                        [1, 3, 2]],
                       [[1, 1, 1],
                        [2, 3, 4],
                        [1, 3, 2]]],
                      [[[2, 2, 4],   # Sofia
                        [2, 1, 3],
                        [0, 4, 2]],
                       [[2, 4, 1],
                        [2, 3, 0],
                        [1, 3, 3]]]])
print("The grades of each student are:\n{}".format(tensor_4D))
print("Jacob's grades:\n{}".format(tensor_4D[0]))
print("Christian's grades:\n{}".format(tensor_4D[1]))
print("Sofia's grades:\n{}".format(tensor_4D[2]))
print("Tensor dimensions: \n{}".format(tensor_4D.ndim))

And we could go on adding dimensions to our tensors indefinitely, to be able to store more data.

To give you an idea of how often tensors are used in the world of Deep Learning, the most common types of tensors are:

3D tensors: used in time series.
4D tensors: used with images.
5D tensors: used with videos.

Normally, one of the dimensions will be used to store the samples of each type of data. For example, with images: if we want to store 64 RGB images of 224x224 pixels, we will need a vector of 3D matrices or, what is the same, a 4D tensor.

How many dimensions do we need? We have 64 images of 224 pixels x 224 pixels x 3 channels (R, G and B). Therefore: (64, 224, 224, 3).

If you want to go deeper into tensors or see more examples, here is a very good resource: Tensors illustrated with cats.

We said before that in TensorFlow, first you define the operations to be carried out and then you execute them. To do this, you use a graph. And what is a graph? A simple example: the graph of a sum a + b. And here is a more complex example; for now, you do not have to understand it completely, it is just a glimpse of how intricate they can get.

First of all, we need to define and understand some basic concepts of TensorFlow (from now on, TF):

tf.Graph: represents a set of tf.Operations
tf.Operation: the operations determined by the equations defined by us
tf.Tensor: where we store the results of tf.Operations

In the beginning, the tf.Graph is transparent to us, because there is one default graph to which all our defined operations are added. It can be retrieved with tf.get_default_graph().

# We import the TensorFlow package and matplotlib for the charts
import tensorflow as tf
import matplotlib.pyplot as plt
%matplotlib inline

Let's start with something very simple, just a simple multiplication in TensorFlow.

# Variable definitions
x = tf.constant(6)
y = tf.constant(8)

# Operation definition
result = tf.multiply(x, y)
print(result)

As you can see, it didn't give us back the result. What it has done so far is create the network.

To give you an example, it's like a car. Now we have it assembled, but it still doesn't do what it was designed to do: move. To do that, we should turn it on. To do so, we will use tf.Session().
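Before running the real session, the build-then-execute split can be mimicked in a few lines of plain Python. This is a toy sketch for intuition only; the Node, constant, multiply and run names are made up here and are not part of TensorFlow:

```python
# Toy deferred-execution graph: each node stores an operation and its
# inputs, and nothing is computed until run() walks the graph.
class Node:
    def __init__(self, op, inputs=()):
        self.op = op          # callable producing this node's value
        self.inputs = inputs  # upstream Node objects

def constant(value):
    return Node(lambda: value)

def multiply(x, y):
    return Node(lambda a, b: a * b, (x, y))

def run(node):
    # Evaluate the upstream nodes first, then apply this node's op.
    args = [run(n) for n in node.inputs]
    return node.op(*args)

# Build phase: no multiplication happens here.
x = constant(6)
y = constant(8)
result = multiply(x, y)

# Execute phase: the graph is evaluated only now.
print(run(result))  # 48
```

The separation is the same one TF exploits: because the whole computation is known before it runs, the framework can optimize and place it on a device first.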
sess = tf.Session()
output = sess.run(result)
print(output)

To be able to visualize the graph, we should define a couple of functions:

from IPython.display import clear_output, Image, display, HTML

def strip_consts(graph_def, max_const_size=32):
    """Strip large constant values from graph_def."""
    strip_def = tf.GraphDef()
    for n0 in graph_def.node:
        n = strip_def.node.add()
        n.MergeFrom(n0)
        if n.op == 'Const':
            tensor = n.attr['value'].tensor
            size = len(tensor.tensor_content)
            if size > max_const_size:
                tensor.tensor_content = "<stripped %d bytes>" % size
    return strip_def

def show_graph(graph_def, max_const_size=32):
    """Visualize a TensorFlow graph."""
    if hasattr(graph_def, 'as_graph_def'):
        graph_def = graph_def.as_graph_def()
    strip_def = strip_consts(graph_def, max_const_size=max_const_size)
    code = """
        <script>
          function load() {{
            document.getElementById("{id}").pbtxt = {data};
          }}
        </script>
        <link rel="import" href="https://tensorboard.appspot.com/tf-graph-basic.build.html" onload=load()>
        <div style="height:600px">
          <tf-graph-basic id="{id}"></tf-graph-basic>
        </div>
    """.format(data=repr(str(strip_def)), id='graph' + str(np.random.rand()))

    iframe = """
        <iframe seamless style="width:1200px;height:620px;border:0" srcdoc="{}"></iframe>
    """.format(code.replace('"', '&quot;'))
    display(HTML(iframe))

And now we will show the previously defined graph:

show_graph(tf.get_default_graph().as_graph_def())

As we can see, the graph consists of two nodes of constant type and one of operator type (the multiplication). However, their names are not very indicative. Let's change this:

x2 = tf.constant(5.0, name='x2')
y2 = tf.constant(6.0, name='y2')
result = tf.multiply(x2, y2)

# Let's see what it looks like now:
show_graph(tf.get_default_graph().as_graph_def())

Once we have performed the operation, we should close the session to free its resources with sess.close().

Lastly, we can also indicate to TF the GPU on which we want it to execute the operations.
To do so, we can print a list of the available devices:

from tensorflow.python.client import device_lib

def get_available_devices():
    local_device_protos = device_lib.list_local_devices()
    return [x.name for x in local_device_protos]

print(get_available_devices())

We will select GPU:0 and perform the matrix multiplication of [3 3] by [2 2], which should result in 3x2 + 3x2 = 12. Let's check it out.

By using with ___ as ___: we make Python free the resources of the TF session by itself:

with tf.Session() as sess:
    with tf.device("/GPU:0"):
        matrix1 = tf.constant([[3., 3.]])
        matrix2 = tf.constant([[2.], [2.]])
        product = tf.matmul(matrix1, matrix2)
        output = sess.run(product)
        print(output)

Let's now create a 1D tensor of 64 values equally spaced between -5 and 5:

n_values = 64
x = tf.linspace(-5.0, 5.0, n_values)
sess = tf.Session()
result = sess.run(x)
print(result)

In addition to sess.run(_), there are other ways to evaluate tensors:

x.eval(session=sess)

We always need to remember to close the session:

sess.close()

We can also use an interactive session, which can help us by not having to constantly call .run() to have the results executed:

sess = tf.InteractiveSession()
x.eval()

Let's now see a tf.Operation; to do so, we will use "x" to create and visualize a Gaussian distribution. Its formula is f(x) = exp(-(x - mean)^2 / (2 * sigma^2)) * (1 / (sigma * sqrt(2 * pi))):

sigma = 1.0
mean = 0.0

# To implement the Gaussian distribution formula:
g1d = (tf.exp(tf.negative(tf.pow(x - mean, 2.0) / (2.0 * tf.pow(sigma, 2.0)))) *
       (1.0 / (sigma * tf.sqrt(2.0 * 3.1415))))

# To check that this operation has been successfully included in our tf.Graph:
if g1d.graph is tf.get_default_graph():
    print('All good')

plt.plot(g1d.eval())

# To see the dimensions
print(g1d.get_shape())
print(type(g1d.get_shape()))
print(g1d.get_shape().as_list())
print(type(g1d.get_shape().as_list()))

There will be times when we don't know the dimensions of a variable until the operation that returns its value is executed.
For these cases, we can use tf.shape(variable), which returns a tensor that will calculate at run time the dimensions of our result.

This is known as 'static shape' versus 'dynamic shape': the static shape is calculated taking into account the dimensions of the tensors and operations involved, while the dynamic shape is computed at execution time.

What happens if we define x as a 'placeholder'? A placeholder is like a reservation: it indicates that a tensor will be there, but it is not necessary to define it at that moment. For example, by defining:

x = tf.placeholder(tf.int32, shape=[5])

we know that x will hold a 1D tensor with 5 elements, as confirmed by x.get_shape():

print(x.get_shape())

But we don't know what values will form it until we tell it.

Differences between placeholders and variables:

Variables
They are used to accommodate parameters that are learned during training; therefore, their values are derived from training.
They require an initial value to be assigned (which can be random).

Placeholders
They reserve space for data (e.g. for the pixels of an image).
They do not require a value to be assigned to start (although they can take one).

The obtained value, 5, is the static value of the dimensions of x. But what happens if we apply tf.unique() to x?

y, _ = tf.unique(x)
print(y.get_shape())

What happens is that tf.unique() returns the unique values of x, which are not known at first, since x is defined as a placeholder, and a placeholder does not have to be defined until the moment of execution, as we said before. In fact, let's see what happens if we feed "x" with two different values:

with tf.Session() as sess:
    print(sess.run(y, feed_dict={x: [0, 1, 2, 3, 4]}).shape)
    print(sess.run(y, feed_dict={x: [0, 0, 0, 0, 1]}).shape)

Look at that! The size of y changes depending on what tf.unique() returns. This is called the "dynamic shape", and it is always defined at run time; it will never come back unknown. Because of this, TensorFlow supports operations like tf.unique(), whose results can have a variable size.
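The same value-dependent output size can be reproduced outside TensorFlow with NumPy's unique function. This is an analogy added for illustration, not code from the original post:

```python
import numpy as np

# The number of unique values depends on the data itself, so the
# output shape is only known once the actual input values are known.
print(np.unique([0, 1, 2, 3, 4]).shape)  # (5,)
print(np.unique([0, 0, 0, 0, 1]).shape)  # (2,)
```

NumPy computes eagerly, so the shape is simply known after the call; TensorFlow's graph mode has to carry the "unknown until execution" shape symbolically instead.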
So, now you know: every time you use operations whose output size can vary, you will need to use tf.shape(variable) to calculate the dynamic shape of a tensor.

sy = tf.shape(y)

# Returns a list with the dimensions
print(sy.eval(feed_dict={x: [0, 1, 2, 3, 4]}))
print(sy.eval(feed_dict={x: [0, 0, 0, 0, 1]}))

# We access the dimension of interest
print(sy.eval(feed_dict={x: [0, 1, 2, 3, 4]})[0])
print(sy.eval(feed_dict={x: [0, 0, 0, 0, 1]})[0])

Now we can perform operations that take into account the size of an operation's output, which we do not know in the first instance:

print(tf.shape(y).eval(feed_dict={x: [0, 1, 4, 1, 0]}))
print(type(tf.shape(y).eval(feed_dict={x: [0, 1, 4, 1, 0]})))
print(tf.stack([y, y[::-1], tf.range(tf.shape(y)[0])]).eval(feed_dict={x: [0, 1, 4, 1, 0]}))

Let's now see a Gaussian distribution in 2D:

g1d_r = tf.reshape(g1d, [n_values, 1])
print(g1d.get_shape().as_list())
print(g1d_r.get_shape().as_list())

# We multiply the row vector of the 1D Gaussian by the column vector to obtain the 2D version
g2d = tf.matmul(tf.reshape(g1d, [n_values, 1]), tf.reshape(g1d, [1, n_values]))

# To visualize it
plt.imshow(g2d.eval())

To see the list of the operations included in our tf.Graph:

ops = tf.get_default_graph().get_operations()
print([op.name for op in ops])

As always, I hope you enjoyed the post and that you have learned the basics of tensors and TensorFlow and how they are used.

If you liked this post then you can take a look at my other posts on Data Science and Machine Learning here.

If you want to learn more about Machine Learning and Artificial Intelligence, follow me on Medium and stay tuned for my next posts!
Calendar toString() Method in Java with Examples - GeeksforGeeks
13 Feb, 2019

The toString() method in the Calendar class is used to get the string representation of the Calendar object. This method is meant for debugging and should not be relied on as an operation.

Syntax:

public String toString()

Parameters: The method does not take any parameters.

Return Value: The method returns the String representation of the Calendar object. It can return an empty string but not null.

Below programs illustrate the working of the toString() method of the Calendar class:

Example 1:

// Java code to illustrate the toString() method
import java.util.*;

public class CalendarClassDemo {
    public static void main(String args[])
    {
        // Creating a calendar object
        Calendar calndr1 = Calendar.getInstance();

        // Returning the string representation
        System.out.println("The string form: "
                           + calndr1.getTime().toString());
    }
}

Output:
The string form: Wed Feb 13 09:46:36 UTC 2019

Example 2:

// Java code to illustrate the toString() method
import java.util.*;

public class CalendarClassDemo {
    public static void main(String args[])
    {
        // Creating a calendar object
        Calendar calndr1 = new GregorianCalendar(2018, 12, 2);

        // Returning the string representation
        System.out.println("The string form: "
                           + calndr1.getTime().toString());
    }
}

Output:
The string form: Wed Jan 02 00:00:00 UTC 2019

Reference: https://docs.oracle.com/javase/7/docs/api/java/util/Calendar.html#toString()
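Note that both examples above format the Date returned by getTime(); calling toString() directly on the Calendar object instead yields the field-by-field debug dump described at the top. A small sketch of the difference (the class name is ours, and the exact dump varies by JVM, locale and time zone):

```java
import java.util.Calendar;
import java.util.GregorianCalendar;

public class CalendarDebugString {
    public static void main(String[] args) {
        // Calendar.DECEMBER (11) avoids the silent month rollover that
        // new GregorianCalendar(2018, 12, 2) performs in Example 2.
        Calendar calndr = new GregorianCalendar(2018, Calendar.DECEMBER, 2);

        // Calendar's own toString() dumps internal fields, e.g.
        // "java.util.GregorianCalendar[time=?,...,YEAR=2018,MONTH=11,...]"
        String debug = calndr.toString();
        System.out.println(debug);
    }
}
```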
[ { "code": null, "e": 23868, "s": 23840, "text": "\n13 Feb, 2019" }, { "code": null, "e": 24065, "s": 23868, "text": "The toString() method in Calendar class is used to get the string representation of the Calendar object. This method in Calendar Class is just for debug process and not to be used as an operation." }, { "code": null, "e": 24073, "s": 24065, "text": "Syntax:" }, { "code": null, "e": 24098, "s": 24073, "text": "public String toString()" }, { "code": null, "e": 24151, "s": 24098, "text": "Parameters: The method does not take any parameters." }, { "code": null, "e": 24278, "s": 24151, "text": "Return Value: The method returns the String representation of the Calendar object. It can return an empty object but not null." }, { "code": null, "e": 24356, "s": 24278, "text": "Below programs illustrate the working of toString() Method of Calendar class:" }, { "code": null, "e": 24367, "s": 24356, "text": "Example 1:" }, { "code": "// Java Code to illustrate toString() Method import java.util.*; public class CalendarClassDemo { public static void main(String args[]) { // Creating a calendar object Calendar calndr1 = Calendar.getInstance(); // Returning the string representation System.out.println(\"The string form: \" + calndr1.getTime() .toString()); }}", "e": 24804, "s": 24367, "text": null }, { "code": null, "e": 24851, "s": 24804, "text": "The string form: Wed Feb 13 09:46:36 UTC 2019\n" }, { "code": null, "e": 24862, "s": 24851, "text": "Example 2:" }, { "code": "// Java Code to illustrate toString() Method import java.util.*; public class CalendarClassDemo { public static void main(String args[]) { // Creating a calendar object Calendar calndr1 = new GregorianCalendar(2018, 12, 2); // Returning the string representation System.out.println(\"The string form: \" + calndr1.getTime() .toString()); }}", "e": 25311, "s": 24862, "text": null }, { "code": null, "e": 25358, "s": 25311, "text": "The string form: Wed Jan 02 00:00:00 UTC 2019\n" }, { "code": null, 
"e": 25446, "s": 25358, "text": "Reference: https://docs.oracle.com/javase/7/docs/api/java/util/Calendar.html#toString()" }, { "code": null, "e": 25466, "s": 25446, "text": "Java - util package" }, { "code": null, "e": 25480, "s": 25466, "text": "Java-Calendar" }, { "code": null, "e": 25495, "s": 25480, "text": "Java-Functions" }, { "code": null, "e": 25500, "s": 25495, "text": "Java" }, { "code": null, "e": 25505, "s": 25500, "text": "Java" }, { "code": null, "e": 25603, "s": 25505, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 25612, "s": 25603, "text": "Comments" }, { "code": null, "e": 25625, "s": 25612, "text": "Old Comments" }, { "code": null, "e": 25646, "s": 25625, "text": "Constructors in Java" }, { "code": null, "e": 25661, "s": 25646, "text": "Stream In Java" }, { "code": null, "e": 25680, "s": 25661, "text": "Exceptions in Java" }, { "code": null, "e": 25710, "s": 25680, "text": "Functional Interfaces in Java" }, { "code": null, "e": 25756, "s": 25710, "text": "Different ways of Reading a text file in Java" }, { "code": null, "e": 25782, "s": 25756, "text": "Java Programming Examples" }, { "code": null, "e": 25818, "s": 25782, "text": "Internal Working of HashMap in Java" }, { "code": null, "e": 25858, "s": 25818, "text": "Checked vs Unchecked Exceptions in Java" }, { "code": null, "e": 25874, "s": 25858, "text": "Strings in Java" } ]
Drools - Rule Syntax
As you have seen, the .drl (rule) file has its own syntax; let us cover some part of the rule syntax in this chapter.

A rule can contain many conditions and patterns such as −

Account (balance == 200)
Customer (name == “Vivek”)

The above conditions check whether the Account balance is 200 or the Customer name is “Vivek”.

A variable name in Drools starts with a dollar ($) symbol.

$account − Account( )

$account is the variable for the Account() class. Drools can work with all the native Java types and even Enum.

The special characters # or // can be used to mark single-line comments. For multi-line comments, use the following format:

/*
   Another line
   .........
   .........
*/

Global variables are variables assigned to a session. They can be used for various reasons as follows −

For input parameters (for example, constant values that can be customized from session to session).
For output parameters (for example, reporting — a rule could write some message to a global report variable).
Entry points for services such as logging, which can be used within rules.

Functions are a convenience feature. They can be used in conditions and consequences. Functions represent an alternative to the utility/helper classes. For example,

function double calculateSquare (double value) {
   return value * value;
}

A dialect specifies the syntax used in any code expression that is in a condition or in a consequence. It includes return values, evals, inline evals, predicates, salience expressions, consequences, and so on. The default value is Java. Drools currently supports one more dialect called MVEL. The default dialect can be specified at the package level as follows −

package org.mycompany.somePackage
dialect "mvel"

MVEL is an expression language for Java-based applications. It supports field and method/getter access. It is based on Java syntax.

Salience is a very important feature of rule syntax. Salience is used by the conflict resolution strategy to decide which rule to fire first. By default, it is the main criterion.

We can use salience to define the order of firing rules. Salience has one attribute, which takes any expression that returns a number of type int (positive as well as negative numbers are valid). The higher the value, the more likely a rule is to be picked up by the conflict resolution strategy to fire.

salience ($account.balance * 5)

The default salience value is 0. We should keep this in mind when assigning salience values to some rules only.

There are a lot of other features/parameters in the rule syntax, but we have covered only the important ones here.

Rule consequence keywords are the keywords used in the “then” part of the rule.

Modify − The attributes of the fact can be modified in the then part of the rule.
Insert − Based on some condition, if true, one can insert a new fact into the current session of the rule engine.
Retract − If a particular condition is true in a rule and you do not want to act on that fact any further, you can retract the particular fact from the rule engine.

Note − It is considered a very bad practice to have conditional logic (if statements) within a rule consequence. Most of the time, a new rule should be created instead.
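The salience attribute and the modify consequence keyword described above can be illustrated with a short rule. This is only a sketch: the Account fact class and its setOverdrawn setter are hypothetical names, not part of Drools itself; only the rule, salience, when/then, and modify constructs are Drools syntax.

```drl
rule "Flag overdrawn account"
salience 10
when
   // bind the matched fact to the $account variable
   $account : Account( balance < 0 )
then
   // modify updates the fact and notifies the engine of the change
   modify( $account ) { setOverdrawn( true ) }
end
```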
[ { "code": null, "e": 1908, "s": 1797, "text": "As you saw the .drl (rule file) has its own syntax, let us cover some part of the Rule syntax in this chapter." }, { "code": null, "e": 1966, "s": 1908, "text": "A rule can contain many conditions and patterns such as −" }, { "code": null, "e": 1991, "s": 1966, "text": "Account (balance == 200)" }, { "code": null, "e": 2018, "s": 1991, "text": "Customer (name == “Vivek”)" }, { "code": null, "e": 2108, "s": 2018, "text": "The above conditions check if the Account balance is 200 or the Customer name is “Vivek”." }, { "code": null, "e": 2166, "s": 2108, "text": "A variable name in Drools starts with a Dollar($) symbol." }, { "code": null, "e": 2188, "s": 2166, "text": "$account − Account( )" }, { "code": null, "e": 2233, "s": 2188, "text": "$account is the variable for Account() class" }, { "code": null, "e": 2295, "s": 2233, "text": "Drools can work with all the native Java types and even Enum." }, { "code": null, "e": 2370, "s": 2295, "text": "The special characters, # or //, can be used to mark single-line comments." }, { "code": null, "e": 2421, "s": 2370, "text": "For multi-line comments, use the following format:" }, { "code": null, "e": 2469, "s": 2421, "text": "/*\n Another line\n .........\n .........\n*/" }, { "code": null, "e": 2573, "s": 2469, "text": "Global variables are variables assigned to a session. They can be used for various reasons as follows −" }, { "code": null, "e": 2673, "s": 2573, "text": "For input parameters (for example, constant values that can be customized from session to session)." }, { "code": null, "e": 2773, "s": 2673, "text": "For input parameters (for example, constant values that can be customized from session to session)." }, { "code": null, "e": 2881, "s": 2773, "text": "For output parameters (for example, reporting—a rule could write some message to a global report variable)." 
}, { "code": null, "e": 2989, "s": 2881, "text": "For output parameters (for example, reporting—a rule could write some message to a global report variable)." }, { "code": null, "e": 3064, "s": 2989, "text": "Entry points for services such as logging, which can be used within rules." }, { "code": null, "e": 3139, "s": 3064, "text": "Entry points for services such as logging, which can be used within rules." }, { "code": null, "e": 3304, "s": 3139, "text": "Functions are a convenience feature. They can be used in conditions and consequences. Functions represent an alternative to the utility/helper classes. For example," }, { "code": null, "e": 3380, "s": 3304, "text": "function double calculateSquare (double value) {\n return value * value;\n}" }, { "code": null, "e": 3744, "s": 3380, "text": "A dialect specifies the syntax used in any code expression that is in a condition or in a consequence. It includes return values, evals, inline evals, predicates, salience expressions, consequences, and so on. The default value is Java. Drools currently supports one more dialect called MVEL. The default dialect can be specified at the package level as follows −" }, { "code": null, "e": 3794, "s": 3744, "text": "package org.mycompany.somePackage\ndialect \"mvel\"\n" }, { "code": null, "e": 3926, "s": 3794, "text": "MVEL is an expression language for Java-based applications. It supports field and method/getter access. It is based on Java syntax." }, { "code": null, "e": 4106, "s": 3926, "text": "Salience is a very important feature of Rule Syntax. Salience is used by the conflict resolution strategy to decide which rule to fire first. By default, it is the main criterion." }, { "code": null, "e": 4410, "s": 4106, "text": "We can use salience to define the order of firing rules. Salience has one attribute, which takes any expression that returns a number of type int (positive as well as negative numbers are valid). 
The higher the value, the more likely a rule will be picked up by the conflict resolution strategy to fire." }, { "code": null, "e": 4443, "s": 4410, "text": "salience ($account.balance * 5)\n" }, { "code": null, "e": 4555, "s": 4443, "text": "The default salience value is 0. We should keep this in mind when assigning salience values to some rules only." }, { "code": null, "e": 4670, "s": 4555, "text": "There are a lot of other features/parameters in the Rule Syntax, but we have covered only the important ones here." }, { "code": null, "e": 4750, "s": 4670, "text": "Rule Consequence Keywords are the keywords used in the “then” part of the rule." }, { "code": null, "e": 4832, "s": 4750, "text": "Modify − The attributes of the fact can be modified in the then part of the Rule." }, { "code": null, "e": 4914, "s": 4832, "text": "Modify − The attributes of the fact can be modified in the then part of the Rule." }, { "code": null, "e": 5028, "s": 4914, "text": "Insert − Based on some condition, if true, one can insert a new fact into the current session of the Rule Engine." }, { "code": null, "e": 5142, "s": 5028, "text": "Insert − Based on some condition, if true, one can insert a new fact into the current session of the Rule Engine." }, { "code": null, "e": 5308, "s": 5142, "text": "Retract − If a particular condition is true in a Rule and you don’t want to act anything else on that fact, you can retract the particular fact from the Rule Engine." }, { "code": null, "e": 5474, "s": 5308, "text": "Retract − If a particular condition is true in a Rule and you don’t want to act anything else on that fact, you can retract the particular fact from the Rule Engine." }, { "code": null, "e": 5638, "s": 5474, "text": "Note − It is considered a very bad practice to have a conditional logic (if statements) within a rule consequence. Most of the times, a new rule should be created." 
}, { "code": null, "e": 5645, "s": 5638, "text": " Print" }, { "code": null, "e": 5656, "s": 5645, "text": " Add Notes" } ]
Java interface Keyword
An interface is an abstract "class" that is used to group related methods with "empty" bodies:

To access the interface methods, the interface must be "implemented" (kinda like inherited) by another class with the implements keyword (instead of extends). The body of the interface method is provided by the "implement" class:

// interface
interface Animal {
  public void animalSound(); // interface method (does not have a body)
  public void sleep(); // interface method (does not have a body)
}

// Pig "implements" the Animal interface
class Pig implements Animal {
  public void animalSound() {
    // The body of animalSound() is provided here
    System.out.println("The pig says: wee wee");
  }
  public void sleep() {
    // The body of sleep() is provided here
    System.out.println("Zzz");
  }
}

class MyMainClass {
  public static void main(String[] args) {
    Pig myPig = new Pig();  // Create a Pig object
    myPig.animalSound();
    myPig.sleep();
  }
}

The interface keyword is used to declare a special type of class that only contains abstract methods.

It cannot be used to create objects (in the example above, it is not possible to create an "Animal" object in MyMainClass)
Interface methods do not have a body - the body is provided by the "implement" class
On implementation of an interface, you must override all of its methods
Interface methods are by default abstract and public
Interface attributes are by default public, static and final
An interface cannot contain a constructor (as it cannot be used to create objects)

To achieve security - hide certain details and only show the important details of an object (interface).
Java does not support "multiple inheritance" (a class can only inherit from one superclass). However, it can be achieved with interfaces, because the class can implement multiple interfaces.

To implement multiple interfaces, separate them with a comma:

interface FirstInterface {
  public void myMethod(); // interface method
}

interface SecondInterface {
  public void myOtherMethod(); // interface method
}

// DemoClass "implements" FirstInterface and SecondInterface
class DemoClass implements FirstInterface, SecondInterface {
  public void myMethod() {
    System.out.println("Some text..");
  }
  public void myOtherMethod() {
    System.out.println("Some other text...");
  }
}

class MyMainClass {
  public static void main(String[] args) {
    DemoClass myObj = new DemoClass();
    myObj.myMethod();
    myObj.myOtherMethod();
  }
}

Read more about interfaces in our Java Interface Tutorial.
[ { "code": null, "e": 18, "s": 0, "text": "\n❮ Java Keywords\n" }, { "code": null, "e": 113, "s": 18, "text": "An interface is an abstract \"class\" that is used to group related methods with \"empty\" bodies:" }, { "code": null, "e": 346, "s": 113, "text": "To access the interface methods, the interface must be \"implemented\" \n(kinda like inherited) by another class with the implements \nkeyword (instead of extends). The body of the \ninterface method is provided by the \"implement\" class:" }, { "code": null, "e": 994, "s": 346, "text": "// interface\ninterface Animal {\n public void animalSound(); // interface method (does not have a body)\n public void sleep(); // interface method (does not have a body)\n}\n\n// Pig \"implements\" the Animal interface\nclass Pig implements Animal {\n public void animalSound() {\n // The body of animalSound() is provided here\n System.out.println(\"The pig says: wee wee\");\n }\n public void sleep() {\n // The body of sleep() is provided here\n System.out.println(\"Zzz\");\n }\n}\n\nclass MyMainClass {\n public static void main(String[] args) {\n Pig myPig = new Pig(); // Create a Pig object\n myPig.animalSound();\n myPig.sleep();\n }\n}\n" }, { "code": null, "e": 1014, "s": 994, "text": "\nTry it Yourself »\n" }, { "code": null, "e": 1116, "s": 1014, "text": "The interface keyword is used to declare a special type of class that only contains abstract methods." }, { "code": null, "e": 1349, "s": 1116, "text": "To access the interface methods, the interface must be \"implemented\" \n(kinda like inherited) by another class with the implements \nkeyword (instead of extends). The body of the \ninterface method is provided by the \"implement\" class." 
}, { "code": null, "e": 1477, "s": 1349, "text": "It cannot be used to create objects (in the example above, \nit is not possible to create an \"Animal\" object in the MyMainClass)" }, { "code": null, "e": 1565, "s": 1477, "text": "Interface methods does not have a body - the \nbody is provided by the \"implement\" class" }, { "code": null, "e": 1637, "s": 1565, "text": "On implementation of an interface, you must override all of its methods" }, { "code": null, "e": 1693, "s": 1637, "text": "Interface methods are by default abstract and \n public" }, { "code": null, "e": 1757, "s": 1693, "text": "Interface attributes are by default public, \n static and final" }, { "code": null, "e": 1840, "s": 1757, "text": "An interface cannot contain a constructor (as it cannot be used to create objects)" }, { "code": null, "e": 1946, "s": 1840, "text": "To achieve security - hide certain details and only show the important \ndetails of an object (interface)." }, { "code": null, "e": 2230, "s": 1946, "text": "Java does not support \"multiple inheritance\" (a class can only inherit from one superclass). However, it can be achieved \n with interfaces, because the class can implement multiple interfaces.\n Note: To implement multiple interfaces, separate them with a comma (see example below)." 
}, { "code": null, "e": 2292, "s": 2230, "text": "To implement multiple interfaces, separate them with a comma:" }, { "code": null, "e": 2885, "s": 2292, "text": "interface FirstInterface {\n public void myMethod(); // interface method\n}\n\ninterface SecondInterface {\n public void myOtherMethod(); // interface method\n}\n\n// DemoClass \"implements\" FirstInterface and SecondInterface\nclass DemoClass implements FirstInterface, SecondInterface {\n public void myMethod() {\n System.out.println(\"Some text..\");\n }\n public void myOtherMethod() {\n System.out.println(\"Some other text...\");\n }\n}\n\nclass MyMainClass {\n public static void main(String[] args) {\n DemoClass myObj = new DemoClass();\n myObj.myMethod();\n myObj.myOtherMethod();\n }\n}\n" }, { "code": null, "e": 2905, "s": 2885, "text": "\nTry it Yourself »\n" }, { "code": null, "e": 2964, "s": 2905, "text": "Read more about interfaces in our Java Interface Tutorial." }, { "code": null, "e": 2982, "s": 2964, "text": "\n❮ Java Keywords\n" }, { "code": null, "e": 3015, "s": 2982, "text": "We just launchedW3Schools videos" }, { "code": null, "e": 3057, "s": 3015, "text": "Get certifiedby completinga course today!" }, { "code": null, "e": 3164, "s": 3057, "text": "If you want to report an error, or if you want to make a suggestion, do not hesitate to send us an e-mail:" }, { "code": null, "e": 3183, "s": 3164, "text": "help@w3schools.com" } ]
How do we close resources automatically in Java?
You can close resources automatically using the try-with-resources statement in JDBC.

try (Declaration of resource) {
   body.....
} catch (SQLException e) {
   e.printStackTrace();
}

It is a try statement with one or more resources declared at try, where a resource is an object which should be closed once it is no longer required.

You can declare multiple resources in this statement, and all of them will be closed at the end of the statement automatically.

The objects/resources we declare in this should implement the java.lang.AutoCloseable or java.io.Closeable interface.

In JDBC we can use java.sql.CallableStatement, Connection, PreparedStatement, Statement, ResultSet, and RowSet in a try-with-resources statement.

Let us create a table with the name MyPlayers in a MySQL database using a CREATE statement as shown below −

CREATE TABLE MyPlayers(
   ID INT,
   First_Name VARCHAR(255),
   Last_Name VARCHAR(255),
   Date_Of_Birth date,
   Place_Of_Birth VARCHAR(255),
   Country VARCHAR(255),
   PRIMARY KEY (ID)
);

Now, we will insert 7 records in the MyPlayers table using INSERT statements −

insert into MyPlayers values(1, 'Shikhar', 'Dhawan', DATE('1981-12-05'), 'Delhi', 'India');
insert into MyPlayers values(2, 'Jonathan', 'Trott', DATE('1981-04-22'), 'CapeTown', 'SouthAfrica');
insert into MyPlayers values(3, 'Kumara', 'Sangakkara', DATE('1977-10-27'), 'Matale', 'Srilanka');
insert into MyPlayers values(4, 'Virat', 'Kohli', DATE('1988-11-05'), 'Delhi', 'India');
insert into MyPlayers values(5, 'Rohit', 'Sharma', DATE('1987-04-30'), 'Nagpur', 'India');
insert into MyPlayers values(6, 'Ravindra', 'Jadeja', DATE('1988-12-06'), 'Nagpur', 'India');
insert into MyPlayers values(7, 'James', 'Anderson', DATE('1982-06-30'), 'Burnley', 'England');

The following JDBC program demonstrates the usage of the try-with-resources statement in JDBC −

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class TryWithResources_Example {
   public static void main(String args[]) {
      String mysqlUrl = "jdbc:mysql://localhost/mydatabase";
      // Getting the connection; both resources are closed automatically
      try (Connection con = DriverManager.getConnection(mysqlUrl, "root", "password");
           Statement stmt = con.createStatement()) {
         System.out.println("Connection established......");
         try (ResultSet rs = stmt.executeQuery("select * from MyPlayers")) {
            // Retrieving the data
            while (rs.next()) {
               System.out.print(rs.getInt("ID") + ", ");
               System.out.print(rs.getString("First_Name") + ", ");
               System.out.print(rs.getString("Last_Name") + ", ");
               System.out.print(rs.getDate("Date_Of_Birth") + ", ");
               System.out.print(rs.getString("Place_Of_Birth") + ", ");
               System.out.print(rs.getString("Country"));
               System.out.println();
            }
         } catch (SQLException e) {
            e.printStackTrace();
         }
      } catch (SQLException e) {
         e.printStackTrace();
      }
   }
}

Output (for the seven rows inserted above):

Connection established......
1, Shikhar, Dhawan, 1981-12-05, Delhi, India
2, Jonathan, Trott, 1981-04-22, CapeTown, SouthAfrica
3, Kumara, Sangakkara, 1977-10-27, Matale, Srilanka
4, Virat, Kohli, 1988-11-05, Delhi, India
5, Rohit, Sharma, 1987-04-30, Nagpur, India
6, Ravindra, Jadeja, 1988-12-06, Nagpur, India
7, James, Anderson, 1982-06-30, Burnley, England
[ { "code": null, "e": 1138, "s": 1062, "text": "You can close resources automatically using the try-with-resources in JDBC." }, { "code": null, "e": 1234, "s": 1138, "text": "try(Declaration of resource){\n body.....\n} catch (SQLException e) {\n e.printStackTrace();\n}" }, { "code": null, "e": 1380, "s": 1234, "text": "It is a try statement with one or more resources declared at try. where resource is an object which should be closed once it is no more required." }, { "code": null, "e": 1495, "s": 1380, "text": "You can declare multiple resources in this and all those will be closed at the end of the statement automatically." }, { "code": null, "e": 1656, "s": 1495, "text": "The objects/resources we declare in this should implement java.lang.AutoCloseable or java.io.Closeable, interfaces or, extend the java.lang.AutoCloseable class." }, { "code": null, "e": 1800, "s": 1656, "text": "In JDBC we can use java.sql.CallableStatement, Connection, PreparedStatement, Statement, ResultSet, and RowSet in try-with-resources statement." 
}, { "code": null, "e": 1900, "s": 1800, "text": "Let us create a table with name MyPlayers in MySQL database using CREATE statement as shown below −" }, { "code": null, "e": 2093, "s": 1900, "text": "CREATE TABLE MyPlayers(\n ID INT,\n First_Name VARCHAR(255),\n Last_Name VARCHAR(255),\n Date_Of_Birth date,\n Place_Of_Birth VARCHAR(255),\n Country VARCHAR(255),\n PRIMARY KEY (ID)\n);" }, { "code": null, "e": 2168, "s": 2093, "text": "Now, we will insert 7 records in MyPlayers table using INSERT statements −" }, { "code": null, "e": 2830, "s": 2168, "text": "insert into MyPlayers values(1, 'Shikhar', 'Dhawan', DATE('1981-12-05'), 'Delhi', 'India');\ninsert into MyPlayers values(2, 'Jonathan', 'Trott', DATE('1981-04-22'), 'CapeTown', 'SouthAfrica');\ninsert into MyPlayers values(3, 'Kumara', 'Sangakkara', DATE('1977-10-27'), 'Matale', 'Srilanka');\ninsert into MyPlayers values(4, 'Virat', 'Kohli', DATE('1988-11-05'), 'Delhi', 'India');\ninsert into MyPlayers values(5, 'Rohit', 'Sharma', DATE('1987-04-30'), 'Nagpur', 'India');\ninsert into MyPlayers values(6, 'Ravindra', 'Jadeja', DATE('1988-12-06'), 'Nagpur', 'India');\ninsert into MyPlayers values(7, 'James', 'Anderson', DATE('1982-06-30'), 'Burnley', 'England');" }, { "code": null, "e": 2918, "s": 2830, "text": "Following JDBC program demonstrates how usage of try-with-resources statement in JDBC −" }, { "code": null, "e": 4203, "s": 2918, "text": "import java.sql.Connection;\nimport java.sql.DriverManager;\nimport java.sql.ResultSet;\nimport java.sql.SQLException;\nimport java.sql.Statement;\npublic class TryWithResources_Example {\n public static void main(String args[]) {\n //Getting the connection\n String mysqlUrl = \"jdbc:mysql://localhost/mydatabase\";\n System.out.println(\"Connection established......\");\n //Registering the Driver\n try(Connection con = DriverManager.getConnection(mysqlUrl, \"root\", \"password\");\n Statement stmt = con.createStatement(); ) {\n try(ResultSet rs = 
stmt.executeQuery(\"select * from MyPlayers\");) {\n //Retrieving the data\n while(rs.next()) {\n System.out.print(rs.getInt(\"ID\")+\", \");\n System.out.print(rs.getString(\"First_Name\")+\", \");\n System.out.print(rs.getString(\"Last_Name\")+\", \");\n System.out.print(rs.getDate(\"Date_Of_Birth\")+\", \");\n System.out.print(rs.getString(\"Place_Of_Birth\")+\", \");\n System.out.print(rs.getString(\"Country\"));\n System.out.println();\n }\n } catch (SQLException e) {\n e.printStackTrace();\n }\n } catch (SQLException e) {\n e.printStackTrace();\n }\n }\n}" }, { "code": null, "e": 4611, "s": 4203, "text": "Connection established......\n1, Shikhar, Dhawan, 1981-12-05, Delhi, India\n2, Jonathan, Trott, 1981-04-22, CapeTown, SouthAfrica\n3, Kumara, Sangakkara, 1977-10-27, Matale, Srilanka\n4, Virat, Kohli, 1988-11-05, Mumbai, India\n5, Rohit, Sharma, 1987-04-30, Nagpur, India\n6, Ravindra, Jadeja, 1988-12-06, Nagpur, India\n7, James, Anderson, 1982-06-30, Burnely, England\n8, Ryan, McLaren, 1983-02-09, Kumberly, null" } ]
How to convert a List to a Map in Kotlin?
In this article, we will see how we can convert a List to a Map using various options provided by the Kotlin library.

The most standard way of converting a list into a map is by using the associate() function. This function is called on a list of items and returns a map containing the key-value pairs produced by the given transform. In the following example, we will see how it works.

data class mySubjectList(var name: String, var priority: String)

fun main() {
   val mySubjectList: List<mySubjectList> = listOf(
      mySubjectList("Java", "1"),
      mySubjectList("Kotlin", "2"),
      mySubjectList("C", "3")
   )

   // Creating a map and adding my own list of values in it.
   val myMap: Map<String, String> = mySubjectList.associate {
      Pair(it.priority, it.name)
   }

   println(myMap)
}

Once we run the above piece of code, it will generate the following output, which is a map in key-value format.

{1=Java, 2=Kotlin, 3=C}

associateBy() is another function that can be used in order to transform a list into a map. In the following example, we will see how we can implement the same.

data class mySubjectList(var name: String, var priority: String)

fun main() {
   val mySubjectList: List<mySubjectList> = listOf(
      mySubjectList("Java", "1"),
      mySubjectList("Kotlin", "2"),
      mySubjectList("C", "3")
   )

   // Creating a map and adding my own list of the values in it
   val myMap: Map<String, String> = mySubjectList.associateBy(
      {it.priority}, {it.name}
   )

   println(myMap)
}

It will generate the following output, which is a map in key-value format.

{1=Java, 2=Kotlin, 3=C}

The Kotlin library provides one more way to convert a list of items into a map: the toMap() function, which returns a new map containing all the key-value pairs from a given collection of pairs. Let's see how it works.

data class mySubjectList(var name: String, var priority: String)

fun main() {
   val mySubjectList: List<mySubjectList> = listOf(
      mySubjectList("Java", "1"),
      mySubjectList("Kotlin", "2"),
      mySubjectList("C", "3")
   )

   // Creating a map and adding my own list of the values in it.
   val myMap: Map<String, String> = mySubjectList.map{
      it.priority to it.name
   }.toMap()

   println(myMap)
}

Once we run the above piece of code, it will generate the following output, which is a map in key-value format.

{1=Java, 2=Kotlin, 3=C}
[ { "code": null, "e": 1180, "s": 1062, "text": "In this article, we will see how we can convert a List to a Map using various options provided by the Kotlin Library." }, { "code": null, "e": 1424, "s": 1180, "text": "The most standard way of converting a list into a map is by using the associate() function. This function takes a list of items as an argument and it returns a map containing key-value pairs. In the following example, we will see how it works." }, { "code": null, "e": 1843, "s": 1424, "text": "data class mySubjectList(var name: String, var priority: String)\n\nfun main() {\n val mySubjectList: List<mySubjectList> = listOf(\n mySubjectList(\"Java\", \"1\"),\n mySubjectList(\"Kotlin\", \"2\"),\n mySubjectList(\"C\", \"3\")\n )\n\n // Creating a map and adding my own list of values in it.\n val myMap: Map<String, String> = mySubjectList.associate {\n Pair(it.priority, it.name)\n }\n\n println(myMap)\n}" }, { "code": null, "e": 1978, "s": 1843, "text": "Once we run the above piece of code, it will generate the following output\nwhich is a map and we get the output in a key-value format." }, { "code": null, "e": 2002, "s": 1978, "text": "{1=Java, 2=Kotlin, 3=C}" }, { "code": null, "e": 2163, "s": 2002, "text": "AssociateBy() is another function that can be used in order to transform a list into a Map. In the following example, we will see how we can implement the same." 
}, { "code": null, "e": 2584, "s": 2163, "text": "data class mySubjectList(var name: String, var priority: String)\n\nfun main() {\n val mySubjectList: List<mySubjectList> = listOf(\n mySubjectList(\"Java\", \"1\"),\n mySubjectList(\"Kotlin\", \"2\"),\n mySubjectList(\"C\", \"3\")\n )\n\n // Creating a map and adding my own list of the values in it\n val myMap: Map<String, String> = mySubjectList.associateBy(\n {it.priority}, {it.name}\n )\n\n println(myMap)\n}" }, { "code": null, "e": 2682, "s": 2584, "text": "It will generate the following output which is a map and we get the output in a key-value format." }, { "code": null, "e": 2706, "s": 2682, "text": "{1=Java, 2=Kotlin, 3=C}" }, { "code": null, "e": 2946, "s": 2706, "text": "Kotlin library provides another function to convert a list of items into a Map. Kotlin Map class contains a function called toMap() which returns a new map containing all the key-value pairs from a given collection. Let's see how it works." }, { "code": null, "e": 3367, "s": 2946, "text": "data class mySubjectList(var name: String, var priority: String)\n\nfun main() {\n val mySubjectList: List<mySubjectList> = listOf(\n mySubjectList(\"Java\", \"1\"),\n mySubjectList(\"Kotlin\", \"2\"),\n mySubjectList(\"C\", \"3\")\n )\n\n // Creating a map and adding my own list of the values in it .\n val myMap: Map<String, String> = mySubjectList.map{\n it.priority to it.name\n }.toMap()\n\n println(myMap)\n}" }, { "code": null, "e": 3502, "s": 3367, "text": "Once we run the above piece of code, it will generate the following output which is a map and we get the output in a key-value format." }, { "code": null, "e": 3526, "s": 3502, "text": "{1=Java, 2=Kotlin, 3=C}" } ]
Shortest Path Visiting All Nodes in C++
Suppose we have one undirected, connected graph with N nodes; these nodes are labeled 0, 1, 2, ..., N-1. The length of graph will be N, and j != i appears in the list graph[i] exactly once, if and only if nodes i and j are connected. We have to find the length of the shortest path that visits every node. We can start and stop at any node, we can revisit nodes multiple times, and we can reuse edges.

So, if the input is like [[1],[0,2,4],[1,3,4],[2],[1,2]], then the output will be 4. Here one possible path is [0,1,4,2,3].

To solve this, we will follow these steps −

Define one queue q
n := size of graph
req := 2^n - 1
Define one map visited
for initialize i := 0, when i < n, update (increase i by 1), do −
   insert {0 OR (2^i), i} into q
if n is same as 1, then −
   return 0
for initialize lvl := 1, when not q is empty, update (increase lvl by 1), do −
   sz := size of q
   while sz is non-zero, decrease sz by 1 in each iteration, do −
      Define an array curr = front element of q
      delete element from q
      for initialize i := 0, when i < size of graph[curr[1]], update (increase i by 1), do −
         u := graph[curr[1], i]
         newMask := (curr[0] OR 2^u)
         if newMask is same as req, then −
            return lvl
         if newMask is already in visited[u], then −
            skip to the next iteration
         insert newMask into visited[u]
         insert {newMask, u} into q
return -1

Let us see the following implementation to get better understanding −

#include <bits/stdc++.h>
using namespace std;
class Solution {
public:
   int shortestPathLength(vector<vector<int>>& graph) {
      queue<vector<int>> q;
      int n = graph.size();
      int req = (1 << n) - 1;
      map<int, set<int>> visited;
      for (int i = 0; i < n; i++) {
         q.push({ 0 | (1 << i), i });
      }
      if (n == 1)
         return 0;
      for (int lvl = 1; !q.empty(); lvl++) {
         int sz = q.size();
         while (sz--) {
            vector<int> curr = q.front();
            q.pop();
            for (int i = 0; i < (int)graph[curr[1]].size(); i++) {
               int u = graph[curr[1]][i];
               int newMask = (curr[0] | (1 << u));
               if (newMask == req)
                  return lvl;
               if (visited[u].count(newMask))
                  continue;
               visited[u].insert(newMask);
               q.push({ newMask, u });
            }
         }
      }
      return -1;
   }
};
int main() {
   Solution ob;
   vector<vector<int>> v = {{1},{0,2,4},{1,3,4},{2},{1,2}};
   cout << (ob.shortestPathLength(v));
}

Input:
{{1},{0,2,4},{1,3,4},{2},{1,2}}

Output:
4
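The core trick in the solution above is encoding the set of visited nodes as a bitmask: node i contributes bit 2^i, and the search can stop once the mask equals req = 2^n - 1. That bookkeeping can be checked in isolation; here is a minimal sketch (written in Java for brevity, class name and values illustrative) tracing the example path [0,1,4,2,3]:

```java
public class MaskDemo {
    public static void main(String[] args) {
        int n = 5;                    // number of nodes in the example graph
        int req = (1 << n) - 1;       // "all visited" mask: binary 11111 = 31

        int mask = 1 << 0;            // BFS starts at node 0: only bit 0 set
        mask |= 1 << 1;               // visit node 1
        mask |= 1 << 4;               // visit node 4
        mask |= 1 << 2;               // visit node 2
        mask |= 1 << 3;               // visit node 3 -- the path [0,1,4,2,3]

        System.out.println(mask == req);  // prints "true"
    }
}
```

Because a (mask, node) pair fully describes a search state, storing the seen masks per node is what keeps the BFS from revisiting equivalent states.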
How to Make Substring of a TextView Clickable in Android? - GeeksforGeeks
23 Apr, 2021

In this article, we are going to implement a very important feature related to TextView. Here we are making part of a string, or a substring, act as a clickable link. This feature is important while writing a blog, because for certain points we may want to redirect the user to a link. So here we are going to learn how to implement that feature. A sample GIF is given below to get an idea about what we are going to do in this article. Note that we are going to implement this project using the Java language.

Step 1: Create a New Project

To create a new project in Android Studio please refer to How to Create/Start a New Project in Android Studio. Note that you should select Java as the programming language.

Step 2: Working with the activity_main.xml file

Navigate to the app > res > layout > activity_main.xml and add the below code to that file. Below is the code for the activity_main.xml file.

XML

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical"
    tools:context=".MainActivity">

    <TextView
        android:id="@+id/text_view"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="First Click THIS and then THIS "
        android:textColor="@color/black"
        android:textSize="20sp"
        app:layout_constraintBottom_toBottomOf="parent"
        app:layout_constraintLeft_toLeftOf="parent"
        app:layout_constraintRight_toRightOf="parent"
        app:layout_constraintTop_toTopOf="parent" />

</LinearLayout>

Step 3: Working with the MainActivity.java file

Go to the MainActivity.java file and refer to the following code. Below is the code for the MainActivity.java file. Comments are added inside the code to understand the code in more detail.
Java

import android.os.Bundle;
import android.text.SpannableString;
import android.text.Spanned;
import android.text.method.LinkMovementMethod;
import android.text.style.ClickableSpan;
import android.view.View;
import android.widget.TextView;
import android.widget.Toast;

import androidx.appcompat.app.AppCompatActivity;

public class MainActivity extends AppCompatActivity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        TextView textView = findViewById(R.id.text_view);
        String text = "First Click THIS and then THIS ";
        SpannableString ss = new SpannableString(text);

        // creating clickable span to be implemented as a link
        ClickableSpan clickableSpan1 = new ClickableSpan() {
            @Override
            public void onClick(View widget) {
                Toast.makeText(MainActivity.this, "First Clickable Text", Toast.LENGTH_SHORT).show();
            }
        };

        // creating clickable span to be implemented as a link
        ClickableSpan clickableSpan2 = new ClickableSpan() {
            @Override
            public void onClick(View widget) {
                Toast.makeText(MainActivity.this, "Second Clickable Text", Toast.LENGTH_SHORT).show();
            }
        };

        // setting the part of string to act as a link
        ss.setSpan(clickableSpan1, 12, 16, Spanned.SPAN_EXCLUSIVE_EXCLUSIVE);
        ss.setSpan(clickableSpan2, 26, 30, Spanned.SPAN_EXCLUSIVE_EXCLUSIVE);

        textView.setText(ss);
        textView.setMovementMethod(LinkMovementMethod.getInstance());
    }
}

Output:
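As a side note, the hard-coded offsets 12, 16, 26, and 30 passed to setSpan() above depend on the exact text. If the string ever changes, the boundaries can be computed instead of counted by hand; a small sketch using the same string:

```java
public class SpanIndexDemo {
    public static void main(String[] args) {
        String text = "First Click THIS and then THIS ";
        int first = text.indexOf("THIS");              // 12
        int second = text.indexOf("THIS", first + 1);  // 26
        System.out.println(first + " " + second);      // prints "12 26"
        // These values would replace the literals in MainActivity:
        // ss.setSpan(clickableSpan1, first, first + 4, Spanned.SPAN_EXCLUSIVE_EXCLUSIVE);
        // ss.setSpan(clickableSpan2, second, second + 4, Spanned.SPAN_EXCLUSIVE_EXCLUSIVE);
    }
}
```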
The embed utility is used to insert a video in a page by using an <iframe> element along with the .embed-responsive class and an aspect ratio class (for instance: embed-responsive-21by9).

You can use the aspect ratios such as 21by9, 16by9, 4by3, and 1by1 to embed a video in a page. The following example demonstrates this −

<html lang = "en">
   <head>
      <!-- Meta tags -->
      <meta charset = "utf-8">
      <meta name = "viewport" content = "width = device-width, initial-scale = 1, shrink-to-fit = no">

      <!-- Bootstrap CSS -->
      <link rel = "stylesheet"
         href = "https://maxcdn.bootstrapcdn.com/bootstrap/4.1.3/css/bootstrap.min.css">
      <script src = "https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
      <title>Bootstrap 4 Example</title>
   </head>

   <body>
      <div class = "container">
         <h2>Aspect Ratios</h2>
         <h4>21:9 aspect ratio</h4>
         <div class = "embed-responsive embed-responsive-21by9">
            <iframe class = "embed-responsive-item"
               src = "https://www.youtube.com/embed/I_G0Yb6R7m0" allowfullscreen></iframe>
         </div>
         <br>

         <h4>16:9 aspect ratio</h4>
         <div class = "embed-responsive embed-responsive-16by9">
            <iframe class = "embed-responsive-item"
               src = "https://www.youtube.com/embed/I_G0Yb6R7m0"></iframe>
         </div>
         <br>

         <h4>4:3 aspect ratio</h4>
         <div class = "embed-responsive embed-responsive-4by3">
            <iframe class = "embed-responsive-item"
               src = "https://www.youtube.com/embed/I_G0Yb6R7m0"></iframe>
         </div>
         <br>

         <h4>1:1 aspect ratio</h4>
         <div class = "embed-responsive embed-responsive-1by1">
            <iframe class = "embed-responsive-item"
               src = "https://www.youtube.com/embed/I_G0Yb6R7m0"></iframe>
         </div>
      </div>

      <!-- jQuery first, then Popper.js, then Bootstrap JS -->
      <script src = "https://code.jquery.com/jquery-3.2.1.slim.min.js"
         integrity = "sha384-KJ3o2DKtIkvYIK3UENzmM7KCkRr/rE9/Qpg6aAZGJwFDMVNA/GpGFF93hXpG5KkN"
         crossorigin = "anonymous">
      </script>

      <!-- Popper -->
      <script src = "https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.12.9/umd/popper.min.js"
         integrity = "sha384-ApNbgh9B+Y1QKtv3Rn7W3mgPxhU9K/ScQsAP7hUibX39j7fakFPskvXusvfa0b4Q"
         crossorigin = "anonymous">
      </script>

      <!-- Latest compiled and minified Bootstrap JavaScript -->
      <script src = "https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0/js/bootstrap.min.js"
         integrity = "sha384-JZR6Spejh4U02d8jOt6vLEHfe/JQGiRRSQQxSfFWpi1MquVdAyjUar5+76PVCmYl"
         crossorigin = "anonymous">
      </script>

   </body>
</html>

It will produce the following result −
Generate Random Numbers Using Middle Square Method in Java - GeeksforGeeks
20 Nov, 2020

This method was proposed by von Neumann. In this method, we have a seed; the seed is squared and its middle digits are fetched as the random number. Consider we have a seed having N digits: we square that number to get a 2N-digit number, and if the square does not have 2N digits, we add zeros before the number to make it 2N digits. A good algorithm is basically one which does not depend on the seed, and its period should be maximally long — it should almost touch every number in its range before it starts repeating itself. As a rule of thumb, remember that the longer the period, the more random the number.

Example: Consider the seed to be 14 and we want a two digit random number.

Number --> Square --> Mid-term
14 --> 0196 --> 19
19 --> 0361 --> 36
36 --> 1296 --> 29
29 --> 0841 --> 84
84 --> 7056 --> 05
05 --> 0025 --> 02
02 --> 0004 --> 00
00 --> 0000 --> 00

In the above example, we can notice that we get the random numbers 19, 36, 29, 84, 05, 02, 00, which seem to be random picks; in this way we get multiple random numbers until we encounter a self-repeating chain. We also get to know a disadvantage of this method: if we encounter a 0, then we get a chain of 0s from that point. Also, consider that we get the random number 50: the square will be 2500, and the middle term is 50 again, so we get stuck in this chain of 50. Sometimes we may encounter such chains quite often, which acts as a disadvantage, and because of these disadvantages this method is not practically used for generating random numbers.
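The two-digit chain worked out above can be reproduced with a few lines of code. This is only a sketch of the 2-digit case (class and method names are illustrative); the full, general implementation follows below.

```java
public class MiddleSquareDemo {
    // Middle two digits of the (zero-padded) 4-digit square of a 2-digit seed.
    static int nextTwoDigit(int seed) {
        int square = seed * seed;   // e.g. 14*14 = 196, conceptually "0196"
        return (square / 10) % 100; // drop the last digit, keep the middle two
    }

    public static void main(String[] args) {
        int seed = 14;
        for (int i = 0; i < 8; i++) {
            seed = nextTwoDigit(seed);
            System.out.printf("%02d ", seed);
        }
        // prints: 19 36 29 84 05 02 00 00
    }
}
```

Note how the chain collapses to 00 once a zero appears, matching the disadvantage described above; seeding with 50 would likewise loop on 50 forever (50² = 2500, middle digits 50).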
Implementation:

Java

// Generate Random Numbers Using Middle
// Square Method in Java
import java.util.Random;

public class Main {

    static int rangeArray[] = { 1, 10, 100, 1000, 10000, 100000, 1000000, 10000000, 100000000 };

    // function for generating a random number
    static long middleSquareNumber(long num, int digit)
    {
        long sqn = num * num, nextNum = 0;
        int trim = (digit / 2);
        sqn = sqn / rangeArray[trim];

        for (int i = 0; i < digit; i++) {
            nextNum += (sqn % (rangeArray[trim])) * (rangeArray[i]);
            sqn = sqn / 10;
        }
        return nextNum;
    }

    public static void main(String args[])
    {
        int numberOfDigit = 3;
        int start = rangeArray[numberOfDigit - 1], end = rangeArray[numberOfDigit];

        // create rand object
        Random rand = new Random();
        long nextNumber = rand.nextInt(end - start) + start;

        System.out.print("The random numbers for the Geeks are:\n" + nextNumber + ", ");

        // Generating 10 random numbers
        for (int i = 0; i < 9; i++) {
            nextNumber = middleSquareNumber(nextNumber, numberOfDigit);
            System.out.print(nextNumber + ", ");
        }
    }
}

The random numbers for the Geeks are:
325, 562, 584, 105, 102, 40, 160, 560, 360, 960,

Note: The above program shows how the Middle Square Number method works; you can run the program multiple times to see different random numbers generated every time. This method is not recommended as an ideal way of generating random numbers due to its disadvantages, but it can be used as a hashing algorithm and in some other applications too.
Power BI and Synapse, Part 3 — Serverless SQL: How much will it cost me? | by Nikola Ilic | Towards Data Science
The introduction of Azure Synapse Analytics in late 2019 created a whole new perspective on data treatment. Some core concepts, such as traditional data warehousing, came under more scrutiny, while various fresh approaches started to pop up once data nerds became aware of the new capabilities that Synapse brought to the table. Synapse didn't just make a strong impact on data ingestion, transformation, and storage options — it also offered a whole new set of possibilities for data serving and visualization!

Therefore, in this series of blog posts, I will try to explore how Power BI works in synergy with the new platform. What options do we, Power BI developers, have when working with Synapse? In which data analytics scenarios will Synapse play on the edge, helping you to achieve the (im)possible? When would you want to take advantage of the innovative solutions within Synapse, and when would you be better off sticking with more conventional approaches? What are the best practices for the Power BI and Synapse combo, and which parameters should you evaluate before making a final decision on which path to take?

Once we're done, I believe that you should have a better understanding of the "pros" and "cons" of each of the available options when it comes to integration between Power BI and Synapse.

Power BI & Synapse Part 1 — The Art of (im)possible!
Power BI & Synapse Part 2 — What Synapse brings to the table?

Don't get me wrong — the theory is nice, and you should definitely spend your time trying to absorb the basic architectural concepts of a new technology, tool, or feature. Because if you don't understand how something works under the hood, there is a great chance that you won't take maximum advantage of it. But putting this theoretical knowledge to a practical test is the most interesting part! At least for me :) ... It reminds me of the car production process: they build everything, you read the specs, and you are impressed!
Features, equipment... However, it's all irrelevant until they put the car through a crash test and get a proper reality check. Therefore, this article will be like a crash test for the Serverless SQL pool within Synapse Analytics — a lot of different scenarios, tests, demos, measures, etc.

I've already written about the Serverless SQL pool, and I firmly believe that it is the next big thing when it comes to dealing with large volumes of semi-structured or unstructured data. The greatest advantage of the Serverless SQL pool is that you can query the data directly from CSV, Parquet, or JSON files stored in your Azure Data Lake, without the need to transfer the data! Even more, you can write plain T-SQL to retrieve the data directly from the files! But let's see how this works in reality in various use cases, and, most importantly, how much each of the solutions will cost you!

There are still some things for which Microsoft won't charge you when using the Synapse Serverless SQL pool, such as:

Server-level metadata (logins, roles, and server-level credentials)
Databases you create in your endpoint. Those databases contain only metadata (users, roles, schemas, views, inline table-valued functions, stored procedures, external data sources, external file formats, and external tables)
DDL statements, except for the CREATE STATISTICS statement, because it processes data from storage based on the specified sample percentage
Metadata-only queries

Here is the scenario: I have two CSV files related to the NYC taxi dataset that I've already used in one of the previous demos. One contains data about all yellow cab rides from January 2019 (650 MB), while the other contains data from February 2019 (620 MB). I've created two separate views, one for each month's data. The idea is to check what happens when we query the data from Power BI under multiple different conditions.
Here is the T-SQL for creating a view over a single month:

DROP VIEW IF EXISTS taxi201902csv;
GO
CREATE VIEW taxi201902csv AS
SELECT VendorID
      ,CAST(tpep_pickup_datetime AS DATE) tpep_pickup_datetime
      ,CAST(tpep_dropoff_datetime AS DATE) tpep_dropoff_datetime
      ,passenger_count
      ,trip_distance
      ,RateCodeID
      ,store_and_fwd_flag
      ,PULocationID
      ,DOLocationID
      ,payment_type
      ,fare_amount
      ,extra
      ,mta_tax
      ,tip_amount
      ,tolls_amount
      ,improvement_surcharge
      ,total_amount
      ,congestion_surcharge
FROM OPENROWSET(
        BULK N'https://nikola.dfs.core.windows.net/nikola/Data/yellow_tripdata_2019-02.csv',
        FORMAT = 'CSV',
        PARSER_VERSION = '2.0',
        HEADER_ROW = TRUE
    )
    WITH (
        VendorID INT,
        tpep_pickup_datetime DATETIME2,
        tpep_dropoff_datetime DATETIME2,
        passenger_count INT,
        trip_distance DECIMAL(10,2),
        RateCodeID INT,
        store_and_fwd_flag VARCHAR(10),
        PULocationID INT,
        DOLocationID INT,
        payment_type INT,
        fare_amount DECIMAL(10,2),
        extra DECIMAL(10,2),
        mta_tax DECIMAL(10,2),
        tip_amount DECIMAL(10,2),
        tolls_amount DECIMAL(10,2),
        improvement_surcharge DECIMAL(10,2),
        total_amount DECIMAL(10,2),
        congestion_surcharge DECIMAL(10,2)
    ) AS [taxi201902csv]

I am using the WITH block to explicitly define data types, because if you don't, all your character columns will automatically be set to VARCHAR(8000) and, consequently, be more memory-expensive.

As you can notice, I've renamed the generic column names from the CSV files, so they now look more readable. I've also cast the DateTime columns to the Date type only, as I don't need the time portion for this demo. This way, we've reduced the cardinality and, consequently, the whole data model size. Let's check how many records each of these files contains: January contains ~7.6 million records, while February has ~7 million records.

Additionally, I've applied the same logic and built two views over exactly the same portion of data coming from Parquet files, so we can compare metrics between CSV and Parquet files.
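The cardinality reduction from casting the DATETIME2 columns down to DATE — the reason the data model compresses so well later on — can be sketched with a quick simulation. This is an illustration only: the 10,000 synthetic second-precision timestamps below stand in for the real taxi data, and all variable names are my own.

```python
import random
from datetime import datetime, timedelta

random.seed(42)
base = datetime(2019, 1, 1)

# Simulate second-precision pickup timestamps spread across January 2019
pickups = [base + timedelta(seconds=random.randrange(31 * 24 * 3600))
           for _ in range(10_000)]

# Full DATETIME2-like values: almost every row is a distinct value
distinct_datetimes = len(set(pickups))

# Cast down to DATE: the column collapses to at most 31 distinct values
distinct_dates = len({ts.date() for ts in pickups})

print(distinct_datetimes)  # close to 10,000 distinct values
print(distinct_dates)      # at most 31 — one per day of the month
```

Fewer distinct values means better dictionary compression in a columnar engine, which is exactly why the imported model ends up so small.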
Views used for the Parquet files were built slightly differently, using the FILENAME() and FILEPATH() functions to eliminate unnecessary partitions:

DROP VIEW IF EXISTS taxi201901parquet;
GO
CREATE VIEW taxi201901parquet AS
SELECT VendorID
      ,CAST(TpepPickupDatetime AS DATE) TpepPickupDatetime
      ,CAST(TpepDropoffDatetime AS DATE) TpepDropoffDatetime
      ,PassengerCount
      ,TripDistance
      ,PuLocationId
      ,DoLocationId
      ,StartLon
      ,StartLat
      ,EndLon
      ,EndLat
      ,RateCodeId
      ,StoreAndFwdFlag
      ,PaymentType
      ,FareAmount
      ,Extra
      ,MtaTax
      ,ImprovementSurcharge
      ,TipAmount
      ,TollsAmount
      ,TotalAmount
FROM OPENROWSET(
        BULK 'puYear=*/puMonth=*/*.snappy.parquet',
        DATA_SOURCE = 'YellowTaxi',
        FORMAT = 'PARQUET'
    ) nyc
WHERE nyc.filepath(1) = 2019
  AND nyc.filepath(2) IN (1)
  AND TpepPickupDatetime BETWEEN CAST('1/1/2019' AS datetime) AND CAST('1/31/2019' AS datetime)

As specified in the best practices for using the Serverless SQL pool in Synapse, we explicitly instructed our query to target only the year 2019 and only the month of January! This reduces the amount of data to scan and process. We didn't have to do this for our CSV files, as they were already partitioned per month and saved like that.

One important disclaimer: as usage of the Serverless SQL pool is charged per volume of data processed (it's currently priced starting at 5$/TB of processed data), I will not measure performance in terms of speed. I want to focus solely on the analysis of the data volume processed and the costs generated by examining different scenarios.

Before we proceed with testing, a little more theory... As I plan to compare data processing between CSV and Parquet files, I believe we should understand the key differences between these two types: In Parquet files, data is compressed in a more optimal way.
As you may recall from one of the screenshots above, a Parquet file consumes approximately 1/3 of the memory of a CSV file that contains the same portion of data. Parquet files support a columnar storage format — that being said, the columns within a Parquet file are physically separated, which means that you don't need to scan the whole file if you only need data from a few columns! By contrast, when you're querying a CSV file, every query will scan the whole file, even if you need data from one single column. For those coming from the traditional SQL world, you can think of CSV vs Parquet as row-store vs columnar databases.

Let's start with the most obvious and desirable scenario — using Import mode to ingest all the data into Power BI, and performing a data refresh to check how much it will cost us. One last thing before we get our hands dirty — in Synapse Analytics, there is still no feature to measure the cost of a specific query. You can check the volume of processed data on a daily, weekly, or monthly level. Additionally, you can set limits if you want, on each of those time-granularity levels — more on that in this article. This feature is also relatively new, so I'm glad to see that Synapse is steadily making progress toward providing full cost transparency.

Since we can't measure the exact cost of every single query, I will try to calculate the figures by querying a DMV within the master database, using the following T-SQL:

SELECT *
FROM sys.dm_external_data_processed
WHERE type = 'daily'

So, my starting point for today is around 29.7 GB, and I will calculate the difference each time I target the Serverless SQL pool to get the data. Ok, going back to the first scenario, I will import the data into Power BI for both months from the CSV files: The most fascinating thing is that I'm writing plain T-SQL, so my users don't even know that they are getting the data directly from CSV files!
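The row-store vs column-store distinction described above can be made concrete with a toy model. In the sketch below (all names and sizes are illustrative, not from the article's dataset), the same table is laid out once as rows (CSV-like) and once as columns (Parquet-like), and we count how many cells must be touched to answer a single-column query:

```python
# Toy comparison: row layout (CSV-like) vs columnar layout (Parquet-like)
rows = [(i, f"vendor_{i % 3}", i * 1.5, i % 7) for i in range(1_000)]

# Columnar layout: each column stored contiguously, addressable on its own
columns = {
    "trip_id":      [r[0] for r in rows],
    "vendor":       [r[1] for r in rows],
    "total_amount": [r[2] for r in rows],
    "passengers":   [r[3] for r in rows],
}

# Row store: summing one column still touches every cell of every row
cells_scanned_rows = sum(len(r) for r in rows)     # 4,000 cells

# Column store: only the one column we need is read
cells_scanned_cols = len(columns["total_amount"])  # 1,000 cells

print(cells_scanned_rows, cells_scanned_cols)
```

With four columns, the row layout touches 4x as many cells for the same query — which is exactly why the Parquet use cases later in the article get dramatically cheaper once the column list is trimmed.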
I'm using UNION ALL, as I'm sure that there are no identical records in my two views, and, in theory, it should run faster than UNION. I could also have created a separate view using this same T-SQL statement and then used that joint view in Power BI.

I will need a proper date dimension table for testing different scenarios, and I will create it using Power Query. This date table will be in Import mode in all scenarios, so it should not affect the amount of processed data from the Serverless SQL pool. Here is the M code for the date table:

let
    StartDate = #date(StartYear, 1, 1),
    EndDate = #date(EndYear, 12, 31),
    NumberOfDays = Duration.Days(EndDate - StartDate),
    Dates = List.Dates(StartDate, NumberOfDays + 1, #duration(1,0,0,0)),
    #"Converted to Table" = Table.FromList(Dates, Splitter.SplitByNothing(), null, null, ExtraValues.Error),
    #"Renamed Columns" = Table.RenameColumns(#"Converted to Table", {{"Column1", "FullDateAlternateKey"}}),
    #"Changed Type" = Table.TransformColumnTypes(#"Renamed Columns", {{"FullDateAlternateKey", type date}}),
    #"Inserted Year" = Table.AddColumn(#"Changed Type", "Year", each Date.Year([FullDateAlternateKey]), type number),
    #"Inserted Month" = Table.AddColumn(#"Inserted Year", "Month", each Date.Month([FullDateAlternateKey]), type number),
    #"Inserted Month Name" = Table.AddColumn(#"Inserted Month", "Month Name", each Date.MonthName([FullDateAlternateKey]), type text),
    #"Inserted Quarter" = Table.AddColumn(#"Inserted Month Name", "Quarter", each Date.QuarterOfYear([FullDateAlternateKey]), type number),
    #"Inserted Week of Year" = Table.AddColumn(#"Inserted Quarter", "Week of Year", each Date.WeekOfYear([FullDateAlternateKey]), type number),
    #"Inserted Week of Month" = Table.AddColumn(#"Inserted Week of Year", "Week of Month", each Date.WeekOfMonth([FullDateAlternateKey]), type number),
    #"Inserted Day" = Table.AddColumn(#"Inserted Week of Month", "Day", each Date.Day([FullDateAlternateKey]), type number),
    #"Inserted Day of Week" = Table.AddColumn(#"Inserted Day", "Day of Week", each Date.DayOfWeek([FullDateAlternateKey]), type number),
    #"Inserted Day of Year" = Table.AddColumn(#"Inserted Day of Week", "Day of Year", each Date.DayOfYear([FullDateAlternateKey]), type number),
    #"Inserted Day Name" = Table.AddColumn(#"Inserted Day of Year", "Day Name", each Date.DayOfWeekName([FullDateAlternateKey]), type text)
in
    #"Inserted Day Name"

It took a while to load the data into Power BI Desktop, so let's now check some key metrics.

My table has ~14.7 million rows.
My whole data model size is ~92 MB, as the data is optimally compressed within Power BI Desktop (we've reduced the cardinality of the DateTime columns).

Once I've created my table visual, displaying total records per date, my daily data-processed volume is ~33.3 GB. Let's refresh the data model to check how expensive it would be. So, Power BI Desktop will now go to the Serverless SQL pool, querying data from my two views — but don't forget that in the background there are two CSV files as the ultimate source of our data! After the refresh, my daily value had increased to ~36.9 GB, which means that this refresh cost me ~3.6 GB. In terms of money, that's around 0.018 $ (0.0036 TB x 5 USD). In this use case, it costs me money only when my Power BI model is being refreshed! Simply said, if I refresh my data model once per day, this report will cost me 54 cents per month.

Let's now check what happens if we use exactly the same query, but instead of importing the data into Power BI Desktop, we use the DirectQuery option. Let's first interact with the date slicer, so we can check how much this will cost us. My starting point for measuring is ~87.7 GB, and this is how my report looks:

Refreshing the whole query burned through ~2.8 GB, which is ~0.014$. Now, this is for one single visual on the page! Keep in mind that when you're using DirectQuery, each visual will generate a separate query to the underlying data source.
Let's check what happens when I add another visual on the page: now, this query cost me ~4 GB, which is 0.02$. As you can conclude, increasing the number of visuals on your report canvas will also increase the costs. One more important thing to keep in mind: these costs are per user! So, if you have 10 users running this same report in parallel, you should multiply the costs by 10, as a new query will be generated for each visual and for each user.

Now, I want to check what happens if I select a specific date range within my slicer, for example between January 1st and January 13th: the first thing I notice is that the query cost me exactly the same! The strange thing is, if I look at the SQL query generated to retrieve the data, I can see the engine was clever enough to apply a date filter in the WHERE clause:

/* Query 1 */
SELECT TOP (1000001) [semijoin1].[c1], SUM([a0]) AS [a0]
FROM (
    (SELECT [t1].[tpep_pickup_datetime] AS [c14], [t1].[total_amount] AS [a0]
     FROM ((SELECT * FROM dbo.taxi201901csv
            UNION ALL
            SELECT * FROM dbo.taxi201902csv)) AS [t1]) AS [basetable0]
    INNER JOIN (
              (SELECT 3 AS [c1], CAST('20190101 00:00:00' AS datetime) AS [c14])
    UNION ALL (SELECT 4 AS [c1], CAST('20190102 00:00:00' AS datetime) AS [c14])
    UNION ALL (SELECT 5 AS [c1], CAST('20190103 00:00:00' AS datetime) AS [c14])
    UNION ALL (SELECT 6 AS [c1], CAST('20190104 00:00:00' AS datetime) AS [c14])
    UNION ALL (SELECT 7 AS [c1], CAST('20190105 00:00:00' AS datetime) AS [c14])
    UNION ALL (SELECT 8 AS [c1], CAST('20190106 00:00:00' AS datetime) AS [c14])
    UNION ALL (SELECT 9 AS [c1], CAST('20190107 00:00:00' AS datetime) AS [c14])
    UNION ALL (SELECT 10 AS [c1], CAST('20190108 00:00:00' AS datetime) AS [c14])
    UNION ALL (SELECT 11 AS [c1], CAST('20190109 00:00:00' AS datetime) AS [c14])
    UNION ALL (SELECT 12 AS [c1], CAST('20190110 00:00:00' AS datetime) AS [c14])
    UNION ALL (SELECT 13 AS [c1], CAST('20190111 00:00:00' AS datetime) AS [c14])
    UNION ALL (SELECT 14 AS [c1], CAST('20190112 00:00:00' AS datetime) AS [c14])
    UNION ALL (SELECT 15 AS [c1], CAST('20190113 00:00:00' AS datetime) AS [c14])
    ) AS [semijoin1] ON (([semijoin1].[c14] = [basetable0].[c14]))
)
GROUP BY [semijoin1].[c1]

/* Query 2 */
SELECT SUM([t1].[total_amount]) AS [a0]
FROM ((SELECT * FROM dbo.taxi201901csv
       UNION ALL
       SELECT * FROM dbo.taxi201902csv)) AS [t1]
WHERE (([t1].[tpep_pickup_datetime] IN (
    CAST('20190112 00:00:00' AS datetime), CAST('20190113 00:00:00' AS datetime),
    CAST('20190101 00:00:00' AS datetime), CAST('20190102 00:00:00' AS datetime),
    CAST('20190103 00:00:00' AS datetime), CAST('20190104 00:00:00' AS datetime),
    CAST('20190105 00:00:00' AS datetime), CAST('20190106 00:00:00' AS datetime),
    CAST('20190107 00:00:00' AS datetime), CAST('20190108 00:00:00' AS datetime),
    CAST('20190109 00:00:00' AS datetime), CAST('20190110 00:00:00' AS datetime),
    CAST('20190111 00:00:00' AS datetime))))

However, it appears that the underlying view scans the whole chunk of data within the CSV file! So, there is no benefit at all in terms of savings if you use a date slicer to limit the volume of data, as the whole CSV file will be scanned in any case...

The next test will show us what happens if we create an aggregated table and store it in DirectQuery mode within Power BI. It's quite a simple aggregated table, consisting of the total amount and pickup time columns. My query hit the aggregated table, but it didn't change anything in terms of the total query cost, which was exactly the same as in the previous use case: ~0.02$!

After that, I want to check what happens if I import a previously aggregated table into Power BI. I believe that the calculations will be faster, but let's see how it affects the query costs. As expected, this was pretty quick; the aggregated table was hit, so we are paying only the price of the data refresh, as in our Use Case #1: 0.018$!
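The pricing arithmetic repeated throughout these measurements (processed GB to dollars at the quoted 5$/TB rate) is simple enough to wrap in a helper. The function below is my own convenience sketch, not part of the article's setup; it assumes decimal units (1 TB = 1,000 GB), which matches the "0.0036 TB x 5 USD" example above.

```python
PRICE_PER_TB_USD = 5.0  # Serverless SQL pool rate quoted in the article

def query_cost_usd(processed_gb):
    """Cost of a Serverless SQL pool query that processed `processed_gb` of data."""
    return processed_gb / 1_000 * PRICE_PER_TB_USD

# The figures measured in the tests above:
print(round(query_cost_usd(3.6), 3))  # 0.018 -> the Import-mode refresh
print(round(query_cost_usd(2.8), 3))  # 0.014 -> one DirectQuery visual
print(round(query_cost_usd(4.0), 3))  # 0.02  -> two DirectQuery visuals
```

Remember that in DirectQuery mode this cost is incurred per query, per visual, and per user, while in Import mode it is incurred only on refresh.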
One last thing I want to check is what happens if I know my analytic workloads and can prepare some of the most frequent queries in advance, using the Serverless SQL pool. Therefore, I will create a view that aggregates the data like in the previous case within Power BI Desktop, but this time within the Serverless SQL pool:

DROP VIEW IF EXISTS taxi201901_02_agg;
GO
CREATE VIEW taxi201901_02_agg AS
SELECT CAST(C2 AS DATE) AS tpep_pickup_datetime,
       SUM(CAST(C17 AS DECIMAL(10,2))) AS total_amount
FROM OPENROWSET(
        BULK N'https://nikola.dfs.core.windows.net/nikola/Data/yellow_tripdata_2019-01.csv',
        FORMAT = 'CSV',
        PARSER_VERSION = '2.0',
        HEADER_ROW = TRUE
    ) AS [taxi201901_02_agg]
GROUP BY CAST(C2 AS DATE)

Basically, we are aggregating the data on the source side, and that should obviously help. So, let's check the outcome: it returned the requested figures faster, but the amount of processed data was again the same! This query again cost me ~0.02$!

This brings me to a conclusion: no matter what you perform within the Serverless SQL pool on top of CSV files, they will be fully scanned at the lowest level of the data preparation process! However, and this is important, it's not just the scanned data that makes up the total of the processed data, but also the amount of data streamed to the client: in my example, the difference between the streamed amounts is not that big (275 MB when I included all columns vs 1 MB when targeting aggregated data), and that's why the final price wasn't noticeably different. I assume that when you're working with larger datasets (a few TBs), the cost difference would be far more obvious. So, keep in mind that pre-aggregating data within a Serverless SQL pool can reduce the amount of streamed data, which also means that your overall costs will be reduced! You can find all the details here.

Let's now evaluate whether anything changes if we use data from Parquet files instead of CSV. The first use case is importing the Parquet files.
As expected, since Parquet files are better compressed than CSV files, costs were nearly halved: ~0.01$! And finally, let's examine the figures if we use DirectQuery mode in Power BI to query the data directly from the Parquet files within the Serverless SQL pool in Synapse.

To my negative surprise, this query processed ~26 GB of data, which translates to ~0.13$! As that looked completely strange, I started to investigate and found out that the main culprit for the high cost was the Date dimension created using M! While debugging the SQL query generated by Power BI and sent to the SQL engine in the background, I noticed that an extremely complex query had been created, performing joins and UNION ALLs on every single value from the Date dimension:

SELECT TOP (1000001) [semijoin1].[c1], SUM([a0]) AS [a0]
FROM (
    (SELECT [t1].[TpepPickupDatetime] AS [c13], [t1].[TotalAmount] AS [a0]
     FROM ((SELECT * FROM taxi201901parquet
            UNION ALL
            SELECT * FROM taxi201902parquet)) AS [t1]) AS [basetable0]
    INNER JOIN (
              (SELECT 3 AS [c1], CAST('20190101 00:00:00' AS datetime) AS [c13])
    UNION ALL (SELECT 4 AS [c1], CAST('20190102 00:00:00' AS datetime) AS [c13])
    UNION ALL (SELECT 5 AS [c1], CAST('20190103 00:00:00' AS datetime) AS [c13])
    UNION ALL (SELECT 6 AS [c1], CAST('20190104 00:00:00' AS datetime) AS [c13])
    UNION ALL (SELECT 7 AS [c1], CAST('20190105 00:00:00' AS datetime) AS [c13])
    UNION ALL (SELECT 8 AS [c1], CAST('20190106 00:00:00' AS datetime) AS [c13])
    UNION ALL (SELECT 9 AS [c1], CAST('20190107 00:00:00' AS datetime) AS [c13])
    UNION ALL (SELECT 10 AS [c1], CAST('20190108 00:00:00' AS datetime) AS [c13])
    UNION ALL (SELECT 11 AS [c1], CAST('20190109 00:00:00' AS datetime) AS [c13])
    UNION ALL.....

This is just an excerpt from the generated query; I've removed the rest of the code for the sake of readability. Once I excluded my Date dimension from the calculations, costs expectedly decreased to under 400 MB!
So, instead of 26 GB with the Date dimension, the processed data amount was now ~400 MB! To conclude, scenarios using the Composite model need careful evaluation and testing.

That's our tipping point! Here is where the magic happens! By being able to store columns physically separated, Parquet outperforms all previous use cases in this situation — and when I say this situation, I mean when you are able to reduce the number of necessary columns (include only those columns you need to query from the Power BI report). Once I created a view containing pre-aggregated data in the Serverless SQL pool, only 400 MB of data was processed! That's an enormous difference compared to all previous tests. Basically, that means that this query costs 0.002$! For easier calculation — I can run it 500x to pay 1$!

Here is the table with costs for every single use case I've examined: Looking at the table, and considering the different use cases we've examined above, the following conclusions can be made:

Whenever possible, use Parquet files instead of CSV
Whenever possible, import the data into Power BI — that means you will pay only when the data snapshot is being refreshed, not for every single query within the report
If you are dealing with Parquet files, whenever possible, create pre-aggregated data (views) in the Serverless SQL pool in Synapse
Since the Serverless SQL pool still doesn't support result-set caching (as far as I know, Microsoft's team is working on it), keep in mind that each time you run a query (even if it returns the same result set), the query will be executed and you will need to pay for it!
If your analytic workloads require a high number of queries over a large dataset (so large that Import mode is not an option), maybe you should consider storing the data in a Dedicated SQL pool, as you will then pay fixed storage costs instead of data processing costs each time you query the data.
Here, in order to benefit additionally from this scenario, you should materialize intermediate results using external tables BEFORE importing them into a Dedicated SQL pool! That way, your queries will read already prepared data instead of raw data.

Stick with the general best practices when using the Serverless SQL pool within Synapse Analytics.

In this article, we dived deep to test different scenarios and multiple use cases for using Power BI in combination with the Serverless SQL pool in Synapse Analytics. In my opinion, even though Synapse has a long way to go to fine-tune all the features and offerings within the Serverless SQL pool, there is no doubt that it is moving in the right direction. By constantly improving the product and regularly adding cool new features, Synapse can really become a one-stop shop for all your data workloads.

In the last part of this blog series, we will check how Power BI integrates with Azure's NoSQL solution (Cosmos DB), and how the Serverless SQL pool can help optimize analytic workloads with the assistance of Azure Synapse Link for Cosmos DB.

Thanks for reading! Become a member and read every story on Medium!
[ { "code": null, "e": 516, "s": 172, "text": "By introducing Azure Synapse Analytics in late 2019, a whole new perspective was created when it comes to data treatment. Some core concepts, such as traditional data warehousing, came under more scrutiny, while various fresh approaches started to pop up after data nerds became aware of the new capabilities that Synapse brought to the table." }, { "code": null, "e": 701, "s": 516, "text": "Not that Synapse made a strong impact on data ingestion, transformation, and storage options only — it also offered a whole new set of possibilities for data serving and visualization!" }, { "code": null, "e": 1309, "s": 701, "text": "Therefore, in this series of blog posts, I will try to explore how Power BI works in synergy with the new platform. What options we, Power BI developers, have when working with Synapse? In which data analytics scenarios, Synapse will play on the edge, helping you to achieve the (im)possible? When would you want to take advantage of the innovative solutions within Synapse, and when would you be better sticking with more conventional approaches? What are the best practices when using Power BI — Synapse combo, and which parameters should you evaluate before making a final decision on which path to take." }, { "code": null, "e": 1497, "s": 1309, "text": "Once we’re done, I believe that you should get a better understanding of the “pros” and “cons” for each of the available options when it comes to integration between Power BI and Synapse." }, { "code": null, "e": 1550, "s": 1497, "text": "Power BI & Synapse Part 1 — The Art of (im)possible!" }, { "code": null, "e": 1612, "s": 1550, "text": "Power BI & Synapse Part 2 — What Synapse brings to the table?" }, { "code": null, "e": 1916, "s": 1612, "text": "Don’t get me wrong — the theory is nice, and you should definitely spend your time trying to absorb basic architectural concepts of new technology, tool, or feature. 
Because, if you don’t understand how something works under the hood, there is a great chance that you won’t take maximum advantage of it." }, { "code": null, "e": 2268, "s": 1916, "text": "But, putting this theoretical knowledge on a practice test is what makes the most interesting part! At least for me:)...It reminds me of the car production process: they build everything, you read the specs and you are impressed! Features, equipment...However, it’s all irrelevant until they put the car on a crash test and get a proper reality check." }, { "code": null, "e": 2432, "s": 2268, "text": "Therefore, this article will be like a crash test for the Serverless SQL pool within Synapse Analytics — a lot of different scenarios, tests, demos, measures, etc." }, { "code": null, "e": 2622, "s": 2432, "text": "I’ve already written about the Serverless SQL pool, and I firmly believe that it is the next big thing when it comes to dealing with large volumes of semi-structured or non-structured data." }, { "code": null, "e": 3044, "s": 2622, "text": "The greatest advantage of the Serverless SQL pool is that you can query the data directly from the CSV, parquet, or JSON files, stored in your Azure Data Lake, without the need to transfer the data! Even more, you can write plain T-SQL to retrieve the data directly from the files! But, let’s see how this works in reality in various use-cases, and, the most important thing, how much each of the solutions will cost you!" }, { "code": null, "e": 3173, "s": 3044, "text": "There are still some things that Microsoft won’t charge you for data processing when using Synapse Serverless SQL pool, such as:" }, { "code": null, "e": 3241, "s": 3173, "text": "Server-level metadata (logins, roles, and server-level credentials)" }, { "code": null, "e": 3466, "s": 3241, "text": "Databases you create in your endpoint. 
Those databases contain only metadata (users, roles, schemas, views, inline table-valued functions, stored procedures, external data sources, external file formats, and external tables)" }, { "code": null, "e": 3605, "s": 3466, "text": "DDL statements, except for the CREATE STATISTICS statement because it processes data from storage based on the specified sample percentage" }, { "code": null, "e": 3627, "s": 3605, "text": "Metadata-only queries" }, { "code": null, "e": 3888, "s": 3627, "text": "Here is the scenario: I have two CSV files related to the NYC taxi dataset, that I’ve already used in one of the previous demos. One contains data about all yellow cab rides from January 2019 (650 MB), while the other contains data from February 2019 (620 MB)." }, { "code": null, "e": 4052, "s": 3888, "text": "I’ve created two separate views, for each month’s data. The idea is to check what happens when we query the data from Power BI under multiple different conditions." }, { "code": null, "e": 4111, "s": 4052, "text": "Here is the T-SQL for creating a view over a single month:" }, { "code": null, "e": 5402, "s": 4111, "text": "DROP VIEW IF EXISTS taxi201902csv;GOCREATE VIEW taxi201902csv ASSELECT VendorID ,cast(tpep_pickup_datetime as DATE) tpep_pickup_datetime ,cast(tpep_dropoff_datetime as DATE) tpep_dropoff_datetime ,passenger_count ,trip_distance ,RateCodeID ,store_and_fwd_flag ,PULocationID ,DOLocationID ,payment_type ,fare_amount ,extra ,mta_tax ,tip_amount ,tolls_amount ,improvement_surcharge ,total_amount ,congestion_surchargeFROM OPENROWSET( BULK N'https://nikola.dfs.core.windows.net/nikola/Data/yellow_tripdata_2019-02.csv', FORMAT = 'CSV', PARSER_VERSION='2.0', HEADER_ROW = TRUE ) WITH( VendorID INT, tpep_pickup_datetime DATETIME2, tpep_dropoff_datetime DATETIME2, passenger_count INT, trip_distance DECIMAL(10,2), RateCodeID INT, store_and_fwd_flag VARCHAR(10), PULocationID INT, DOLocationID INT, payment_type INT, fare_amount DECIMAL(10,2), extra DECIMAL(10,2), 
mta_tax DECIMAL(10,2), tip_amount DECIMAL(10,2), tolls_amount DECIMAL(10,2), improvement_surcharge DECIMAL(10,2), total_amount DECIMAL(10,2), congestion_surcharge DECIMAL(10,2) ) AS [taxi201902csv]" }, { "code": null, "e": 5594, "s": 5402, "text": "I am using the WITH block to explicitly define data types; if you don’t do it, all your character columns will be automatically set to VARCHAR(8000) and will consequently be more memory expensive." }, { "code": null, "e": 5943, "s": 5594, "text": "As you can see, I’ve renamed my generic column names from the CSV files, so they now look more readable. I’ve also cast the DateTime columns to the Date type only, as I don’t need the time portion for this demo. This way, we’ve reduced the cardinality and, consequently, the whole data model size. Let’s check how many records each of these files contains:" }, { "code": null, "e": 6021, "s": 5943, "text": "January contains ~7.6 million records, while February has ~7 million records." }, { "code": null, "e": 6211, "s": 6021, "text": "Additionally, I’ve also applied the same logic and built two views over exactly the same portion of data coming from the Parquet files."
}, { "code": null, "e": 6350, "s": 6211, "text": "Views used for Parquet files were built slightly different, using FILENAME() and FILEPATH() functions to eliminate unnecessary partitions:" }, { "code": null, "e": 7213, "s": 6350, "text": "DROP VIEW IF EXISTS taxi201901parquet;GOCREATE VIEW taxi201901parquet ASSELECT VendorID ,CAST(TpepPickupDatetime AS DATE) TpepPickupDatetime ,CAST(TpepDropoffDatetime AS DATE) TpepDropoffDatetime ,PassengerCount ,TripDistance ,PuLocationId ,DoLocationId ,StartLon ,StartLat ,EndLon ,EndLat ,RateCodeId ,StoreAndFwdFlag ,PaymentType ,FareAmount ,Extra ,MtaTax ,ImprovementSurcharge ,TipAmount ,TollsAmount ,TotalAmountFROM OPENROWSET( BULK 'puYear=*/puMonth=*/*.snappy.parquet', DATA_SOURCE = 'YellowTaxi', FORMAT='PARQUET' ) nycWHERE nyc.filepath(1) = 2019 AND nyc.filepath(2) IN (1) AND tpepPickupDateTime BETWEEN CAST('1/1/2019' AS datetime) AND CAST('1/31/2019' AS datetime)" }, { "code": null, "e": 7559, "s": 7213, "text": "As specified in the best practices for using Serverless SQL pool in Synapse, we explicitly instructed our query to target only the year 2019 and only the month of January! This will reduce the amount of data for scanning and processing. We didn’t have to do this for our CSV files, as they were already partitioned per month and saved like that." }, { "code": null, "e": 7904, "s": 7559, "text": "One important disclaimer: As the usage of the Serverless SQL pool is being charged per volume of the data processed (it’s currently priced starting at 5$/TB of processed data), I will not measure performance in terms of speed. I want to focus solely on the analysis of data volume processed and costs generated by examining different scenarios." 
}, { "code": null, "e": 8103, "s": 7904, "text": "Before we proceed with testing, a little more theory...As I plan to compare data processing between CSV and Parquet files, I believe we should understand the key differences between these two types:" }, { "code": null, "e": 8330, "s": 8103, "text": "In Parquet files, data is compressed in a more optimal way. As you may recall from one of the screenshots above, a parquet file consumes approximately 1/3 of memory compared to a CSV file that contains the same portion of data" }, { "code": null, "e": 8701, "s": 8330, "text": "Parquet files support column storage format — that being said, columns within the Parquet file are physically separated, which means that you don’t need to scan the whole file if you need data from few columns only! On the opposite, when you’re querying a CSV file, every time you send the query, it will scan the whole file, even if you need data from one single column" }, { "code": null, "e": 8823, "s": 8701, "text": "For those coming from the traditional SQL world, you can think of CSV vs Parquet, such as row-store vs columnar databases" }, { "code": null, "e": 9001, "s": 8823, "text": "Let’s start with the most obvious and desirable scenario — using Import mode to ingest all the data into Power BI, and performing data refresh to check how much it will cost us." }, { "code": null, "e": 9344, "s": 9001, "text": "One last thing before we get our hands dirty — in Synapse Analytics, there is still not a feature to measure the cost of a specific query. You can check the volume of processed data on a daily, weekly, or monthly level. Additionally, you can set the limits if you want, on each of those time-granularity levels — more on that in this article." }, { "code": null, "e": 9496, "s": 9344, "text": "This feature is also relatively new, so I’m glad to see that Synapse permanently makes progress in a promising way of providing full cost transparency." 
}, { "code": null, "e": 9664, "s": 9496, "text": "Since we can’t measure the exact cost of every single query, I will try to calculate the figures by querying DMV within the master database, using the following T-SQL:" }, { "code": null, "e": 9729, "s": 9664, "text": "SELECT * FROM sys.dm_external_data_processedWHERE type = 'daily'" }, { "code": null, "e": 9876, "s": 9729, "text": "So, my starting point for today is around 29.7 GB, and I will calculate the difference each time I target the Serverless SQL pool to get the data." }, { "code": null, "e": 9984, "s": 9876, "text": "Ok, going back to the first scenario, I will import the data into Power BI, for both months from CSV files:" }, { "code": null, "e": 10380, "s": 9984, "text": "The most fascinating thing is that I’m writing plain T-SQL, so my users don’t even know that they are getting the data directly from CSV files! I’m using UNION ALL, as I’m sure that there are no identical records in my two views, and in theory, it should run faster than UNION, but I’ve could also create a separate view using this same T-SQL statement, and then use that joint view in Power BI." }, { "code": null, "e": 10674, "s": 10380, "text": "I will need a proper date dimension table for testing different scenarios, and I will create it using Power Query. This date table will be in Import mode in all scenarios, so it should not affect the amount of processed data from the Serverless SQL pool. 
Here is the M code for the date table:" }, { "code": null, "e": 12571, "s": 10674, "text": "let StartDate = #date(StartYear,1,1), EndDate = #date(EndYear,12,31), NumberOfDays = Duration.Days( EndDate - StartDate ), Dates = List.Dates(StartDate, NumberOfDays+1, #duration(1,0,0,0)), #\"Converted to Table\" = Table.FromList(Dates, Splitter.SplitByNothing(), null, null, ExtraValues.Error), #\"Renamed Columns\" = Table.RenameColumns(#\"Converted to Table\",{{\"Column1\", \"FullDateAlternateKey\"}}), #\"Changed Type\" = Table.TransformColumnTypes(#\"Renamed Columns\",{{\"FullDateAlternateKey\", type date}}), #\"Inserted Year\" = Table.AddColumn(#\"Changed Type\", \"Year\", each Date.Year([FullDateAlternateKey]), type number), #\"Inserted Month\" = Table.AddColumn(#\"Inserted Year\", \"Month\", each Date.Month([FullDateAlternateKey]), type number), #\"Inserted Month Name\" = Table.AddColumn(#\"Inserted Month\", \"Month Name\", each Date.MonthName([FullDateAlternateKey]), type text), #\"Inserted Quarter\" = Table.AddColumn(#\"Inserted Month Name\", \"Quarter\", each Date.QuarterOfYear([FullDateAlternateKey]), type number), #\"Inserted Week of Year\" = Table.AddColumn(#\"Inserted Quarter\", \"Week of Year\", each Date.WeekOfYear([FullDateAlternateKey]), type number), #\"Inserted Week of Month\" = Table.AddColumn(#\"Inserted Week of Year\", \"Week of Month\", each Date.WeekOfMonth([FullDateAlternateKey]), type number), #\"Inserted Day\" = Table.AddColumn(#\"Inserted Week of Month\", \"Day\", each Date.Day([FullDateAlternateKey]), type number), #\"Inserted Day of Week\" = Table.AddColumn(#\"Inserted Day\", \"Day of Week\", each Date.DayOfWeek([FullDateAlternateKey]), type number), #\"Inserted Day of Year\" = Table.AddColumn(#\"Inserted Day of Week\", \"Day of Year\", each Date.DayOfYear([FullDateAlternateKey]), type number), #\"Inserted Day Name\" = Table.AddColumn(#\"Inserted Day of Year\", \"Day Name\", each Date.DayOfWeekName([FullDateAlternateKey]), type text)in 
#\"Inserted Day Name\"" }, { "code": null, "e": 12666, "s": 12571, "text": "It took a while to load the data in the Power BI Desktop, so let’s check now some key metrics." }, { "code": null, "e": 12699, "s": 12666, "text": "My table has ~ 14.7 million rows" }, { "code": null, "e": 12847, "s": 12699, "text": "My whole data model size is ~92 MB, as the data is optimally compressed within Power BI Desktop (we’ve reduced the cardinality of DateTime columns)" }, { "code": null, "e": 13211, "s": 12847, "text": "Once I’ve created my table visual, displaying total records per date, my daily data processed volume is ~33.3 GB. Let’s refresh the data model to check how expensive it would be. So, Power BI Desktop will now go to a Serverless SQL pool, querying data from my two views, but don’t forget that in the background are two CSV files as an ultimate source of our data!" }, { "code": null, "e": 13382, "s": 13211, "text": "After the refresh, my daily value had increased to ~36.9 GB, which means that this refresh costs me ~3.6 GB. In terms of money, that’s around 0.018 $ (0.0036 TB x 5 USD)." }, { "code": null, "e": 13570, "s": 13382, "text": "In this use case, it would cost me money only when my Power BI model is being refreshed! Simply said, if I refresh my data model once per day, this report will cost me 54 cents per month." }, { "code": null, "e": 13727, "s": 13570, "text": "Let’s now check what would happen if we use exactly the same query, but instead of importing data into Power BI Desktop, we will use the DirectQuery option." }, { "code": null, "e": 13896, "s": 13727, "text": "Let’s first interact with the date slicer, so we can check how much this will cost us. My starting point for measuring is ~87.7 GB and this is how my report looks like:" }, { "code": null, "e": 14196, "s": 13896, "text": "Refreshing the whole query burned out ~2.8 GB, which is ~0.014$. Now, this is for one single visual on the page! 
Keep in mind that when you’re using DirectQuery, each visual will generate a separate query to the underlying data source. Let’s check what happens when I add another visual on the page:" }, { "code": null, "e": 14350, "s": 14196, "text": "Now, this query costs me ~4 GB, which is 0.02$. As you can conclude, increasing the number of visuals on your report canvas will also increase the costs." }, { "code": null, "e": 14582, "s": 14350, "text": "One more important thing to keep in mind: these costs are per user! So, if you have 10 users running this same report in parallel, you should multiply costs by 10, as a new query will be generated for each visual and for each user." }, { "code": null, "e": 14718, "s": 14582, "text": "Now, I want to check what happens if I select a specific date range within my slicer, for example between January 1st and January 13th:" }, { "code": null, "e": 14947, "s": 14718, "text": "The first thing I notice is that query cost me exactly the same! The strange thing is, if I look at the SQL query generated to retrieve the data, I can see the engine was clever enough to apply a date filter in the WHERE clause:" }, { "code": null, "e": 17008, "s": 14947, "text": "/*Query 1*/SELECT TOP (1000001) [semijoin1].[c1],SUM([a0]) AS [a0]FROM ((SELECT [t1].[tpep_pickup_datetime] AS [c14],[t1].[total_amount] AS [a0]FROM ((SELECT * FROM dbo.taxi201901csvUNION ALLSELECT * FROM dbo.taxi201902csv)) AS [t1]) AS [basetable0] INNER JOIN ((SELECT 3 AS [c1],CAST( '20190101 00:00:00' AS datetime) AS [c14] ) UNION ALL (SELECT 4 AS [c1],CAST( '20190102 00:00:00' AS datetime) AS [c14] ) UNION ALL (SELECT 5 AS [c1],CAST( '20190103 00:00:00' AS datetime) AS [c14] ) UNION ALL (SELECT 6 AS [c1],CAST( '20190104 00:00:00' AS datetime) AS [c14] ) UNION ALL (SELECT 7 AS [c1],CAST( '20190105 00:00:00' AS datetime) AS [c14] ) UNION ALL (SELECT 8 AS [c1],CAST( '20190106 00:00:00' AS datetime) AS [c14] ) UNION ALL (SELECT 9 AS [c1],CAST( '20190107 00:00:00' AS datetime) AS [c14] 
) UNION ALL (SELECT 10 AS [c1],CAST( '20190108 00:00:00' AS datetime) AS [c14] ) UNION ALL (SELECT 11 AS [c1],CAST( '20190109 00:00:00' AS datetime) AS [c14] ) UNION ALL (SELECT 12 AS [c1],CAST( '20190110 00:00:00' AS datetime) AS [c14] ) UNION ALL (SELECT 13 AS [c1],CAST( '20190111 00:00:00' AS datetime) AS [c14] ) UNION ALL (SELECT 14 AS [c1],CAST( '20190112 00:00:00' AS datetime) AS [c14] ) UNION ALL (SELECT 15 AS [c1],CAST( '20190113 00:00:00' AS datetime) AS [c14] ) ) AS [semijoin1] on (([semijoin1].[c14] = [basetable0].[c14])))GROUP BY [semijoin1].[c1] /*Query 2*/SELECT SUM([t1].[total_amount]) AS [a0]FROM ((SELECT * FROM dbo.taxi201901csvUNION ALLSELECT * FROM dbo.taxi201902csv)) AS [t1]WHERE (([t1].[tpep_pickup_datetime] IN (CAST( '20190112 00:00:00' AS datetime),CAST( '20190113 00:00:00' AS datetime),CAST( '20190101 00:00:00' AS datetime),CAST( '20190102 00:00:00' AS datetime),CAST( '20190103 00:00:00' AS datetime),CAST( '20190104 00:00:00' AS datetime),CAST( '20190105 00:00:00' AS datetime),CAST( '20190106 00:00:00' AS datetime),CAST( '20190107 00:00:00' AS datetime),CAST( '20190108 00:00:00' AS datetime),CAST( '20190109 00:00:00' AS datetime),CAST( '20190110 00:00:00' AS datetime),CAST( '20190111 00:00:00' AS datetime))))" }, { "code": null, "e": 17266, "s": 17008, "text": "However, it appears that the underlying view scans the whole chunk of the data within the CSV file! So, there is no benefit at all in terms of savings if you use a date slicer to limit the volume of data, as the whole CSV file will be scanned in any case..." }, { "code": null, "e": 17483, "s": 17266, "text": "The next test will show us what happens if we create an aggregated table and store it in DirectQuery mode within the Power BI. It’s quite a simple aggregated table, consisting of total amount and pickup time columns." 
}, { "code": null, "e": 17646, "s": 17483, "text": "My query hit the aggregated table, but it didn’t change anything in terms of the total query cost, as it was exactly the same as in the previous use case: ~0.02$!" }, { "code": null, "e": 17838, "s": 17646, "text": "After that, I want to check what happens if I import a previously aggregated table into Power BI. I believe that calculations will be faster, but let’s see how it will affect the query costs." }, { "code": null, "e": 17984, "s": 17838, "text": "As expected, this was pretty quick, aggregated table was hit, so we are only paying the price of the data refresh, as in our Use Case #1: 0.018$!" }, { "code": null, "e": 18149, "s": 17984, "text": "One last thing I want to check, is what happens if I know my analytic workloads, and can prepare some most frequent queries in advance, using a Serverless SQL pool." }, { "code": null, "e": 18308, "s": 18149, "text": "Therefore, I will create a view, that will aggregate the data like in the previous case within Power BI Desktop, but this time within the Serverless SQL pool:" }, { "code": null, "e": 18736, "s": 18308, "text": "DROP VIEW IF EXISTS taxi201901_02_agg;GOCREATE VIEW taxi201901_02_agg AS SELECT CAST(C2 AS DATE) AS tpep_pickup_datetime, SUM(CAST(C17 AS DECIMAL(10,2))) AS total_amountFROM OPENROWSET( BULK N'https://nikola.dfs.core.windows.net/nikola/Data/yellow_tripdata_2019-01.csv', FORMAT = 'CSV', PARSER_VERSION='2.0', HEADER_ROW = TRUE ) AS [taxi201901_02_agg] GROUP BY CAST(C2 AS DATE)" }, { "code": null, "e": 18851, "s": 18736, "text": "Basically, we are aggregating data on the source side and that should obviously help. So, let’s check the outcome:" }, { "code": null, "e": 18980, "s": 18851, "text": "It returned requested figures faster, but the amount of the processed data was again the same! This query again costs me ~0.02$!" 
}, { "code": null, "e": 19175, "s": 18980, "text": "This brings me to a conclusion: no matter what you perform within the Serverless SQL pool on top of the CSV files, they will be fully scanned at the lowest level of the data preparation process!" }, { "code": null, "e": 19865, "s": 19175, "text": "However, and that’s important, it’s not just the scanned data amount that makes the total of the processed data, but also the amount of streamed data to a client: in my example, the difference between streamed data is not so big (275 MB when I’ve included all columns vs 1 MB when targeting aggregated data), and that’s why the final price wasn’t noticeably different. I assume that when you’re working with larger data sets (few TBs), the cost difference would be far more obvious. So, keep in mind that pre-aggregating data within a Serverless SQL pool can save you the amount of streamed data, which also means that your overall costs will be reduced! You can find all the details here." }, { "code": null, "e": 19956, "s": 19865, "text": "Let’s now evaluate if something changes if we use data from Parquet files, instead of CSV." }, { "code": null, "e": 20105, "s": 19956, "text": "The first use case is importing Parquet files. As expected, as they are better compressed than CSV files, costs decreased, almost by double: ~0.01$!" }, { "code": null, "e": 20276, "s": 20105, "text": "And finally, let’s examine the figures if we use DirectQuery mode in Power BI to query the data directly from the Parquet files within the Serverless SQL pool in Synapse." }, { "code": null, "e": 20366, "s": 20276, "text": "To my negative surprise, this query processed ~26 GB of data, which translates to ~0.13$!" }, { "code": null, "e": 20753, "s": 20366, "text": "As that looked completely strange, I started to investigate and found out that the main culprit for the high cost was the Date dimension created using M! 
While debugging the SQL query generated by Power BI and sent to the SQL engine in the background, I noticed that an extremely complex query had been created, performing joins and UNION ALLs on every single value from the Date dimension:" }, { "code": null, "e": 21717, "s": 20753, "text": "SELECT TOP (1000001) [semijoin1].[c1],SUM([a0]) AS [a0] FROM ((SELECT [t1].[TpepPickupDatetime] AS [c13],[t1].[TotalAmount] AS [a0] FROM ((SELECT * FROM taxi201901parquet UNION ALL SELECT * FROM taxi201902parquet)) AS [t1]) AS [basetable0] INNER JOIN ((SELECT 3 AS [c1],CAST( '20190101 00:00:00' AS datetime) AS [c13] ) UNION ALL (SELECT 4 AS [c1],CAST( '20190102 00:00:00' AS datetime) AS [c13] ) UNION ALL (SELECT 5 AS [c1],CAST( '20190103 00:00:00' AS datetime) AS [c13] ) UNION ALL (SELECT 6 AS [c1],CAST( '20190104 00:00:00' AS datetime) AS [c13] ) UNION ALL (SELECT 7 AS [c1],CAST( '20190105 00:00:00' AS datetime) AS [c13] ) UNION ALL (SELECT 8 AS [c1],CAST( '20190106 00:00:00' AS datetime) AS [c13] ) UNION ALL (SELECT 9 AS [c1],CAST( '20190107 00:00:00' AS datetime) AS [c13] ) UNION ALL (SELECT 10 AS [c1],CAST( '20190108 00:00:00' AS datetime) AS [c13] ) UNION ALL (SELECT 11 AS [c1],CAST( '20190109 00:00:00' AS datetime) AS [c13] ) UNION ALL....." }, { "code": null, "e": 21826, "s": 21717, "text": "This is just an excerpt from the generated query; I’ve removed the rest of the code for the sake of readability." }, { "code": null, "e": 22017, "s": 21826, "text": "Once I excluded my Date dimension from the calculations, the processed data volume expectedly decreased to under 400 MB!!! So, instead of 26 GB with the Date dimension, the processed data amount was now ~400 MB!" }, { "code": null, "e": 22109, "s": 22017, "text": "To conclude, these scenarios using the Composite model need careful evaluation and testing." }, { "code": null, "e": 22456, "s": 22109, "text": "That’s our tipping point! Here is where the magic happens! 
By being able to store columns physically separately, Parquet outperforms all previous use cases in this situation — and when I say this situation — I mean when you are able to reduce the number of necessary columns (include only those columns you need to query from the Power BI report)." }, { "code": null, "e": 22745, "s": 22456, "text": "Once I’d created a view containing pre-aggregated data in the Serverless SQL pool, only 400 MB of data was processed! That’s an enormous difference compared to all previous tests. Basically, that means that this query costs 0.002$! For easier calculation — I can run it 500x to pay 1$!" }, { "code": null, "e": 22815, "s": 22745, "text": "Here is the table with costs for every single use case I’ve examined:" }, { "code": null, "e": 22934, "s": 22815, "text": "Looking at the table, and considering the different use cases we’ve examined above, the following conclusions can be made:" }, { "code": null, "e": 22986, "s": 22934, "text": "Whenever possible, use Parquet files instead of CSV" }, { "code": null, "e": 23154, "s": 22986, "text": "Whenever possible, import the data into Power BI — that means you will pay only when the data snapshot is being refreshed, not for every single query within the report" }, { "code": null, "e": 23289, "s": 23154, "text": "If you are dealing with Parquet files, whenever possible, create pre-aggregated data (views) in the Serverless SQL pool in Synapse" }, { "code": null, "e": 23563, "s": 23289, "text": "Since the Serverless SQL pool still doesn’t support a ResultSet Cache (as far as I know, Microsoft’s team is working on it), keep in mind that each time you run the query (even if you’re returning the same result set), the query will be executed and you will need to pay for it!"
}, { "code": null, "e": 24118, "s": 23563, "text": "If your analytic workloads require a high number of queries over a large dataset (so large that Import mode is not an option), maybe you should consider storing data in the Dedicated SQL pool, as you will pay fixed storage costs then, instead of data processing costs each time you query the data. Here, in order to additionally benefit from using this scenario, you should materialize intermediate results using external tables, BEFORE importing them into a Dedicated SQL pool! That way, your queries will read already prepared data, instead of raw data" }, { "code": null, "e": 24212, "s": 24118, "text": "Stick with the general best practices when using Serverless SQL pool within Synapse Analytics" }, { "code": null, "e": 24381, "s": 24212, "text": "In this article, we dived deep to test different scenarios and multiple use cases, when using Power BI in combination with the Serverless SQL pool in Synapse Analytics." }, { "code": null, "e": 24719, "s": 24381, "text": "In my opinion, even though Synapse has a long way to go to fine-tune all the features and offerings within the Serverless SQL pool, there is no doubt that it is moving in the right direction. With constantly improving the product, and regularly adding cool new features, Synapse can really be a one-stop-shop for all your data workloads." }, { "code": null, "e": 24965, "s": 24719, "text": "In the last part of this blog series, we will check how Power BI integrates with Azure’s NoSQL solution (Cosmos DB), and how the Serverless SQL pool can help to optimize analytic workloads with the assistance of Azure Synapse Link for Cosmos DB." }, { "code": null, "e": 24985, "s": 24965, "text": "Thanks for reading!" } ]
HTML Tables
18 May, 2022 In this article, we will learn about the HTML Table, various ways to implement it, and will also understand its usage through examples. An HTML Table is an arrangement of data in rows and columns, or possibly in a more complex structure. Tables are widely used in communication, research, and data analysis. Tables are useful for various tasks such as presenting text information and numerical data. They can be used to compare two or more items in a tabular layout. Tables are used to create databases. Defining Tables in HTML: An HTML table is defined with the “table” tag. Each table row is defined with the “tr” tag. A table header is defined with the “th” tag. By default, table headings are bold and centered. A table data/cell is defined with the “td” tag. Example 1: In this example, we are creating a simple table in HTML using the table tag. HTML <!DOCTYPE html><html> <body> <table> <tr> <th>Book Name</th> <th>Author Name</th> <th>Genre</th> </tr> <tr> <td>The Book Thief</td> <td>Markus Zusak</td> <td>Historical Fiction</td> </tr> <tr> <td>The Cruel Prince</td> <td>Holly Black</td> <td>Fantasy</td> </tr> <tr> <td>The Silent Patient</td> <td> Alex Michaelides</td> <td>Psychological Fiction</td> </tr> </table></body> </html> Output: HTML Table Example 2: This example explains the use of the HTML Table. HTML <!DOCTYPE html><html> <body> <table> <tr> <th>Firstname</th> <th>Lastname</th> <th>Age</th> </tr> <tr> <td>Priya</td> <td>Sharma</td> <td>24</td> </tr> <tr> <td>Arun</td> <td>Singh</td> <td>32</td> </tr> <tr> <td>Sam</td> <td>Watson</td> <td>41</td> </tr> </table></body> </html> Output: Simple HTML Table Accepted Attributes: <table> cellspacing Attribute <table> rules Attribute Adding a border to an HTML Table: A border is set using the CSS border property. If you do not specify a border for the table, it will be displayed without borders. Example 3: This example explains the addition of the border to the HTML Table. 
HTML <!DOCTYPE html><html> <head> <style> table, th, td { border: 1px solid black; } </style></head> <body> <table style="width:100%"> <tr> <th>Firstname</th> <th>Lastname</th> <th>Age</th> </tr> <tr> <td>Priya</td> <td>Sharma</td> <td>24</td> </tr> <tr> <td>Arun</td> <td>Singh</td> <td>32</td> </tr> <tr> <td>Sam</td> <td>Watson</td> <td>41</td> </tr> </table></body> </html> Output: HTML Table with border Adding Collapsed Borders in an HTML Table: For borders to collapse into one border, add the CSS border-collapse property. Example 4: This example describes the addition of Collapsed Borders in HTML. HTML <!DOCTYPE html><html> <head> <style> table, th, td { border: 1px solid black; border-collapse: collapse; } </style></head> <body> <table style="width:100%"> <tr> <th>Firstname</th> <th>Lastname</th> <th>Age</th> </tr> <tr> <td>Priya</td> <td>Sharma</td> <td>24</td> </tr> <tr> <td>Arun</td> <td>Singh</td> <td>32</td> </tr> <tr> <td>Sam</td> <td>Watson</td> <td>41</td> </tr> </table></body> </html> Output: HTML Table with Collapsed Borders Adding Cell Padding in an HTML Table: Cell padding specifies the space between the cell content and its borders. If we do not specify a padding, the table cells will be displayed without padding. Example 5: This example describes the addition of Table cell padding in HTML. HTML <!DOCTYPE html><html> <head> <style> table, th, td { border: 1px solid black; border-collapse: collapse; } th, td { padding: 20px; } </style></head> <body> <table style="width:100%"> <tr> <th>Firstname</th> <th>Lastname</th> <th>Age</th> </tr> <tr> <td>Priya</td> <td>Sharma</td> <td>24</td> </tr> <tr> <td>Arun</td> <td>Singh</td> <td>32</td> </tr> <tr> <td>Sam</td> <td>Watson</td> <td>41</td> </tr> </table></body> </html> Output: Adding Table cell padding Adding Left Align Headings in an HTML Table: By default, the table headings are bold and centered. To left-align the table headings, we must use the CSS text-align property. 
Example 6: This example explains the text-align property where the text is aligned to the left. HTML <html> <head> <style> table, th, td { border: 1px solid black; border-collapse: collapse; } th, td { padding: 20px; } th { text-align: left; } </style></head> <body> <table style="width:100%"> <tr> <th>Firstname</th> <th>Lastname</th> <th>Age</th> </tr> <tr> <td>Priya</td> <td>Sharma</td> <td>24</td> </tr> <tr> <td>Arun</td> <td>Singh</td> <td>32</td> </tr> <tr> <td>Sam</td> <td>Watson</td> <td>41</td> </tr> </table></body> </html> Output: text-align Property Adding Border Spacing in an HTML Table: Border spacing specifies the space between the cells. To set the border-spacing for a table, we must use the CSS border-spacing property. Example 7: This example explains the border space property to make the space between the Table cells. HTML <html> <head> <style> table, th, td { border: 1px solid black; } table { border-spacing: 5px; } </style></head> <body> <table style="width:100%"> <tr> <th>Firstname</th> <th>Lastname</th> <th>Age</th> </tr> <tr> <td>Priya</td> <td>Sharma</td> <td>24</td> </tr> <tr> <td>Arun</td> <td>Singh</td> <td>32</td> </tr> <tr> <td>Sam</td> <td>Watson</td> <td>41</td> </tr> </table></body> </html> Output: Border Spacing Property Adding Cells that Span Many Columns in HTML Tables: To make a cell span more than one column, we must use the colspan attribute. Example 8: This example describes the use of the colspan attribute in HTML. 
HTML <!DOCTYPE html><html> <head> <style> table, th, td { border: 1px solid black; border-collapse: collapse; } th, td { padding: 5px; text-align: left; } </style></head> <body> <h2>Cell that spans two columns:</h2> <table style="width:100%"> <tr> <th>Name</th> <th colspan="2">Telephone</th> </tr> <tr> <td>Vikas Rawat</td> <td>9125577854</td> <td>8565557785</td> </tr> </table></body> </html> Output: colspan attribute Adding Cells that span many rows in HTML Tables: To make a cell span more than one row, we must use the rowspan attribute. Example 9: This example describes the use of the rowspan attribute in HTML. HTML <!DOCTYPE html><html> <head> <style> table, th, td { border: 1px solid black; border-collapse: collapse; } th, td { padding: 5px; text-align: left; } </style></head> <body> <h2>Cell that spans two rows:</h2> <table style="width:100%"> <tr> <th>Name:</th> <td>Vikas Rawat</td> </tr> <tr> <th rowspan="2">Telephone:</th> <td>9125577854</td> </tr> <tr> <td>8565557785</td> </tr> </table></body> </html> Output: Use of rowspan attribute Adding a Caption in an HTML Table: To add a caption to a table, we must use the “caption” tag. Example 10: This example describes the HTML Table caption by specifying the CSS properties for setting its width. HTML <html> <head> <style> table, th, td { border: 1px solid black; border-collapse: collapse; } th, td { padding: 20px; } th { text-align: left; } </style></head> <body> <table style="width:100%"> <caption>DETAILS</caption> <tr> <th>Firstname</th> <th>Lastname</th> <th>Age</th> </tr> <tr> <td>Priya</td> <td>Sharma</td> <td>24</td> </tr> <tr> <td>Arun</td> <td>Singh</td> <td>32</td> </tr> <tr> <td>Sam</td> <td>Watson</td> <td>41</td> </tr> </table></body> </html> Output: Adding the caption using the <caption> tag Adding a Background Colour to the Table: A color can be added as a background in an HTML table using the “background-color” option. 
Example 11: This example describes the addition of the Table background color in HTML. HTML <!DOCTYPE html><html> <head> <style> table, th, td { border: 1px solid black; border-collapse: collapse; } th, td { padding: 5px; text-align: left; } table#t01 { width: 100%; background-color: #f2f2d1; } </style></head> <body> <table style="width:100%"> <tr> <th>Firstname</th> <th>Lastname</th> <th>Age</th> </tr> <tr> <td>Priya</td> <td>Sharma</td> <td>24</td> </tr> <tr> <td>Arun</td> <td>Singh</td> <td>32</td> </tr> <tr> <td>Sam</td> <td>Watson</td> <td>41</td> </tr> </table> <br /> <br /> <table id="t01"> <tr> <th>Firstname</th> <th>Lastname</th> <th>Age</th> </tr> <tr> <td>Priya</td> <td>Sharma</td> <td>24</td> </tr> <tr> <td>Arun</td> <td>Singh</td> <td>32</td> </tr> <tr> <td>Sam</td> <td>Watson</td> <td>41</td> </tr> </table></body> </html> Output: Adding Table Background color using CSS properties Creating Nested Tables: Nesting tables simply means making a Table inside another Table. Nesting tables can lead to complex table layouts, which are visually interesting and have the potential to introduce errors. Example 12: This example describes the nesting of HTML Tables. HTML <!DOCTYPE html><html> <body> <table border=5 bordercolor=black> <tr> <td> First Column of Outer Table </td> <td> <table border=5 bordercolor=grey> <tr> <td> First row of Inner Table </td> </tr> <tr> <td> Second row of Inner Table </td> </tr> </table> </td> </tr> </table></body> </html> Output: Nested HTML Table Supported Browsers: Google Chrome Firefox Microsoft Edge Internet Explorer Safari Opera HTML is the foundation of webpages and is used for webpage development by structuring websites and web apps. You can learn HTML from the ground up by following this HTML Tutorial and HTML Examples.
K’th smallest element in BST using O(1) Extra Space
27 Jun, 2022 Given a Binary Search Tree (BST) and a positive integer k, find the k’th smallest element in the Binary Search Tree. For example, in the following BST, if k = 3, then output should be 10, and if k = 5, then output should be 14. We have discussed two methods in this post and one method in this post. All of the previous methods require extra space. How to find the k’th largest element without extra space? The idea is to use Morris Traversal. In this traversal, we first create links to Inorder successor and print the data using these links, and finally revert the changes to restore original tree. See this for more details. Below is the implementation of the idea. C++ Java Python3 C# Javascript // C++ program to find k'th largest element in BST#include<bits/stdc++.h>using namespace std; // A BST nodestruct Node{ int key; Node *left, *right;}; // A function to findint KSmallestUsingMorris(Node *root, int k){ // Count to iterate over elements till we // get the kth smallest number int count = 0; int ksmall = INT_MIN; // store the Kth smallest Node *curr = root; // to store the current node while (curr != NULL) { // Like Morris traversal if current does // not have left child rather than printing // as we did in inorder, we will just // increment the count as the number will // be in an increasing order if (curr->left == NULL) { count++; // if count is equal to K then we found the // kth smallest, so store it in ksmall if (count==k) ksmall = curr->key; // go to current's right child curr = curr->right; } else { // we create links to Inorder Successor and // count using these links Node *pre = curr->left; while (pre->right != NULL && pre->right != curr) pre = pre->right; // building links if (pre->right==NULL) { //link made to Inorder Successor pre->right = curr; curr = curr->left; } // While breaking the links in so made temporary // threaded tree we will check for the K smallest // condition else { // Revert the changes made in if part (break link // from 
the Inorder Successor) pre->right = NULL; count++; // If count is equal to K then we found // the kth smallest and so store it in ksmall if (count==k) ksmall = curr->key; curr = curr->right; } } } return ksmall; //return the found value} // A utility function to create a new BST nodeNode *newNode(int item){ Node *temp = new Node; temp->key = item; temp->left = temp->right = NULL; return temp;} /* A utility function to insert a new node with given key in BST */Node* insert(Node* node, int key){ /* If the tree is empty, return a new node */ if (node == NULL) return newNode(key); /* Otherwise, recur down the tree */ if (key < node->key) node->left = insert(node->left, key); else if (key > node->key) node->right = insert(node->right, key); /* return the (unchanged) node pointer */ return node;} // Driver Program to test above functionsint main(){ /* Let us create following BST 50 / \ 30 70 / \ / \ 20 40 60 80 */ Node *root = NULL; root = insert(root, 50); insert(root, 30); insert(root, 20); insert(root, 40); insert(root, 70); insert(root, 60); insert(root, 80); for (int k=1; k<=7; k++) cout << KSmallestUsingMorris(root, k) << " "; return 0;} // Java program to find k'th largest element in BSTimport java.util.*;class GfG { // A BST nodestatic class Node{ int key; Node left, right;} // A function to findstatic int KSmallestUsingMorris(Node root, int k){ // Count to iterate over elements till we // get the kth smallest number int count = 0; int ksmall = Integer.MIN_VALUE; // store the Kth smallest Node curr = root; // to store the current node while (curr != null) { // Like Morris traversal if current does // not have left child rather than printing // as we did in inorder, we will just // increment the count as the number will // be in an increasing order if (curr.left == null) { count++; // if count is equal to K then we found the // kth smallest, so store it in ksmall if (count==k) ksmall = curr.key; // go to current's right child curr = curr.right; } else { // we 
create links to Inorder Successor and // count using these links Node pre = curr.left; while (pre.right != null && pre.right != curr) pre = pre.right; // building links if (pre.right== null) { //link made to Inorder Successor pre.right = curr; curr = curr.left; } // While breaking the links in so made temporary // threaded tree we will check for the K smallest // condition else { // Revert the changes made in if part (break link // from the Inorder Successor) pre.right = null; count++; // If count is equal to K then we found // the kth smallest and so store it in ksmall if (count==k) ksmall = curr.key; curr = curr.right; } } } return ksmall; //return the found value} // A utility function to create a new BST nodestatic Node newNode(int item){ Node temp = new Node(); temp.key = item; temp.left = null; temp.right = null; return temp;} /* A utility function to insert a new node with given key in BST */static Node insert(Node node, int key){ /* If the tree is empty, return a new node */ if (node == null) return newNode(key); /* Otherwise, recur down the tree */ if (key < node.key) node.left = insert(node.left, key); else if (key > node.key) node.right = insert(node.right, key); /* return the (unchanged) node pointer */ return node;} // Driver Program to test above functionspublic static void main(String[] args){ /* Let us create following BST 50 / \ 30 70 / \ / \ 20 40 60 80 */ Node root = null; root = insert(root, 50); insert(root, 30); insert(root, 20); insert(root, 40); insert(root, 70); insert(root, 60); insert(root, 80); for (int k=1; k<=7; k++) System.out.print(KSmallestUsingMorris(root, k) + " "); }} # Python 3 program to find k'th# largest element in BST # A BST nodeclass Node: # Constructor to create a new node def __init__(self, data): self.key = data self.left = None self.right = None # A function to finddef KSmallestUsingMorris(root, k): # Count to iterate over elements # till we get the kth smallest number count = 0 ksmall = -9999999999 # store the Kth 
smallest curr = root # to store the current node while curr != None: # Like Morris traversal if current does # not have left child rather than # printing as we did in inorder, we # will just increment the count as the # number will be in an increasing order if curr.left == None: count += 1 # if count is equal to K then we # found the kth smallest, so store # it in ksmall if count == k: ksmall = curr.key # go to current's right child curr = curr.right else: # we create links to Inorder Successor # and count using these links pre = curr.left while (pre.right != None and pre.right != curr): pre = pre.right # building links if pre.right == None: # link made to Inorder Successor pre.right = curr curr = curr.left # While breaking the links in so made # temporary threaded tree we will check # for the K smallest condition else: # Revert the changes made in if part # (break link from the Inorder Successor) pre.right = None count += 1 # If count is equal to K then we # found the kth smallest and so # store it in ksmall if count == k: ksmall = curr.key curr = curr.right return ksmall # return the found value # A utility function to insert a new# node with given key in BSTdef insert(node, key): # If the tree is empty, # return a new node if node == None: return Node(key) # Otherwise, recur down the tree if key < node.key: node.left = insert(node.left, key) elif key > node.key: node.right = insert(node.right, key) # return the (unchanged) node pointer return node # Driver Codeif __name__ == '__main__': # Let us create following BST # 50 # / \ # 30 70 # / \ / \ # 20 40 60 80 root = None root = insert(root, 50) insert(root, 30) insert(root, 20) insert(root, 40) insert(root, 70) insert(root, 60) insert(root, 80) for k in range(1,8): print(KSmallestUsingMorris(root, k), end = " ") # This code is contributed by PranchalK // C# program to find k'th largest element in BSTusing System; class GfG{ // A BST nodepublic class Node{ public int key; public Node left, right;} // A function to 
findstatic int KSmallestUsingMorris(Node root, int k){ // Count to iterate over elements till we // get the kth smallest number int count = 0; int ksmall = int.MinValue; // store the Kth smallest Node curr = root; // to store the current node while (curr != null) { // Like Morris traversal if current does // not have left child rather than printing // as we did in inorder, we will just // increment the count as the number will // be in an increasing order if (curr.left == null) { count++; // if count is equal to K then we found the // kth smallest, so store it in ksmall if (count==k) ksmall = curr.key; // go to current's right child curr = curr.right; } else { // we create links to Inorder Successor and // count using these links Node pre = curr.left; while (pre.right != null && pre.right != curr) pre = pre.right; // building links if (pre.right == null) { // link made to Inorder Successor pre.right = curr; curr = curr.left; } // While breaking the links in so made temporary // threaded tree we will check for the K smallest // condition else { // Revert the changes made in if part (break link // from the Inorder Successor) pre.right = null; count++; // If count is equal to K then we found // the kth smallest and so store it in ksmall if (count == k) ksmall = curr.key; curr = curr.right; } } } return ksmall; //return the found value} // A utility function to create a new BST nodestatic Node newNode(int item){ Node temp = new Node(); temp.key = item; temp.left = null; temp.right = null; return temp;} /* A utility function to insert a new node with given key in BST */static Node insert(Node node, int key){ /* If the tree is empty, return a new node */ if (node == null) return newNode(key); /* Otherwise, recur down the tree */ if (key < node.key) node.left = insert(node.left, key); else if (key > node.key) node.right = insert(node.right, key); /* return the (unchanged) node pointer */ return node;} // Driver Program to test above functionspublic static void 
Main(String[] args){ /* Let us create following BST 50 / \ 30 70 / \ / \ 20 40 60 80 */ Node root = null; root = insert(root, 50); insert(root, 30); insert(root, 20); insert(root, 40); insert(root, 70); insert(root, 60); insert(root, 80); for (int k = 1; k <= 7; k++) Console.Write(KSmallestUsingMorris(root, k) + " "); }} // This code has been contributed by 29AjayKumar <script>// javascript program to find k'th largest element in BST // A BST node class Node { constructor() { this.key = 0; this.left = null; this.right = null; } } // A function to findfunction KSmallestUsingMorris(root , k){ // Count to iterate over elements till we // get the kth smallest number var count = 0; var ksmall = Number.MIN_VALUE; // store the Kth smallest var curr = root; // to store the current node while (curr != null) { // Like Morris traversal if current does // not have left child rather than printing // as we did in inorder, we will just // increment the count as the number will // be in an increasing order if (curr.left == null) { count++; // if count is equal to K then we found the // kth smallest, so store it in ksmall if (count==k) ksmall = curr.key; // go to current's right child curr = curr.right; } else { // we create links to Inorder Successor and // count using these links var pre = curr.left; while (pre.right != null && pre.right != curr) pre = pre.right; // building links if (pre.right== null) { //link made to Inorder Successor pre.right = curr; curr = curr.left; } // While breaking the links in so made temporary // threaded tree we will check for the K smallest // condition else { // Revert the changes made in if part (break link // from the Inorder Successor) pre.right = null; count++; // If count is equal to K then we found // the kth smallest and so store it in ksmall if (count==k) ksmall = curr.key; curr = curr.right; } } } return ksmall; //return the found value} // A utility function to create a new BST nodefunction newNode(item){ var temp = new Node(); temp.key = 
item; temp.left = null; temp.right = null; return temp;} /* A utility function to insert a new node with given key in BST */function insert(node , key){ /* If the tree is empty, return a new node */ if (node == null) return newNode(key); /* Otherwise, recur down the tree */ if (key < node.key) node.left = insert(node.left, key); else if (key > node.key) node.right = insert(node.right, key); /* return the (unchanged) node pointer */ return node;} // Driver Program to test above functions /* Let us create following BST 50 / \ 30 70 / \ / \ 20 40 60 80 */ var root = null; root = insert(root, 50); insert(root, 30); insert(root, 20); insert(root, 40); insert(root, 70); insert(root, 60); insert(root, 80); for (k=1; k<=7; k++) document.write(KSmallestUsingMorris(root, k) + " "); // This code contributed by Rajput-Ji</script>

Output: 20 30 40 50 60 70 80

Time Complexity: O(n), where n is the number of nodes in the BST.
Auxiliary Space: O(1), since Morris traversal only creates and then removes temporary links, using no stack or recursion.

This article is contributed by Abhishek Somani. Please write comments if you find anything incorrect, or if you want to share more information about the topic discussed above.
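For a quick sanity check, the approach above can be condensed into a short, self-contained Python sketch (function and variable names here are illustrative, not taken from the original listings):

```python
# Minimal BST node and insert, mirroring the tree built in the driver code.
class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(node, key):
    if node is None:
        return Node(key)
    if key < node.key:
        node.left = insert(node.left, key)
    elif key > node.key:
        node.right = insert(node.right, key)
    return node

def kth_smallest_morris(root, k):
    """Return the k'th smallest key using Morris inorder traversal.

    Uses O(1) auxiliary space: instead of a stack, it temporarily threads
    each node to its inorder successor and removes the thread afterwards.
    Returns None if the tree has fewer than k nodes.
    """
    count = 0
    ksmall = None
    curr = root
    while curr is not None:
        if curr.left is None:
            count += 1            # visit node with no left subtree
            if count == k:
                ksmall = curr.key
            curr = curr.right
        else:
            pre = curr.left       # find inorder predecessor
            while pre.right is not None and pre.right is not curr:
                pre = pre.right
            if pre.right is None:
                pre.right = curr  # create thread to inorder successor
                curr = curr.left
            else:
                pre.right = None  # remove thread, then visit curr
                count += 1
                if count == k:
                    ksmall = curr.key
                curr = curr.right
    return ksmall

root = None
for key in [50, 30, 20, 40, 70, 60, 80]:
    root = insert(root, key)
print([kth_smallest_morris(root, k) for k in range(1, 8)])
# -> [20, 30, 40, 50, 60, 70, 80]
```

Because the traversal always runs to completion, every temporary thread is removed and the tree is left exactly as it was, so the function can safely be called repeatedly on the same tree.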
node.key: node.right = insert(node.right, key) # return the (unchanged) node pointer return node # Driver Codeif __name__ == '__main__': # Let us create following BST # 50 # / \\ # 30 70 # / \\ / \\ # 20 40 60 80 root = None root = insert(root, 50) insert(root, 30) insert(root, 20) insert(root, 40) insert(root, 70) insert(root, 60) insert(root, 80) for k in range(1,8): print(KSmallestUsingMorris(root, k), end = \" \") # This code is contributed by PranchalK", "e": 10040, "s": 7055, "text": null }, { "code": "// C# program to find k'th largest element in BSTusing System; class GfG{ // A BST nodepublic class Node{ public int key; public Node left, right;} // A function to findstatic int KSmallestUsingMorris(Node root, int k){ // Count to iterate over elements till we // get the kth smallest number int count = 0; int ksmall = int.MinValue; // store the Kth smallest Node curr = root; // to store the current node while (curr != null) { // Like Morris traversal if current does // not have left child rather than printing // as we did in inorder, we will just // increment the count as the number will // be in an increasing order if (curr.left == null) { count++; // if count is equal to K then we found the // kth smallest, so store it in ksmall if (count==k) ksmall = curr.key; // go to current's right child curr = curr.right; } else { // we create links to Inorder Successor and // count using these links Node pre = curr.left; while (pre.right != null && pre.right != curr) pre = pre.right; // building links if (pre.right == null) { // link made to Inorder Successor pre.right = curr; curr = curr.left; } // While breaking the links in so made temporary // threaded tree we will check for the K smallest // condition else { // Revert the changes made in if part (break link // from the Inorder Successor) pre.right = null; count++; // If count is equal to K then we found // the kth smallest and so store it in ksmall if (count == k) ksmall = curr.key; curr = curr.right; } } } return 
ksmall; //return the found value} // A utility function to create a new BST nodestatic Node newNode(int item){ Node temp = new Node(); temp.key = item; temp.left = null; temp.right = null; return temp;} /* A utility function to insert a new node with given key in BST */static Node insert(Node node, int key){ /* If the tree is empty, return a new node */ if (node == null) return newNode(key); /* Otherwise, recur down the tree */ if (key < node.key) node.left = insert(node.left, key); else if (key > node.key) node.right = insert(node.right, key); /* return the (unchanged) node pointer */ return node;} // Driver Program to test above functionspublic static void Main(String[] args){ /* Let us create following BST 50 / \\ 30 70 / \\ / \\ 20 40 60 80 */ Node root = null; root = insert(root, 50); insert(root, 30); insert(root, 20); insert(root, 40); insert(root, 70); insert(root, 60); insert(root, 80); for (int k = 1; k <= 7; k++) Console.Write(KSmallestUsingMorris(root, k) + \" \"); }} // This code has been contributed by 29AjayKumar", "e": 13247, "s": 10040, "text": null }, { "code": "<script>// javascript program to find k'th largest element in BST // A BST node class Node { constructor() { this.key = 0; this.left = null; this.right = null; } } // A function to findfunction KSmallestUsingMorris(root , k){ // Count to iterate over elements till we // get the kth smallest number var count = 0; var ksmall = Number.MIN_VALUE; // store the Kth smallest var curr = root; // to store the current node while (curr != null) { // Like Morris traversal if current does // not have left child rather than printing // as we did in inorder, we will just // increment the count as the number will // be in an increasing order if (curr.left == null) { count++; // if count is equal to K then we found the // kth smallest, so store it in ksmall if (count==k) ksmall = curr.key; // go to current's right child curr = curr.right; } else { // we create links to Inorder Successor and // count using 
these links var pre = curr.left; while (pre.right != null && pre.right != curr) pre = pre.right; // building links if (pre.right== null) { //link made to Inorder Successor pre.right = curr; curr = curr.left; } // While breaking the links in so made temporary // threaded tree we will check for the K smallest // condition else { // Revert the changes made in if part (break link // from the Inorder Successor) pre.right = null; count++; // If count is equal to K then we found // the kth smallest and so store it in ksmall if (count==k) ksmall = curr.key; curr = curr.right; } } } return ksmall; //return the found value} // A utility function to create a new BST nodefunction newNode(item){ var temp = new Node(); temp.key = item; temp.left = null; temp.right = null; return temp;} /* A utility function to insert a new node with given key in BST */function insert(node , key){ /* If the tree is empty, return a new node */ if (node == null) return newNode(key); /* Otherwise, recur down the tree */ if (key < node.key) node.left = insert(node.left, key); else if (key > node.key) node.right = insert(node.right, key); /* return the (unchanged) node pointer */ return node;} // Driver Program to test above functions /* Let us create following BST 50 / \\ 30 70 / \\ / \\ 20 40 60 80 */ var root = null; root = insert(root, 50); insert(root, 30); insert(root, 20); insert(root, 40); insert(root, 70); insert(root, 60); insert(root, 80); for (k=1; k<=7; k++) document.write(KSmallestUsingMorris(root, k) + \" \"); // This code contributed by Rajput-Ji</script>", "e": 16467, "s": 13247, "text": null }, { "code": null, "e": 16489, "s": 16467, "text": "20 30 40 50 60 70 80 " }, { "code": null, "e": 16559, "s": 16489, "text": "Time Complexity: O(n) where n is the size of BSTAuxiliary Space: O(n)" }, { "code": null, "e": 16731, "s": 16559, "text": "This article is contributed by Abhishek Somani. 
Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above" }, { "code": null, "e": 16744, "s": 16731, "text": "prerna saini" }, { "code": null, "e": 16760, "s": 16744, "text": "PranchalKatiyar" }, { "code": null, "e": 16772, "s": 16760, "text": "29AjayKumar" }, { "code": null, "e": 16782, "s": 16772, "text": "Rajput-Ji" }, { "code": null, "e": 16795, "s": 16782, "text": "technophpfij" }, { "code": null, "e": 16812, "s": 16795, "text": "hardikkoriintern" }, { "code": null, "e": 16819, "s": 16812, "text": "Amazon" }, { "code": null, "e": 16826, "s": 16819, "text": "Google" }, { "code": null, "e": 16843, "s": 16826, "text": "Order-Statistics" }, { "code": null, "e": 16862, "s": 16843, "text": "Binary Search Tree" }, { "code": null, "e": 16869, "s": 16862, "text": "Amazon" }, { "code": null, "e": 16876, "s": 16869, "text": "Google" }, { "code": null, "e": 16895, "s": 16876, "text": "Binary Search Tree" }, { "code": null, "e": 16993, "s": 16895, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." 
}, { "code": null, "e": 17043, "s": 16993, "text": "A program to check if a binary tree is BST or not" }, { "code": null, "e": 17099, "s": 17043, "text": "Find postorder traversal of BST from preorder traversal" }, { "code": null, "e": 17169, "s": 17099, "text": "Overview of Data Structures | Set 2 (Binary Tree, BST, Heap and Hash)" }, { "code": null, "e": 17204, "s": 17169, "text": "Optimal Binary Search Tree | DP-24" }, { "code": null, "e": 17233, "s": 17204, "text": "Sorted Array to Balanced BST" }, { "code": null, "e": 17273, "s": 17233, "text": "Inorder Successor in Binary Search Tree" }, { "code": null, "e": 17310, "s": 17273, "text": "Convert a normal BST to Balanced BST" }, { "code": null, "e": 17342, "s": 17310, "text": "set vs unordered_set in C++ STL" }, { "code": null, "e": 17402, "s": 17342, "text": "Find k-th smallest element in BST (Order Statistics in BST)" } ]
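The linking-and-unlinking idea at the heart of the approach above can also be seen in isolation. The following is a minimal Python sketch of plain Morris inorder traversal (the `Node` class and `insert` helper are bare-bones stand-ins, not taken from the listings above); because the keys come out in sorted order, the k'th smallest is simply the k'th value yielded:

```python
class Node:
    # Minimal BST node used only for this sketch
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None


def morris_inorder(root):
    """Yield keys in sorted order using O(1) extra space.

    Temporarily links each node's inorder predecessor to the node
    (a "thread"), then removes the link once the left subtree is done.
    """
    curr = root
    while curr is not None:
        if curr.left is None:
            yield curr.key
            curr = curr.right
        else:
            # Find the rightmost node of the left subtree
            # (the inorder predecessor of curr)
            pre = curr.left
            while pre.right is not None and pre.right is not curr:
                pre = pre.right
            if pre.right is None:
                pre.right = curr      # create temporary thread
                curr = curr.left
            else:
                pre.right = None      # remove thread, restoring the tree
                yield curr.key
                curr = curr.right


def insert(node, key):
    # Standard (unbalanced) BST insert, for building a test tree
    if node is None:
        return Node(key)
    if key < node.key:
        node.left = insert(node.left, key)
    elif key > node.key:
        node.right = insert(node.right, key)
    return node


root = None
for k in [50, 30, 20, 40, 70, 60, 80]:
    root = insert(root, k)

keys = list(morris_inorder(root))
print(keys)  # [20, 30, 40, 50, 60, 70, 80]; the k'th smallest is keys[k - 1]
```

Stopping the generator after k values gives the same single-pass behaviour as the counting version above.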
PHP | range() Function
08 Mar, 2018

The range() function is an inbuilt function in PHP which is used to create an array of elements of any kind, such as integers or alphabets, within a given range (from low to high), i.e., the list's first element is considered as low and the last one as high.

Syntax:

array range(low, high, step)

Parameters: This function accepts three parameters as described below:

low: It will be the first value in the array generated by the range() function.
high: It will be the last value in the array generated by the range() function.
step: The increment used in the range; its default value is 1.

Return Value: It returns an array of elements from low to high.

Examples:

Input : range(0, 6)
Output : 0, 1, 2, 3, 4, 5, 6
Explanation: Here the range() function prints 0 to 6 because the parameters of the range function are 0 as low and 6 as high. As the parameter step is not passed, values in the array are incremented by 1.

Input : range(0, 100, 10)
Output : 0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100
Explanation: Here the range() function accepts the parameters 0, 100, 10, which are the values of low, high, step respectively, so it returns an array with elements starting from 0 to 100 incremented by 10.
Below programs illustrate the range() function in PHP:

Program 1:

<?php

// creating array with elements from 0 to 6
// using range function
$arr = range(0, 6);

// printing elements of array
foreach ($arr as $a) {
    echo "$a ";
}

?>

Output:
0 1 2 3 4 5 6

Program 2:

<?php

// creating array with elements from 0 to 100
// with difference of 20 between consecutive
// elements using range function
$arr = range(0, 100, 20);

// printing elements of array
foreach ($arr as $a) {
    echo "$a ";
}

?>

Output:
0 20 40 60 80 100

Program 3:

<?php

// creating array with elements from a to j
// using range function
$arr = range('a', 'j');

// printing elements of array
foreach ($arr as $a) {
    echo "$a ";
}

?>

Output:
a b c d e f g h i j

Program 4:

<?php

// creating array with elements from p to a
// in reverse order using range function
$arr = range('p', 'a');

// printing elements of array
foreach ($arr as $a) {
    echo "$a ";
}

?>

Output:
p o n m l k j i h g f e d c b a

Reference: http://php.net/manual/en/function.range.php
How to position a div at specific coordinates ?
24 Jul, 2019

Given an HTML document, the task is to position a <div> at specific coordinates on the web page using JavaScript. We're going to discuss a few techniques.

Approach:

First set the style.position property of the element.
Then set the style.top and style.left properties of the element, which we want to position.

Example 1: In this example, the DIV is positioned at the end of the document.

<!DOCTYPE HTML>
<html>

<head>
    <title>
        JavaScript | Position a DIV in a specific coordinates.
    </title>
    <style>
        #GFG_DIV {
            background: green;
            height: 100px;
            width: 200px;
            margin: 0 auto;
            color: white;
        }
    </style>
</head>

<body style="text-align:center;" id="body">
    <h1 style="color:green;">
        GeeksForGeeks
    </h1>
    <p id="GFG_UP" style="font-size: 19px; font-weight: bold;">
    </p>
    <div id="GFG_DIV">
        This is Div box.
    </div>
    <br>
    <button onClick="GFG_Fun()">
        click here
    </button>
    <p id="GFG_DOWN" style="color: green; font-size: 24px; font-weight: bold;">
    </p>
    <script>
        var el_up = document.getElementById("GFG_UP");
        var el_down = document.getElementById("GFG_DOWN");
        el_up.innerHTML = "Click on button to change" +
            " the position of the DIV.";

        function GFG_Fun() {
            var x = 370;
            var y = 250;
            var el = document.getElementById('GFG_DIV');
            el.style.position = "absolute";
            el.style.left = x + 'px';
            el.style.top = y + 'px';
            el_down.innerHTML = "Position of element is changed.";
        }
    </script>
</body>

</html>

Output:
Before clicking on the button:
After clicking on the button:

Example 2: In this example, the DIV is positioned at the top-left corner of the document.

<!DOCTYPE HTML>
<html>

<head>
    <title>
        JavaScript | Position a DIV in a specific coordinates.
    </title>
    <style>
        #GFG_DIV {
            background: green;
            height: 50px;
            width: 80px;
            margin: 0 auto;
            color: white;
        }
    </style>
</head>

<body style="text-align:center;" id="body">
    <h1 style="color:green;">
        GeeksForGeeks
    </h1>
    <p id="GFG_UP" style="font-size: 19px; font-weight: bold;">
    </p>
    <div id="GFG_DIV">
        This is Div box.
    </div>
    <br>
    <button onClick="GFG_Fun()">
        click here
    </button>
    <p id="GFG_DOWN" style="color: green; font-size: 24px; font-weight: bold;">
    </p>
    <script>
        var el_up = document.getElementById("GFG_UP");
        var el_down = document.getElementById("GFG_DOWN");
        el_up.innerHTML = "Click on button to change the position of the DIV.";

        function GFG_Fun() {
            var x = 0;
            var y = 0;
            var el = document.getElementById('GFG_DIV');
            el.style.position = "absolute";
            el.style.left = x + 'px';
            el.style.top = y + 'px';
            el_down.innerHTML = "Position of element is changed.";
        }
    </script>
</body>

</html>

Output:
Before clicking on the button:
After clicking on the button:
How to take integer input in Python?
20 Jun, 2022

In this post, we will see how to take integer input in Python. As we know, Python's built-in input() function always returns a str (string) class object. So for taking integer input, we have to type cast those inputs into integers by using Python's built-in int() function.

Let us see the examples:

Example 1:

Python3

# take input from user
input_a = input()

# print data type
print(type(input_a))

# type cast into integer
input_a = int(input_a)

# print data type
print(type(input_a))

Output:
100
<class 'str'>
<class 'int'>

Example 2:

Python3

# string input
input_a = input()

# print type
print(type(input_a))

# integer input
input_b = int(input())

# print type
print(type(input_b))

Output:
10
<class 'str'>
20
<class 'int'>

Example 3:

Python3

# take multiple inputs in array
input_str_array = input().split()

print("array:", input_str_array)

# take multiple inputs in array
input_int_array = [int(x) for x in input().split()]

print("array:", input_int_array)

Output:
10 20 30 40 50 60 70
array: ['10', '20', '30', '40', '50', '60', '70']
10 20 30 40 50 60 70
array: [10, 20, 30, 40, 50, 60, 70]

Example 4:

Python3

# Python program to take integer input in Python

# input size of the list
n = int(input("Enter the size of list : "))

# store integers in a list using map, split and strip functions
lst = list(map(int, input(
    "Enter the integer elements of list(Space-Separated): ").strip().split()))[:n]

# printing the list
print('The list is:', lst)

Output:
Enter the size of list : 4
Enter the integer elements of list(Space-Separated): 6 3 9 10
The list is: [6, 3, 9, 10]
Difference between URL and URI
04 Jul, 2022

URL (Uniform Resource Locator): A URL (Uniform Resource Locator) is a string of characters that points to an address. It is the most commonly used way to locate a resource on the web: it identifies the resource by describing its network location or primary access mechanism. The URL names the protocol used to retrieve the resource as well as the resource itself. A URL starts with http/https if the resource is a web resource, with ftp if the resource is a file, and with mailto if the resource is an email address. The syntax of a URL is shown below, where the first part specifies the protocol and the remainder identifies the resource (a domain name and a path or program name).

https://www.geeksforgeeks.org/minimum-cost-graph

Here, the domain name identifies the server (web service) and the program name gives the path to the directory and file on the server.

URI (Uniform Resource Identifier): Similar to a URL, a URI (Uniform Resource Identifier) is also a string of characters that identifies a resource on the web, either by location, by name, or by both; it allows uniform identification of resources. A URI can be classified as a locator, a name, or both, which means it can describe a URL, a URN, or both. The term identifier within URI refers to naming the resource, regardless of the technique used. A URL is the sub-category of URI in which a protocol specifies how the resource is accessed and the resource name is part of the identifier; a URL is a non-persistent form of the URI. A URN, in contrast, is required to be globally unique and has global scope.

Difference between URL and URI:

sayanc170
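To make the URL anatomy above concrete, Python's standard urllib.parse module (used here only as an illustration; it is not part of the article's original code) splits a URL into exactly these components:

```python
from urllib.parse import urlparse

# Split the example URL into its components: the scheme (protocol)
# comes first, then the network location (domain name), then the path.
parts = urlparse("https://www.geeksforgeeks.org/minimum-cost-graph")

print(parts.scheme)  # https
print(parts.netloc)  # www.geeksforgeeks.org
print(parts.path)    # /minimum-cost-graph
```

The scheme corresponds to the protocol part of the URL, and netloc plus path correspond to the resource part described above.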
[ { "code": null, "e": 52, "s": 24, "text": "\n04 Jul, 2022" }, { "code": null, "e": 85, "s": 52, "text": "URL (Uniform Resource Locator): " }, { "code": null, "e": 391, "s": 85, "text": "URL (Uniform Resource Locator) is often defined as a string of characters that is directed to an address. It is a very commonly used way to locate resources on the web. It provides a way to retrieve the presentation of the physical location by describing its network location or primary access mechanism. " }, { "code": null, "e": 880, "s": 391, "text": "The protocol is described within the URL which is employed to retrieve the resource and resource name. The URL contains http/https at the start if the resource may be a web type resource. Similarly, it begins with ftp if the resource may be a file and mailto if the resource is an email address. The syntax of an URL is shown below where the primary part is employed for protocol and the remainder of the part is employed for the resource which consists of a website name or program name." }, { "code": null, "e": 929, "s": 880, "text": "https://www.geeksforgeeks.org/minimum-cost-graph" }, { "code": null, "e": 1052, "s": 929, "text": "Here, the domain name describes the server (web service) and program name (path to the directory and file on the server). " }, { "code": null, "e": 1088, "s": 1052, "text": "URI (Uniform Resource Identifier): " }, { "code": null, "e": 1515, "s": 1088, "text": "Similar to URL, URI (Uniform Resource Identifier) is also a string of characters that identifies a resource on the web either by using location, name or both. It allows uniform identification of the resources. A URI is additionally grouped as a locator, a name or both which suggests it can describe a URL, URN or both. The term identifier within the URI refers to the prominence of the resources, despite the technique used. 
" }, { "code": null, "e": 1808, "s": 1515, "text": "The former category in URI is URL, during which a protocol is employed to specify the accessing method of the resource and resource name is additionally laid out in the URL. A URL may be a non-persistent sort of the URI. A URN is required to exist globally unique and features a global scope." }, { "code": null, "e": 1843, "s": 1811, "text": "Difference between URL and URI:" }, { "code": null, "e": 1853, "s": 1843, "text": "sayanc170" }, { "code": null, "e": 1877, "s": 1853, "text": "Technical Scripter 2019" }, { "code": null, "e": 1894, "s": 1877, "text": "Web technologies" }, { "code": null, "e": 1912, "s": 1894, "text": "Computer Networks" }, { "code": null, "e": 1931, "s": 1912, "text": "Difference Between" }, { "code": null, "e": 1950, "s": 1931, "text": "Technical Scripter" }, { "code": null, "e": 1968, "s": 1950, "text": "Computer Networks" }, { "code": null, "e": 2066, "s": 1968, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 2098, "s": 2066, "text": "Differences between TCP and UDP" }, { "code": null, "e": 2124, "s": 2098, "text": "Types of Network Topology" }, { "code": null, "e": 2154, "s": 2124, "text": "RSA Algorithm in Cryptography" }, { "code": null, "e": 2192, "s": 2154, "text": "TCP Server-Client implementation in C" }, { "code": null, "e": 2221, "s": 2192, "text": "Socket Programming in Python" }, { "code": null, "e": 2261, "s": 2221, "text": "Class method vs Static method in Python" }, { "code": null, "e": 2292, "s": 2261, "text": "Difference between BFS and DFS" }, { "code": null, "e": 2353, "s": 2292, "text": "Difference between var, let and const keywords in JavaScript" }, { "code": null, "e": 2421, "s": 2353, "text": "Difference Between Method Overloading and Method Overriding in Java" } ]
Python | cmp() function
24 Apr, 2020

The cmp() method in Python 2.x compares two integers and returns -1, 0, or 1 according to the comparison. cmp() does not work in Python 3.x. You might want to see list comparison in Python.

Syntax:
cmp(a, b)
Parameters:
a and b are the two numbers in which the comparison is being done.
Returns:
-1 if a<b
0 if a==b
1 if a>b

# Python program to demonstrate the
# use of cmp() method

# when a<b
a = 1
b = 2
print(cmp(a, b))

# when a == b
a = 2
b = 2
print(cmp(a, b))

# when a>b
a = 3
b = 2
print(cmp(a, b))

Output:

-1
0
1

Practical Application: Program to check if a number is even or odd using the cmp function.

Approach: Compare 0 and n % 2; if cmp returns 0, the number is even, else it is odd.

Below is the Python 2.x implementation of the above program:

# Python program to check if a number is
# odd or even using cmp function

# check 12
n = 12
if cmp(0, n % 2):
    print "odd"
else:
    print "even"

# check 13
n = 13
if cmp(0, n % 2):
    print "odd"
else:
    print "even"

Output:

even
odd

ShubhamVm
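Since cmp() was removed in Python 3, code that needs the same -1/0/1 result usually defines its own helper. A minimal sketch (the helper name cmp is chosen here to match the article; it is not built in on Python 3):

```python
def cmp(a, b):
    # Standard Python 3 idiom for the removed built-in:
    # (a > b) evaluates to 1 or 0, (a < b) evaluates to 1 or 0,
    # so their difference is -1, 0, or 1.
    return (a > b) - (a < b)

print(cmp(1, 2))  # -1
print(cmp(2, 2))  # 0
print(cmp(3, 2))  # 1
```

With this helper in place, the even/odd example above runs unchanged on Python 3 (apart from the print statements).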
[ { "code": null, "e": 53, "s": 25, "text": "\n24 Apr, 2020" }, { "code": null, "e": 231, "s": 53, "text": "cmp() method in Python 2.x compares two integers and returns -1, 0, 1 according to comparison.cmp() does not work in python 3.x. You might want to see list comparison in Python." }, { "code": null, "e": 369, "s": 231, "text": "Syntax:\ncmp(a, b)\nParameters:\na and b are the two numbers in which the comparison is being done. \nReturns:\n-1 if a<b\n\n0 if a=b\n\n1 if a>b\n" }, { "code": "# Python program to demonstrate the # use of cmp() method # when a<ba = 1 b = 2 print(cmp(a, b)) # when a = b a = 2b = 2 print(cmp(a, b)) # when a>b a = 3b = 2 print(cmp(a, b))", "e": 553, "s": 369, "text": null }, { "code": null, "e": 561, "s": 553, "text": "Output:" }, { "code": null, "e": 570, "s": 561, "text": "-1\n0 \n1\n" }, { "code": null, "e": 657, "s": 570, "text": "Practical Application: Program to check if a number is even or odd using cmp function." }, { "code": null, "e": 734, "s": 657, "text": "Approach: Compare 0 and n%2, if it returns 0, then it is even, else its odd." }, { "code": null, "e": 791, "s": 734, "text": "Below is the Python implementation of the above program:" }, { "code": "# Python program to check if a number is # odd or even using cmp function # check 12 n = 12 if cmp(0, n % 2): print\"odd\"else: print\"even\" # check 13 n = 13 if cmp(0, n % 2): print\"odd\"else: print\"even\" ", "e": 1031, "s": 791, "text": null }, { "code": null, "e": 1039, "s": 1031, "text": "Output:" }, { "code": null, "e": 1049, "s": 1039, "text": "even\nodd\n" }, { "code": null, "e": 1054, "s": 1049, "text": "PK 1" }, { "code": null, "e": 1064, "s": 1054, "text": "ShubhamVm" }, { "code": null, "e": 1090, "s": 1064, "text": "Python-Built-in-functions" }, { "code": null, "e": 1097, "s": 1090, "text": "Python" } ]
MySQL - SAVEPOINT Statement
A save point is a logical rollback point within a transaction. After you set a save point, if an error occurs later in the transaction, you can roll back to the save point, undoing any changes made after it.

MySQL InnoDB supports the statements SAVEPOINT, ROLLBACK TO SAVEPOINT, and RELEASE SAVEPOINT.

The SAVEPOINT statement sets a save point for the transaction with the specified name. If a save point with the given name already exists, the old one is deleted. Following is the syntax of the MySQL SAVEPOINT statement −

SAVEPOINT identifier

By default, MySQL commits the changes made by each statement as soon as it executes. To keep changes pending until you commit them explicitly, disable the autocommit option as shown below −

SET autocommit=0;

Assume we have created a table in MySQL with the name EMP as shown below −

mysql> CREATE TABLE EMP(
   FIRST_NAME CHAR(20) NOT NULL,
   LAST_NAME CHAR(20),
   AGE INT,
   SEX CHAR(1),
   INCOME FLOAT);
Query OK, 0 rows affected (0.36 sec)

Let us insert 3 records into it using an INSERT statement as −

mysql> INSERT INTO EMP VALUES
   ('Krishna', 'Sharma', 19, 'M', 2000),
   ('Raj', 'Kandukuri', 20, 'M', 7000),
   ('Ramya', 'Ramapriya', 25, 'F', 5000);
Query OK, 3 rows affected (0.49 sec)
Records: 3 Duplicates: 0 Warnings: 0

The following transaction updates the age values of all the employees in the EMP table, then rolls back an insert made after the save point −

START TRANSACTION;

SELECT * FROM EMP;
UPDATE EMP SET AGE = AGE + 1;

SAVEPOINT samplesavepoint;

INSERT INTO EMP VALUES ('Mac', 'Mohan', 26, 'M', 2000);

ROLLBACK TO SAVEPOINT samplesavepoint;
COMMIT;

If you retrieve the contents of the table, you can see the updated age values, while the row inserted after the save point is gone −

mysql> SELECT * FROM EMP;
+------------+-----------+------+------+--------+
| FIRST_NAME | LAST_NAME | AGE  | SEX  | INCOME |
+------------+-----------+------+------+--------+
| Krishna    | Sharma    |   20 | M    |   2000 |
| Raj        | Kandukuri |   21 | M    |   7000 |
| Ramya      | Ramapriya |   26 | F    |   5000 |
+------------+-----------+------+------+--------+
3 rows in set (0.07 sec)
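The same SAVEPOINT / ROLLBACK TO pattern can be exercised from Python without a MySQL server, since SQLite supports the same statements. A minimal sketch using Python's built-in sqlite3 module (SQLite stands in for MySQL here purely to demonstrate the pattern):

```python
import sqlite3

# isolation_level=None puts the connection in autocommit mode,
# so the SQL below controls the transaction explicitly.
con = sqlite3.connect(":memory:", isolation_level=None)
cur = con.cursor()
cur.execute("CREATE TABLE emp (first_name TEXT, age INTEGER)")

cur.execute("BEGIN")
cur.execute("INSERT INTO emp VALUES ('Krishna', 19)")
cur.execute("SAVEPOINT samplesavepoint")
cur.execute("INSERT INTO emp VALUES ('Mac', 26)")     # made after the save point
cur.execute("ROLLBACK TO SAVEPOINT samplesavepoint")  # undoes only the insert above
cur.execute("COMMIT")

# Only the row inserted before the save point survives.
print(cur.execute("SELECT first_name FROM emp").fetchall())  # [('Krishna',)]
```

As in the MySQL transcript above, work done before the save point is kept by the COMMIT, while work done after it is discarded by ROLLBACK TO SAVEPOINT.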
[ { "code": null, "e": 2654, "s": 2441, "text": "A save point is a logical rollback point within a transaction. When you set a save point, whenever an error occurs past a save point, you can undo the events you have done up to the save point using the rollback." }, { "code": null, "e": 2756, "s": 2654, "text": "MySQL InnoDB provides support for the statements SAVEPOINT, ROLLBACK TO SAVEPOINT, RELEASE SAVEPOINT." }, { "code": null, "e": 2933, "s": 2756, "text": "The SAVEPOINT statement is used to set a save point for the transaction with the specified name. If a save point with the given name already exists the old one will be deleted." }, { "code": null, "e": 2992, "s": 2933, "text": "Following is the syntax of the MySQL SAVEPOINT statement −" }, { "code": null, "e": 3014, "s": 2992, "text": "SAVEPOINT identifier\n" }, { "code": null, "e": 3156, "s": 3014, "text": "MySQL saves the changes done after the execution of each statement. To save changes automatically, set the autocommit option as shown below −" }, { "code": null, "e": 3175, "s": 3156, "text": "SET autocommit=0;\n" }, { "code": null, "e": 3252, "s": 3175, "text": "Assume we have created a table in MySQL with name EMPLOYEES as shown below −" }, { "code": null, "e": 3416, "s": 3252, "text": "mysql> CREATE TABLE EMP(\n FIRST_NAME CHAR(20) NOT NULL,\n LAST_NAME CHAR(20),\n AGE INT,\n SEX CHAR(1),\n INCOME FLOAT);\nQuery OK, 0 rows affected (0.36 sec)" }, { "code": null, "e": 3478, "s": 3416, "text": "Let us insert 4 records in to it using INSERT statements as −" }, { "code": null, "e": 3705, "s": 3478, "text": "mysql> INSERT INTO EMP VALUES\n ('Krishna', 'Sharma', 19, 'M', 2000),\n ('Raj', 'Kandukuri', 20, 'M', 7000),\n ('Ramya', 'Ramapriya', 25, 'F', 5000);\nQuery OK, 3 rows affected (0.49 sec)\nRecords: 3 Duplicates: 0 Warnings: 0" }, { "code": null, "e": 3791, "s": 3705, "text": "Following transaction updates, the age values of all the employees in the emp table −" }, { "code": null, "e": 3986, "s": 3791, 
"text": "START TRANSACTION;\n\nSELECT * FROM EMP;\nUPDATE EMP SET AGE = AGE + 1;\n\nSAVEPOINT samplesavepoint;\n\nINSERT INTO EMP ('Mac', 'Mohan', 26, 'M', 2000);\n\nROLLBACK TO SAVEPOINT samplesavepoint;\nCOMMIT;" }, { "code": null, "e": 4065, "s": 3986, "text": "If you retrieve the contents of the table, you can see the updated values as −" } ]
Simple MLOps with Amazon SageMaker, Lambda and AWS Step Functions Data Science SDK | by Stefan Natu | Towards Data Science
By Stefan Natu, Shreyas Subramanian, and Qingwei Li

As the machine learning space matures, there is an increasing need for simple ways to automate and deploy ML pipelines into production. With the explosion of data science platforms, companies and teams often use diverse machine learning platforms for data exploration, Extract-Transform-Load (ETL) jobs, model training, and deployment. In this blog, we describe how users can bring their own algorithm code to build a training and inference image using Docker, then train and host their model using Amazon SageMaker and AWS Step Functions. We use Mask R-CNN, a highly popular instance segmentation model used for numerous computer vision use cases [1]. Readers who wish to get hands-on with the contents of this blog can refer to our GitHub [2].

AWS CodeBuild: AWS CodeBuild is a fully managed continuous integration (CI) service that allows users to compile and package code into deployable artifacts. Here we will use CodeBuild to package our custom Mask R-CNN container into a Docker image, which we upload into Amazon Elastic Container Registry (ECR).

AWS Lambda: AWS Lambda is a service that lets you run code without provisioning servers. Here we will use AWS Lambda to deploy a CodeBuild job.

AWS Step Functions: AWS Step Functions is an orchestration tool that allows users to build pipelines and coordinate microservices into a workflow. AWS Step Functions can then trigger that workflow in an event-driven manner without having the user manage any underlying servers or compute.

Amazon SageMaker: Amazon SageMaker is a fully managed machine learning platform for building, training, and deploying machine learning models. Here we use Amazon SageMaker to author training and model deployment jobs, as well as SageMaker Jupyter notebooks to author a Step Functions workflow.

Because we run the same image in training or hosting, Amazon SageMaker runs your container with the argument train or serve.
When Amazon SageMaker runs training, your train script is run just like a regular Python program. Hosting has a very different model than training because hosting responds to inference requests that come in via HTTP. In this example, we use our recommended Python serving stack to provide robust and scalable serving of inference requests:

In the `container` directory are all the components you need to package the sample algorithm for Amazon SageMaker.

├── Dockerfile
├── build_and_push.sh
└── mask_r_cnn
    ├── nginx.conf
    ├── predictor.py
    ├── serve
    ├── wsgi.py
    ├── transforms.py
    ├── utils.py
    ├── coco_eval.py
    ├── coco_utils.py
    ├── engine.py
    └── helper.py

Let's discuss each of these in turn:

Dockerfile describes how to build your Docker container image. More details below.

build_and_push.sh is a script that uses the Dockerfile to build your container image and then pushes it to ECR. We'll invoke the commands directly later in this notebook, but you can just copy and run the script for your own algorithms.

mask_r_cnn is the directory which contains the files that will be installed in the container.

The main files that we'll put in the container are:

nginx.conf is the configuration file for the nginx front-end. Generally, you should be able to take this file as-is.

predictor.py is the program that actually implements the Flask web server and the model predictions for this app.

serve is the program started when the container is started for hosting. It simply launches the gunicorn server, which runs multiple instances of the Flask app defined in predictor.py. You should be able to take this file as-is.

train is the program that is invoked when the container is run for training.

wsgi.py is a small wrapper used to invoke the Flask app. You should be able to take this file as-is.
We have customized train.py and predictor.py for fine-tuning Mask R-CNN during training, and for loading the tuned model, deserializing request data, making predictions, and sending back the serialized results.

The Dockerfile describes the image that we want to build. We start from a standard Ubuntu installation and run the normal tools to install the things we need, such as Python, torch, torchvision, and Pillow. Finally, we add the code that implements our specific algorithm to the container and set up the right environment for it to run under. The Dockerfile looks like below:

FROM ubuntu:16.04
MAINTAINER Amazon AI <sage-learner@amazon.com>

RUN apt-get -y update && apt-get install -y --no-install-recommends \
         wget \
         gcc \
         g++ \
         python3 \
         python3-dev \
         nginx \
         ca-certificates \
    && rm -rf /var/lib/apt/lists/*

RUN wget https://bootstrap.pypa.io/get-pip.py && python3 get-pip.py && \
    pip install cython numpy==1.16.2 scipy==1.2.1 pandas flask gevent gunicorn && \
        (cd /usr/local/lib/python3.5/dist-packages/scipy/.libs; rm *; ln ../../numpy/.libs/* .) && \
        rm -rf /root/.cache

RUN pip install torch torchvision fastai thinc Pillow

ENV PYTHONUNBUFFERED=TRUE
ENV PYTHONDONTWRITEBYTECODE=TRUE
ENV PATH="/opt/program:${PATH}"

# Set up the program in the image
COPY mask_r_cnn /opt/program
WORKDIR /opt/program

For building SageMaker-ready containers, such as the one discussed above for Mask R-CNN, we use the open source “SageMaker Containers” project, which can be found at https://github.com/aws/sagemaker-containers. SageMaker Containers gives you tools to create SageMaker-compatible Docker containers, and has additional tools for letting you create Frameworks (SageMaker-compatible Docker containers that can run arbitrary Python or shell scripts). Currently, this library is used by the following containers: TensorFlow Script Mode, MXNet, PyTorch, Chainer, and Scikit-learn.

To create a SageMaker-compatible container, we require the following components:

1. a train.py file with your training code, and
2. a Dockerfile, such as the one above.

The training script must be located under the folder /opt/ml/code and its relative path is defined in the environment variable SAGEMAKER_PROGRAM. The following scripts are supported:

Python scripts: uses the Python interpreter for any script with a .py suffix
Shell scripts: uses the Shell interpreter to execute any other script

When training starts, the interpreter executes the entry point, from the example above:

python train.py

For more information on hyper-parameters and environment variables, please refer to https://github.com/aws/sagemaker-containers#id10.

We will use a CloudFormation template to automate the container build of our Mask R-CNN. The template sets up the following architecture.

The Lambda function contains the train.py file and Dockerfile, which can be edited inline. Once triggered (manually, or through a Step Functions approach as shown in the next section), the Lambda function:

1. Creates an ECR repository, if it doesn't already exist, to store the container images once built
2. Uploads the train.py and Dockerfile to an S3 bucket
3. Creates a CodeBuild project and uses the above files with a buildspec.yml to start the process to build a container and push the image to ECR.

The Lambda function also contains useful environment variables that can be reconfigured for new builds. Once the Lambda function is set up, we are now ready to build out our automation pipeline to train and deploy the model to an endpoint.
For this, we will use AWS Step Functions, an orchestration tool that lets users author state machines as JSON objects and execute them without provisioning or managing any servers. Step Functions also now provides a Data Science Python SDK for authoring machine learning pipelines using Python in a familiar Jupyter notebook environment. We refer the reader to the Step Functions GitHub repository to get started [3]. Here we simply demonstrate the key components of the pipeline, which are described in detail in our GitHub [2]. To author the code, we will use Amazon SageMaker, AWS's fully managed machine learning platform.

As with all AWS services, we first need to grant the appropriate IAM permissions for one service to call another. Here we need to allow Amazon SageMaker to call Step Functions APIs, and AWS Step Functions to call SageMaker for model training and for endpoint creation and deployment. Detailed instructions on how to set up proper IAM credentials are described in [3].

Once this is set up, we will first create a Lambda state to run the Lambda function that takes the code and deploys it as a container hosted in ECR.

lambda_state = LambdaStep(
    state_id="Calls CodeBuild to Build Container",
    parameters={
        "FunctionName": "Docker_Lambda",  # replace with the name of the Lambda function you created
        "Payload": {
            "input": "HelloWorld"
        }
    })

lambda_state.add_retry(Retry(
    error_equals=["States.TaskFailed"],
    interval_seconds=15,
    max_attempts=2,
    backoff_rate=4.0))

lambda_state.add_catch(Catch(
    error_equals=["States.TaskFailed"],
    next_step=Fail("LambdaTaskFailed")))

Retry and Catch steps here are added for error handling. You can modify these with your custom error-handling steps, but this lets Step Functions know to end the workflow if the Lambda function fails to deploy the container. The next steps are to chain together the training job and the model creation from the trained artifacts.
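Before moving on, the backoff arithmetic behind the Retry step above is worth spelling out: the first retry waits interval_seconds, and each subsequent wait is multiplied by backoff_rate. A small sketch (retry_waits is a hypothetical helper written for illustration; it is not part of the Step Functions SDK):

```python
def retry_waits(interval_seconds, max_attempts, backoff_rate):
    # Wait (in seconds) before each retry attempt:
    # interval_seconds * backoff_rate ** (attempt_number - 1)
    return [interval_seconds * backoff_rate ** n for n in range(max_attempts)]

# With interval_seconds=15, max_attempts=2, backoff_rate=4.0 as above,
# Step Functions waits 15 s before the first retry and 60 s before the second.
print(retry_waits(15, 2, 4.0))  # [15.0, 60.0]
```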
Fortunately, the Step Functions Data Science SDK provides the logic and APIs to chain these steps together, with any custom branching logic that may be required.

train_step = TrainingStep('Train Step',
    estimator=maskrcnn,
    data=os.path.dirname(data_location),
    job_name=execution_input['JobName'])

model_step = ModelStep('Save model',
    model=train_step.get_expected_model(),
    model_name=execution_input['ModelName'])

endpoint_config_step = EndpointConfigStep(
    "Create Endpoint Config",
    endpoint_config_name=execution_input['ModelName'],
    model_name=execution_input['ModelName'],
    initial_instance_count=1,
    instance_type='ml.m5.large')

endpoint_step = EndpointStep("Create Endpoint",
    endpoint_name=execution_input['EndpointName'],
    endpoint_config_name=execution_input['ModelName'])

Here we use our Mask R-CNN estimator, a general SageMaker Estimator object that lets us specify the type of instance we want to train our model on, identify any network or security settings if needed, set model hyperparameters, and define the output paths for the models. The input to the estimator is the container image created by our Lambda function above.

maskrcnn = sagemaker.estimator.Estimator(image,
    role, 1, 'ml.p2.xlarge',  # feel free to modify with your own. A cost estimate is provided in the Readme.
    output_path="s3://{}/{}/output".format(sess.default_bucket(), key),
    sagemaker_session=sess)

maskrcnn.set_hyperparameters(num_epochs=1, num_classes=2)

By using the Chain utility, we can chain all the above steps together to occur sequentially. We can then choose to output the entire workflow as a JSON object, which can be used in a much larger CloudFormation template, for example, that also covers the provisioning of instances, the setup of network security, etc., or run on its own. By creating the workflow and rendering the graph, a state machine will be created in the Amazon Step Functions console.
workflow_definition = Chain([
    lambda_state,
    train_step,
    model_step,
    endpoint_config_step,
    endpoint_step])

# Next, we define the workflow
workflow = Workflow(
    name="MyWorkflow-BYOC-MaskRCNN-{}".format(uuid.uuid1().hex),
    definition=workflow_definition,
    role=workflow_execution_role)

workflow.render_graph()

This renders a graph of the workflow and completes our Step Functions workflow. We can now execute this workflow directly from Amazon SageMaker and check its progress using the workflow.execute and workflow.render_progress() APIs, or from the Step Functions console directly. Executions are logged in CloudWatch and can be used to send alerts and notifications to users in a downstream system.

With the AWS Step Functions Data Science SDK, data scientists and engineers can seamlessly deploy custom containers, train ML models, and deploy them into production. For integrating other AWS big data tools such as EMR and AWS Glue into Step Functions workflows, we refer the reader to [4, 5].

[1] He, K., Gkioxari, G., Dollar, P., and Girshick, R., Mask R-CNN, https://arxiv.org/abs/1703.06870
[2] https://github.com/aws-samples/aws-stepfunctions-byoc-mlops-using-data-science-sdk
[3] https://github.com/awslabs/amazon-sagemaker-examples/tree/master/step-functions-data-science-sdk
[4] https://medium.com/@elesin.olalekan/automating-machine-learning-workflows-with-aws-glue-sagemaker-and-aws-step-functions-data-science-b4ed59e4d7f9
[5] https://aws.amazon.com/blogs/aws/new-using-step-functions-to-orchestrate-amazon-emr-workloads/
[ { "code": null, "e": 224, "s": 172, "text": "By Stefan Natu, Shreyas Subramanian, and Qingwei Li" }, { "code": null, "e": 980, "s": 224, "text": "As the machine learning space matures, there is an increasing need for simple ways to automate and deploy ML pipelines into production. With the explosion of data science platforms, companies and teams are often using diverse machine learning platforms for data exploration, Extract-Transform-Load (ETL) jobs, model training and deployment. In this blog, we describe how users can bring their own algorithm code to build a training and inference image using Docker, train and host their model using Amazon SageMaker and AWS StepFunctions. We use Mask R-CNN, a highly popular instance segmentation model used for numerous computer vision use cases as such as [1]. Readers who wish to get hands-on with the contents of this blog can refer to our Github [2]." }, { "code": null, "e": 2013, "s": 980, "text": "AWS CodeBuild: AWS CodeBuild is a fully managed continuous integration (CI) service that allows users to compile and package code into deployable artifacts. Here we will use CodeBuild to package our custom Mask R-CNN container into a Docker image, which we upload into Amazon Elastic Container Registry (ECR) AWS Lambda: AWS Lambda is a service that lets you run code without provisioning servers. Here we will use AWS Lambda to deploy a CodeBuild job. AWS StepFunctions: AWS StepFuntions is an orchestration tool that allows users to build pipelines and coordinate microservices into a workflow. AWS StepFunctions can then trigger that workflow in an event driven manner without having the user manage any underlying servers or compute. Amazon SageMaker: Amazon SageMaker is a fully managed machine learning platform for building, training and deploying machine learning models. Here we use Amazon SageMaker to author training and model deployment jobs, as well as SageMaker Jupyter notebooks to author a StepFunctions workflow." 
}, { "code": null, "e": 2483, "s": 2013, "text": "Because we run the same image in training or hosting, Amazon SageMaker runs your container with the argument train or serve. When Amazon SageMaker runs training, your train script is run just like a regular Python program. Hosting has a very different model than training because hosting is responding to inference requests that come in via HTTP. In this example, we use our recommended Python serving stack to provide robust and scalable serving of inference requests:" }, { "code": null, "e": 2598, "s": 2483, "text": "In the `container` directory are all the components you need to package the sample algorithm for Amazon SageMager." }, { "code": null, "e": 2874, "s": 2598, "text": "├── Dockerfile ├── build_and_push.sh └── mask_r_cnn ├── nginx.conf ├── predictor.py ├── serve ├── wsgi.py ├── transforms.py ├── utils.py ├── coco_eval.py ├── coco_utils.py ├── engine.py └── helper.py" }, { "code": null, "e": 2911, "s": 2874, "text": "Let's discuss each of these in turn:" }, { "code": null, "e": 2994, "s": 2911, "text": "Dockerfile describes how to build your Docker container image. More details below." }, { "code": null, "e": 3232, "s": 2994, "text": "build_and_push.sh is a script that uses the Dockerfile to build your container images and then pushes it to ECR. We'll invoke the commands directly later in this notebook, but you can just copy and run the script for your own algorithms." }, { "code": null, "e": 3326, "s": 3232, "text": "mask_r_cnn is the directory which contains the files that will be installed in the container." }, { "code": null, "e": 3446, "s": 3326, "text": "In this simple application, we only install five files in the container. The files that we'll put in the container are:" }, { "code": null, "e": 3563, "s": 3446, "text": "nginx.conf is the configuration file for the nginx front-end. Generally, you should be able to take this file as-is." 
}, { "code": null, "e": 3685, "s": 3563, "text": "predictor.py is the program that actually implements the Flask web server and the decision tree predictions for this app." }, { "code": null, "e": 3912, "s": 3685, "text": "serve is the program started when the container is started for hosting. It simply launches the gunicorn server which runs multiple instances of the Flask app defined in predictor.py. You should be able to take this file as-is." }, { "code": null, "e": 3989, "s": 3912, "text": "train is the program that is invoked when the container is run for training." }, { "code": null, "e": 4090, "s": 3989, "text": "wsgi.py is a small wrapper used to invoke the Flask app. You should be able to take this file as-is." }, { "code": null, "e": 4296, "s": 4090, "text": "We have customized train.py and predictor.py for fine tuning Mask R-CNN during training, and for loading tuned model, deserializing request data, making prediction, and sending back the serialized results." }, { "code": null, "e": 4667, "s": 4296, "text": "The Dockerfile describes the image that we want to build. We will start from a standard Ubuntu installation and run the normal tools to install the things needed such as python, torch, torchvision, and Pillow. Finally, we add the code that implements our specific algorithm to the container and set up the right environment to run under. The Dockerfile looks like below," }, { "code": null, "e": 5462, "s": 4667, "text": "FROM ubuntu:16.04MAINTAINER Amazon AI <sage-learner@amazon.com>RUN apt-get -y update && apt-get install -y --no-install-recommends \\ wget \\ gcc\\ g++\\ python3 \\ python3-dev\\ nginx \\ ca-certificates \\ && rm -rf /var/lib/apt/lists/*RUN wget https://bootstrap.pypa.io/get-pip.py && python3 get-pip.py && \\ pip install cython numpy==1.16.2 scipy==1.2.1 pandas flask gevent gunicorn && \\ (cd /usr/local/lib/python3.5/dist-packages/scipy/.libs; rm *; ln ../../numpy/.libs/* .) 
&& \\ rm -rf /root/.cacheRUN pip install torch torchvision fastai thinc PillowENV PYTHONUNBUFFERED=TRUEENV PYTHONDONTWRITEBYTECODE=TRUEENV PATH=\"/opt/program:${PATH}\"# Set up the program in the imageCOPY mask_r_cnn /opt/programWORKDIR /opt/program" }, { "code": null, "e": 6118, "s": 5462, "text": "For building SageMaker-ready containers, such as the one discussed above for Mask R-CNN, we use the open source “SageMaker Containers” project which can be found here https://github.com/aws/sagemaker-containers. SageMaker Containers gives you tools to create SageMaker-compatible Docker containers, and has additional tools for letting you create Frameworks (SageMaker-compatible Docker containers that can run arbitrary Python or shell scripts). Currently, this library is used by the following containers: TensorFlow Script Mode, MXNet, PyTorch, Chainer, and Scikit-learn. To create a Sagemaker compatible container, we require the following components:" }, { "code": null, "e": 6194, "s": 6118, "text": "train.py file with your training code, andDockerfile, such as the one above" }, { "code": null, "e": 6237, "s": 6194, "text": "train.py file with your training code, and" }, { "code": null, "e": 6271, "s": 6237, "text": "Dockerfile, such as the one above" }, { "code": null, "e": 6454, "s": 6271, "text": "The training script must be located under the folder /opt/ml/code and its relative path is defined in the environment variable SAGEMAKER_PROGRAM. 
The following scripts are supported:" }, { "code": null, "e": 6529, "s": 6454, "text": "Python scripts: uses the Python interpreter for any script with .py suffix" }, { "code": null, "e": 6599, "s": 6529, "text": "Shell scripts: uses the Shell interpreter to execute any other script" }, { "code": null, "e": 6687, "s": 6599, "text": "When training starts, the interpreter executes the entry point, from the example above:" }, { "code": null, "e": 6703, "s": 6687, "text": "python train.py" }, { "code": null, "e": 6837, "s": 6703, "text": "For more information on hyper-parameters and environment variables, please refer to https://github.com/aws/sagemaker-containers#id10." }, { "code": null, "e": 6976, "s": 6837, "text": "We will use a Cloud formation template to automate the container build of our Mask R-CNN. The template sets up the following architecture:" }, { "code": null, "e": 7180, "s": 6976, "text": "The Lambda function contains the train.py file and Dockerfile that can be edited inline. Once triggered (manually, or through a step functions approach as shown in the next section), the Lambda function:" }, { "code": null, "e": 7466, "s": 7180, "text": "Creates an ECR repository ,if it doesn’t already exist, to store the container images once builtUploads the train.py and Dockerfile to an S3 bucketCreates a Codebuild project and uses the above files with a buildspec.yml to start the process to build a container push the image to ECR." }, { "code": null, "e": 7563, "s": 7466, "text": "Creates an ECR repository ,if it doesn’t already exist, to store the container images once built" }, { "code": null, "e": 7615, "s": 7563, "text": "Uploads the train.py and Dockerfile to an S3 bucket" }, { "code": null, "e": 7754, "s": 7615, "text": "Creates a Codebuild project and uses the above files with a buildspec.yml to start the process to build a container push the image to ECR." 
}, { "code": null, "e": 7858, "s": 7754, "text": "The Lambda function also contains useful environment variables that can be reconfigured for new builds." }, { "code": null, "e": 9161, "s": 7858, "text": "Once the Lambda function is set up, we are now ready to build out Automation pipeline to train and deploy the model to an endpoint. For this, we will use AWS Step Functions, which is an orchestration tool which lets users author state machines as JSON objects and execute them without provisioning or managing any servers. Step Functions also now provides a Data Science Python SDK for authoring machine learning pipelines using python in a familiar Jupyter notebook environment. We refer the reader to the Step Functions Github repository to get started [3]. Here we simply demonstrate the key components of the pipeline that are described in detail in our Github [2]. To author the code, we will use Amazon SageMaker, AWS fully managed machine learning platform. As with all AWS Services, we first need to provide the appropriate service the IAM permissions to call other AWS Services. For this we first need to allow Amazon SageMaker to call Step Functions APIs, and AWS Step Functions to call SageMaker for model training, and endpoint creation and deployment. Detailed instructions on how to set up proper IAM credentials are described in [3]. Once this is set up, we will first create a Lambda state to run the Lambda function that takes the code and deploys it as a container to host in ECR." 
}, { "code": null, "e": 9664, "s": 9161, "text": "lambda_state = LambdaStep( state_id=\"Calls CodeBuild to Build Container\", parameters={ \"FunctionName\": \"Docker_Lambda\", #replace with the name of the Lambda function you created \"Payload\": { \"input\": \"HelloWorld\" } })lambda_state.add_retry(Retry( error_equals=[\"States.TaskFailed\"], interval_seconds=15, max_attempts=2, backoff_rate=4.0))lambda_state.add_catch(Catch( error_equals=[\"States.TaskFailed\"], next_step=Fail(\"LambdaTaskFailed\")))" }, { "code": null, "e": 10155, "s": 9664, "text": "Retry and Catch steps here are added for error handling. You can modify these with your custom error handling steps, but this lets Step Functions know to end the workflow if the Lambda function fails to deploy the container. The next steps are to chain together the training job and the model creation from the trained artifacts. Fortunately, Step Functions Data Science SDK provides the logic and APIs to chain these steps together, with any custom branching logic that could be required." 
}, { "code": null, "e": 10773, "s": 10155, "text": "train_step = TrainingStep('Train Step',estimator=maskrcnn,data=os.path.dirname(data_location),job_name=execution_input['JobName'])model_step = ModelStep('Save model',model=train_step.get_expected_model(),model_name=execution_input['ModelName'])endpoint_config_step = EndpointConfigStep( \"Create Endpoint Config\", endpoint_config_name=execution_input['ModelName'], model_name=execution_input['ModelName'], initial_instance_count=1, instance_type='ml.m5.large')endpoint_step = EndpointStep(\"Create Endpoint\",endpoint_name=execution_input['EndpointName'],endpoint_config_name=execution_input['ModelName'])" }, { "code": null, "e": 11146, "s": 10773, "text": "Here we use our mask-rcnn estimator which is a general SageMaker Estimator object that lets us specify the type of instance we want to train our model on, identify any network or security settings if needed and model hyperparameters, as well as the output paths to the models. The input to the estimator is the container image that is created by our Lambda function above." }, { "code": null, "e": 11542, "s": 11146, "text": "maskrcnn = sagemaker.estimator.Estimator(image, role, 1, 'ml.p2.xlarge', #feel free to modify with your own. A cost estimate is provided in Readme. output_path=\"s3://{}/{}/output\".format(sess.default_bucket(), key), sagemaker_session=sess)maskrcnn.set_hyperparameters(num_epochs = 1, num_classes = 2)" }, { "code": null, "e": 12005, "s": 11542, "text": "By using the Chain utility, we can chain all the above steps together to occur sequentially. We can then choose to output the entire workflow as a JSON, that can be used in a much larger Cloud Formation template for example, which also includes information on the provisioning of instances, setting up of network security etc., or run on its own. By creating the workflow and rendering the graph, a state machine will be created in Amazon Step Functions console." 
}, { "code": null, "e": 12331, "s": 12005, "text": "workflow_definition = Chain([ lambda_state, train_step, model_step, endpoint_config_step, endpoint_step])# Next, we define the workflowworkflow = Workflow( name=\"MyWorkflow-BYOC-MaskRCNN-{}\".format(uuid.uuid1().hex), definition=workflow_definition, role=workflow_execution_role)workflow.render_graph()" }, { "code": null, "e": 12366, "s": 12331, "text": "This renders the following output:" }, { "code": null, "e": 13016, "s": 12366, "text": "This completes our Step Function workflow. We can now execute this workflow directly in Amazon SageMaker and check the progress using the workflow.execute and workflow.render_progress() APIs or from the Step Functions console directly. Executions will be logged in CloudWatch and can be used to send alerts and notifications in a downstream system to users. With AWS Step Functions Data Science SDK, data scientists and engineers can seamlessly deploy custom containers, train ML models and deploy them into production. For integrating other AWS big data tools such as EMR and AWS Glue into Step Functions workflows, we refer the reader to [4, 5]." } ]
CSS - Pseudo-class :hover
The :hover pseudo-class is used to add a special effect to an element when you mouse over it. While defining pseudo-classes in a <style>...</style> block, the following points should be taken care of −

a:hover MUST come after a:link and a:visited in the CSS definition in order to be effective.

a:active MUST come after a:hover in the CSS definition in order to be effective.

Pseudo-class names are not case-sensitive.

Pseudo-classes are different from CSS classes, but they can be combined.

Possible value − color − Any valid color value.

Applies to − Anchor / Link element.

Following is an example which demonstrates how to use the :hover class to change the color of links when we bring a mouse pointer over a link.

<html>
   <head>
      <style type = "text/css">
         a:hover {color: #FFCC00}
      </style>
   </head>

   <body>
      <a href = "/html/index.htm">Bring Mouse Here</a>
   </body>
</html>

This will produce the following link. Now bring your mouse over this link and you will see that it changes its color to yellow.
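The ordering rules above follow the common LVHA mnemonic (:link, :visited, :hover, :active). As an illustrative sketch — the selectors come from the rules above, but the specific colors (other than the #FFCC00 hover color used in the example) are placeholders of our own choosing — a full, correctly ordered style block looks like this:

```css
/* LVHA order: :hover after :link and :visited, :active after :hover */
a:link    { color: #0000FF; }   /* unvisited link */
a:visited { color: #800080; }   /* visited link */
a:hover   { color: #FFCC00; }   /* mouse over link */
a:active  { color: #FF0000; }   /* link while being clicked */
```

If a:hover were placed before a:link or a:visited, those later rules would override it and the hover effect would never show.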
[ { "code": null, "e": 2718, "s": 2626, "text": "The :hover pseudo-class is used to add special effect to an element when you mouse over it." }, { "code": null, "e": 2819, "s": 2718, "text": "While defining pseudo-classes in a <style>...</style> block, following points should be taken care −" }, { "code": null, "e": 2912, "s": 2819, "text": "a:hover MUST come after a:link and a:visited in the CSS definition in order to be effective." }, { "code": null, "e": 3005, "s": 2912, "text": "a:hover MUST come after a:link and a:visited in the CSS definition in order to be effective." }, { "code": null, "e": 3086, "s": 3005, "text": "a:active MUST come after a:hover in the CSS definition in order to be effective." }, { "code": null, "e": 3167, "s": 3086, "text": "a:active MUST come after a:hover in the CSS definition in order to be effective." }, { "code": null, "e": 3210, "s": 3167, "text": "Pseudo-class names are not case-sensitive." }, { "code": null, "e": 3253, "s": 3210, "text": "Pseudo-class names are not case-sensitive." }, { "code": null, "e": 3323, "s": 3253, "text": "Pseudo-class are different from CSS classes but they can be combined." }, { "code": null, "e": 3393, "s": 3323, "text": "Pseudo-class are different from CSS classes but they can be combined." }, { "code": null, "e": 3424, "s": 3393, "text": "color − Any valid color value." }, { "code": null, "e": 3455, "s": 3424, "text": "color − Any valid color value." }, { "code": null, "e": 3478, "s": 3455, "text": "Anchor / Link element." }, { "code": null, "e": 3618, "s": 3478, "text": "Following is the example which demonstrates how use :hover class to change the color of links when we bring a mouse pointer over that link." 
}, { "code": null, "e": 3813, "s": 3618, "text": "<html>\n <head>\n <style type = \"text/css\">\n a:hover {color: #FFCC00}\n </style>\n </head>\n\n <body>\n <a href = \"/html/index.htm\">Bring Mouse Here</a>\n </body>\n</html> " }, { "code": null, "e": 3941, "s": 3813, "text": "This will produce following link. Now you bring your mouse over this link and you will see that it changes its color to yellow." }, { "code": null, "e": 3976, "s": 3941, "text": "\n 33 Lectures \n 2.5 hours \n" }, { "code": null, "e": 3990, "s": 3976, "text": " Anadi Sharma" }, { "code": null, "e": 4025, "s": 3990, "text": "\n 26 Lectures \n 2.5 hours \n" }, { "code": null, "e": 4042, "s": 4025, "text": " Frahaan Hussain" }, { "code": null, "e": 4077, "s": 4042, "text": "\n 44 Lectures \n 4.5 hours \n" }, { "code": null, "e": 4108, "s": 4077, "text": " DigiFisk (Programming Is Fun)" }, { "code": null, "e": 4143, "s": 4108, "text": "\n 21 Lectures \n 2.5 hours \n" }, { "code": null, "e": 4174, "s": 4143, "text": " DigiFisk (Programming Is Fun)" }, { "code": null, "e": 4209, "s": 4174, "text": "\n 51 Lectures \n 7.5 hours \n" }, { "code": null, "e": 4240, "s": 4209, "text": " DigiFisk (Programming Is Fun)" }, { "code": null, "e": 4273, "s": 4240, "text": "\n 52 Lectures \n 4 hours \n" }, { "code": null, "e": 4304, "s": 4273, "text": " DigiFisk (Programming Is Fun)" }, { "code": null, "e": 4311, "s": 4304, "text": " Print" }, { "code": null, "e": 4322, "s": 4311, "text": " Add Notes" } ]
Generics in Java
It would be nice if we could write a single sort method that could sort the elements in an Integer array, a String array, or an array of any type that supports ordering.

Java Generic methods and generic classes enable programmers to specify, with a single method declaration, a set of related methods, or with a single class declaration, a set of related types, respectively.

Generics also provide compile-time type safety that allows programmers to catch invalid types at compile time.

Using the Java Generic concept, we might write a generic method for sorting an array of objects, then invoke the generic method with Integer arrays, Double arrays, String arrays and so on, to sort the array elements.

You can write a single generic method declaration that can be called with arguments of different types. Based on the types of the arguments passed to the generic method, the compiler handles each method call appropriately. Following are the rules to define Generic Methods −

All generic method declarations have a type parameter section delimited by angle brackets (< and >) that precedes the method's return type (< E > in the next example).

Each type parameter section contains one or more type parameters separated by commas. A type parameter, also known as a type variable, is an identifier that specifies a generic type name.

The type parameters can be used to declare the return type and act as placeholders for the types of the arguments passed to the generic method, which are known as actual type arguments.

A generic method's body is declared like that of any other method. Note that type parameters can represent only reference types, not primitive types (like int, double and char).

The following example illustrates how we can print arrays of different types using a single generic method −

public class GenericMethodTest {
   // generic method printArray
   public static < E > void printArray( E[] inputArray ) {
      // Display array elements
      for(E element : inputArray) {
         System.out.printf("%s ", element);
      }
      System.out.println();
   }

   public static void main(String args[]) {
      // Create arrays of Integer, Double and Character
      Integer[] intArray = { 1, 2, 3, 4, 5 };
      Double[] doubleArray = { 1.1, 2.2, 3.3, 4.4 };
      Character[] charArray = { 'H', 'E', 'L', 'L', 'O' };

      System.out.println("Array integerArray contains:");
      printArray(intArray);   // pass an Integer array

      System.out.println("\nArray doubleArray contains:");
      printArray(doubleArray);   // pass a Double array

      System.out.println("\nArray characterArray contains:");
      printArray(charArray);   // pass a Character array
   }
}

This will produce the following result −

Array integerArray contains:
1 2 3 4 5

Array doubleArray contains:
1.1 2.2 3.3 4.4

Array characterArray contains:
H E L L O

There may be times when you'll want to restrict the kinds of types that are allowed to be passed to a type parameter. For example, a method that operates on numbers might only want to accept instances of Number or its subclasses. This is what bounded type parameters are for.

To declare a bounded type parameter, list the type parameter's name, followed by the extends keyword, followed by its upper bound.
Following example illustrates how extends is used in a general sense to mean either "extends" (as in classes) or "implements" (as in interfaces). This example is a Generic method to return the largest of three Comparable objects −

public class MaximumTest {
   // determines the largest of three Comparable objects
   public static <T extends Comparable<T>> T maximum(T x, T y, T z) {
      T max = x;   // assume x is initially the largest

      if(y.compareTo(max) > 0) {
         max = y;   // y is the largest so far
      }

      if(z.compareTo(max) > 0) {
         max = z;   // z is the largest now
      }
      return max;   // returns the largest object
   }

   public static void main(String args[]) {
      System.out.printf("Max of %d, %d and %d is %d\n\n",
         3, 4, 5, maximum( 3, 4, 5 ));

      System.out.printf("Max of %.1f,%.1f and %.1f is %.1f\n\n",
         6.6, 8.8, 7.7, maximum( 6.6, 8.8, 7.7 ));

      System.out.printf("Max of %s, %s and %s is %s\n", "pear",
         "apple", "orange", maximum("pear", "apple", "orange"));
   }
}

This will produce the following result −

Max of 3, 4 and 5 is 5

Max of 6.6,8.8 and 7.7 is 8.8

Max of pear, apple and orange is pear

A generic class declaration looks like a non-generic class declaration, except that the class name is followed by a type parameter section.

As with generic methods, the type parameter section of a generic class can have one or more type parameters separated by commas. These classes are known as parameterized classes or parameterized types because they accept one or more parameters.
Following example illustrates how we can define a generic class −

public class Box<T> {
   private T t;

   public void add(T t) {
      this.t = t;
   }

   public T get() {
      return t;
   }

   public static void main(String[] args) {
      Box<Integer> integerBox = new Box<Integer>();
      Box<String> stringBox = new Box<String>();

      integerBox.add(new Integer(10));
      stringBox.add(new String("Hello World"));

      System.out.printf("Integer Value :%d\n\n", integerBox.get());
      System.out.printf("String Value :%s\n", stringBox.get());
   }
}

This will produce the following result −

Integer Value :10
String Value :Hello World
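Closely related to bounded type parameters are bounded wildcards, which let a single method accept collections of any subtype of a bound. The following sketch is our own illustration — the class name WildcardDemo and the method sum are not part of this tutorial's code:

```java
import java.util.List;

public class WildcardDemo {
   // List<? extends Number> accepts List<Integer>, List<Double>, etc.
   // Elements can be read as Number, but nothing (except null) may be added.
   public static double sum(List<? extends Number> nums) {
      double total = 0.0;
      for (Number n : nums) {
         total += n.doubleValue();
      }
      return total;
   }

   public static void main(String[] args) {
      System.out.println(sum(List.of(1, 2, 3)));    // an Integer list
      System.out.println(sum(List.of(1.5, 2.5)));   // a Double list
   }
}
```

Running it prints 6.0 and then 4.0 — the same method body handles both element types, which a plain List<Number> parameter would not allow.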
[ { "code": null, "e": 1232, "s": 1062, "text": "It would be nice if we could write a single sort method that could sort the elements in an Integer array, a String array, or an array of any type that supports ordering." }, { "code": null, "e": 1438, "s": 1232, "text": "Java Generic methods and generic classes enable programmers to specify, with a single method declaration, a set of related methods, or with a single class declaration, a set of related types, respectively." }, { "code": null, "e": 1549, "s": 1438, "text": "Generics also provide compile-time type safety that allows programmers to catch invalid types at compile time." }, { "code": null, "e": 1762, "s": 1549, "text": "Using Java Generic concept, we might write a generic method for sorting an array of objects, then invoke the generic method with Integer arrays, Double arrays, String arrays and so on, to sort the array elements." }, { "code": null, "e": 2037, "s": 1762, "text": "You can write a single generic method declaration that can be called with arguments of different types. Based on the types of the arguments passed to the generic method, the compiler handles each method call appropriately. Following are the rules to define Generic Methods −" }, { "code": null, "e": 2206, "s": 2037, "text": "All generic method declarations have a type parameter section delimited by angle brackets (< and >) that precedes the method's return type ( < E > in the next example)." }, { "code": null, "e": 2375, "s": 2206, "text": "All generic method declarations have a type parameter section delimited by angle brackets (< and >) that precedes the method's return type ( < E > in the next example)." }, { "code": null, "e": 2563, "s": 2375, "text": "Each type parameter section contains one or more type parameters separated by commas. A type parameter, also known as a type variable, is an identifier that specifies a generic type name." 
}, { "code": null, "e": 2751, "s": 2563, "text": "Each type parameter section contains one or more type parameters separated by commas. A type parameter, also known as a type variable, is an identifier that specifies a generic type name." }, { "code": null, "e": 2937, "s": 2751, "text": "The type parameters can be used to declare the return type and act as placeholders for the types of the arguments passed to the generic method, which are known as actual type arguments." }, { "code": null, "e": 3123, "s": 2937, "text": "The type parameters can be used to declare the return type and act as placeholders for the types of the arguments passed to the generic method, which are known as actual type arguments." }, { "code": null, "e": 3301, "s": 3123, "text": "A generic method's body is declared like that of any other method. Note that type parameters can represent only reference types, not primitive types (like int, double and char)." }, { "code": null, "e": 3479, "s": 3301, "text": "A generic method's body is declared like that of any other method. Note that type parameters can represent only reference types, not primitive types (like int, double and char)." 
}, { "code": null, "e": 3585, "s": 3479, "text": "Following example illustrates how we can print an array of different type using a single Generic method −" }, { "code": null, "e": 4365, "s": 3585, "text": "public class GenericMethodTest {\n// generic method printArray\npublic static < E > void printArray( E[] inputArray ) {\n// Display array elements\nfor(E element : inputArray) {\nSystem.out.printf(\"%s \", element);\n}\nSystem.out.println();\n}\n\npublic static void main(String args[]) {\n\n// Create arrays of Integer, Double and Character\nInteger[] intArray = { 1, 2, 3, 4, 5 };\nDouble[] doubleArray = { 1.1, 2.2, 3.3, 4.4 };\nCharacter[] charArray = { 'H', 'E', 'L', 'L', 'O' };\n\nSystem.out.println(\"Array integerArray contains:\");\nprintArray(intArray); // pass an Integer array\n\nSystem.out.println(\"\\nArray doubleArray contains:\");\nprintArray(doubleArray); // pass a Double array\n\nSystem.out.println(\"\\nArray characterArray contains:\");\nprintArray(charArray); // pass a Character array\n}\n}" }, { "code": null, "e": 4406, "s": 4365, "text": "This will produce the following result −" }, { "code": null, "e": 4532, "s": 4406, "text": "Array integerArray contains:\n1 2 3 4 5\n\nArray doubleArray contains:\n1.1 2.2 3.3 4.4\n\nArray characterArray contains:\nH E L L O" }, { "code": null, "e": 4808, "s": 4532, "text": "There may be times when you'll want to restrict the kinds of types that are allowed to be passed to a type parameter. For example, a method that operates on numbers might only want to accept instances of Number or its subclasses. This is what bounded type parameters are for." }, { "code": null, "e": 4939, "s": 4808, "text": "To declare a bounded type parameter, list the type parameter's name, followed by the extends keyword, followed by its upper bound." 
}, { "code": null, "e": 5170, "s": 4939, "text": "Following example illustrates how extends is used in a general sense to mean either \"extends\" (as in classes) or \"implements\" (as in interfaces). This example is a Generic method to return the largest of three Comparable objects −" }, { "code": null, "e": 5990, "s": 5170, "text": "public class MaximumTest {\n // determines the largest of three Comparable objects\n\n public static <T extends Comparable<T>> T maximum(T x, T y, T z) {\n T max = x; // assume x is initially the largest\n\n if(y.compareTo(max) > 0) {\n max = y; // y is the largest so far\n }\n\n if(z.compareTo(max) > 0) {\n max = z; // z is the largest now\n }\n return max; // returns the largest object\n }\n\n public static void main(String args[]) {\n\n System.out.printf(\"Max of %d, %d and %d is %d\\n\\n\",\n 3, 4, 5, maximum( 3, 4, 5 ));\n\n System.out.printf(\"Max of %.1f,%.1f and %.1f is %.1f\\n\\n\",\n 6.6, 8.8, 7.7, maximum( 6.6, 8.8, 7.7 ));\n\n System.out.printf(\"Max of %s, %s and %s is %s\\n\",\"pear\",\n \"apple\", \"orange\", maximum(\"pear\", \"apple\", \"orange\"));\n }\n}" }, { "code": null, "e": 6031, "s": 5990, "text": "This will produce the following result −" }, { "code": null, "e": 6124, "s": 6031, "text": "Max of 3, 4 and 5 is 5\n\nMax of 6.6,8.8 and 7.7 is 8.8\n\nMax of pear, apple and orange is pear" }, { "code": null, "e": 6264, "s": 6124, "text": "A generic class declaration looks like a non-generic class declaration, except that the class name is followed by a type parameter section." }, { "code": null, "e": 6509, "s": 6264, "text": "As with generic methods, the type parameter section of a generic class can have one or more type parameters separated by commas. These classes are known as parameterized classes or parameterized types because they accept one or more parameters." 
}, { "code": null, "e": 6575, "s": 6509, "text": "Following example illustrates how we can define a generic class −" }, { "code": null, "e": 7080, "s": 6575, "text": "public class Box<T> {\n private T t;\n\n public void add(T t) {\n this.t = t;\n }\n\n public T get() {\n return t;\n }\n\n public static void main(String[] args) {\n\n Box<Integer> integerBox = new Box<Integer>();\n Box<String> stringBox = new Box<String>();\n\n integerBox.add(new Integer(10));\n stringBox.add(new String(\"Hello World\"));\n\n System.out.printf(\"Integer Value :%d\\n\\n\", integerBox.get());\n System.out.printf(\"String Value :%s\\n\", stringBox.get());\n }\n}" }, { "code": null, "e": 7121, "s": 7080, "text": "This will produce the following result −" }, { "code": null, "e": 7165, "s": 7121, "text": "Integer Value :10\nString Value :Hello World" } ]
Memcached - Connection
To connect to a Memcached server, you need to use the telnet command with the HOST name and PORT number.

The basic syntax of the Memcached telnet command is as shown below −

$telnet HOST PORT

Here, HOST and PORT are the machine IP and port number respectively, on which the Memcached server is executing.

The following example shows how to connect to a Memcached server and execute a simple set and get command. Assume that the Memcached server is running on host 127.0.0.1 and port 11211.

$telnet 127.0.0.1 11211
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
// now store some data and get it from memcached server
set tutorialspoint 0 900 9
memcached
STORED
get tutorialspoint
VALUE tutorialspoint 0 9
memcached
END

To connect to the Memcached server from your Java program, you need to add the Memcached jar into your classpath as shown in the previous chapter. Assume that the Memcached server is running on host 127.0.0.1 and port 11211.

import java.net.InetSocketAddress;
import net.spy.memcached.MemcachedClient;

public class MemcachedJava {
   public static void main(String[] args) throws Exception {
      // Connecting to Memcached server on localhost
      MemcachedClient mcc = new MemcachedClient(
         new InetSocketAddress("127.0.0.1", 11211));
      System.out.println("Connection to server successfully");

      // now set data into memcached server
      System.out.println("set status:" + mcc.set("tutorialspoint", 900, "memcached").get());

      // Get value from cache
      System.out.println("Get from Cache:" + mcc.get("tutorialspoint"));
   }
}

On compiling and executing the program, you get to see the following output −

Connection to server successfully
set status:true
Get from Cache:memcached

The terminal may show a few informational messages too; those can be ignored.
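The telnet session above is really just Memcached's plain text protocol typed by hand. As a rough sketch of what those set/get lines look like on the wire — the helper names build_set and build_get are our own illustration, not part of any client library — the same exchange can be framed in Python:

```python
def build_set(key, value, exptime=900, flags=0):
    """Frame a text-protocol 'set' command, e.g. the telnet line
    'set tutorialspoint 0 900 9' followed by the 9-byte payload."""
    data = value.encode("utf-8")
    header = f"set {key} {flags} {exptime} {len(data)}\r\n".encode("utf-8")
    return header + data + b"\r\n"

def build_get(key):
    """Frame a text-protocol 'get' command."""
    return f"get {key}\r\n".encode("utf-8")

# Against a live server, these bytes would be sent over a plain TCP
# connection to 127.0.0.1:11211, and the server would answer with
# STORED / VALUE ... END exactly as in the telnet session above.
print(build_set("tutorialspoint", "memcached"))
# b'set tutorialspoint 0 900 9\r\nmemcached\r\n'
```

The byte count in the header (9 here) is the length of the payload, which is why the telnet example types `set tutorialspoint 0 900 9` before the 9-character value `memcached`.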
Python - Plotting scatter charts in excel sheet using XlsxWriter module
A scatter plot is a type of plot or mathematical diagram using Cartesian coordinates to display values for typically two variables for a set of data. If the points are coded, one additional variable can be displayed.

# import xlsxwriter module
import xlsxwriter

# Workbook() takes one, non-optional, argument which is the filename
# that we want to create.
workbook = xlsxwriter.Workbook('chart_scatter.xlsx')

# The workbook object is then used to add a new worksheet via the
# add_worksheet() method.
worksheet = workbook.add_worksheet()

# Create a new Format object to format cells in worksheets using
# the add_format() method.
# here we create a bold format object
bold = workbook.add_format({'bold': 1})

# create a data list
headings = ['Number', 'Batch 1', 'Batch 2']

data = [
    [2, 3, 4, 5, 6, 7],
    [80, 80, 100, 60, 50, 100],
    [60, 50, 60, 20, 10, 20],
]

# Write a row of data starting from 'A1' with bold format
worksheet.write_row('A1', headings, bold)

# Write columns of data starting from 'A2', 'B2', 'C2' respectively
worksheet.write_column('A2', data[0])
worksheet.write_column('B2', data[1])
worksheet.write_column('C2', data[2])

# Create a chart object that can be added to a worksheet using
# the add_chart() method.
# here we create a scatter chart object
chart1 = workbook.add_chart({'type': 'scatter'})

# Add a data series to a chart using the add_series method.
# Configure the first series.
# =Sheet1!$A$1 is equivalent to ['Sheet1', 0, 0].
# note: no spaces between '=' and 'Sheet1', or between 'Sheet1' and '!';
# if a space is inserted it throws a warning.
chart1.add_series({
    'name': '=Sheet1!$B$1',
    'categories': '=Sheet1!$A$2:$A$7',
    'values': '=Sheet1!$B$2:$B$7',
})

# Configure a second series.
# Note use of alternative syntax to define ranges:
# [sheetname, first_row, first_col, last_row, last_col].
chart1.add_series({
    'name': ['Sheet1', 0, 2],
    'categories': ['Sheet1', 1, 0, 6, 0],
    'values': ['Sheet1', 1, 2, 6, 2],
})

# Add a chart title
chart1.set_title({'name': 'Results of data analysis'})

# Add x-axis label
chart1.set_x_axis({'name': 'Test number'})

# Add y-axis label
chart1.set_y_axis({'name': 'Data length (mm)'})

# Set an Excel chart style.
chart1.set_style(11)

# add the chart to the worksheet; the top-left corner of the chart
# is anchored to cell E2
worksheet.insert_chart('E2', chart1)

# Finally, close the Excel file via the close() method.
workbook.close()
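The two add_series calls above use equivalent range syntaxes: '=Sheet1!$A$2:$A$7' and ['Sheet1', 1, 0, 6, 0] describe the same cells. The standalone sketch below (hand-rolled for illustration, not XlsxWriter's own code) shows how a zero-indexed (row, col) pair maps to A1-style notation:

```python
def rowcol_to_a1(row, col):
    """Convert zero-indexed (row, col) to an A1-style cell reference."""
    letters = ""
    col += 1  # work in 1-indexed column space
    while col:
        col, rem = divmod(col - 1, 26)
        letters = chr(ord("A") + rem) + letters
    return f"{letters}{row + 1}"

# ['Sheet1', 1, 0, 6, 0] covers rows 1..6 of column 0, i.e. A2:A7
start = rowcol_to_a1(1, 0)  # "A2"
end = rowcol_to_a1(6, 0)    # "A7"
```

This is why the comment in the code calls the list form an "alternative syntax": both spell out the same rectangular range, one as text and one as numbers.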
MySQL Select IN range?
You cannot do a range select with IN. For the same result, use BETWEEN. Let us see an example −

IN(start, end): only the listed values themselves match; the intermediate values between start and end won't get displayed. For an inclusive range, you can use BETWEEN.

The BETWEEN clause is inclusive. For example, suppose there are the numbers 1, 2, 3, 4, 5, 6. If you want to display the numbers from 2 to 6 inclusively, then using BETWEEN the numbers 2 and 6 will also get displayed.

Let us create a table −

mysql> create table SelectInWithBetweenDemo
   -> (
   -> PortalId int
   -> );
Query OK, 0 rows affected (0.77 sec)

Insert some records with the help of batch insert. The query is as follows −

mysql> insert into SelectInWithBetweenDemo values(1),(2),(3),(4),(5),(6),(7),(8),(9),(10),(11),(12),(13),(14),(15);
Query OK, 15 rows affected (0.19 sec)
Records: 15  Duplicates: 0  Warnings: 0

Display all records with the help of the select statement. The query is as follows −

mysql> select * from SelectInWithBetweenDemo;

Here is the output −

+----------+
| PortalId |
+----------+
|        1 |
|        2 |
|        3 |
|        4 |
|        5 |
|        6 |
|        7 |
|        8 |
|        9 |
|       10 |
|       11 |
|       12 |
|       13 |
|       14 |
|       15 |
+----------+
15 rows in set (0.00 sec)

Let us now check the select IN range. The query is as follows −

mysql> select PortalId from SelectInWithBetweenDemo where PortalId IN(4,10);

The following is the output −

+----------+
| PortalId |
+----------+
|        4 |
|       10 |
+----------+
2 rows in set (0.00 sec)

Look at the above output: we are getting only 4 and 10, whereas we want the values 4, 5, 6, 7, 8, 9, 10.

Now we will use the BETWEEN clause. It will give the result we want, with inclusion. The query is as follows −

mysql> select PortalId from SelectInWithBetweenDemo where PortalId Between 4 and 10;

The following is the output −

+----------+
| PortalId |
+----------+
|        4 |
|        5 |
|        6 |
|        7 |
|        8 |
|        9 |
|       10 |
+----------+
7 rows in set (0.09 sec)

Suppose you want the exclusive property; then you can use > and <. The query is as follows −

mysql> select PortalId from SelectInWithBetweenDemo where PortalId > 4 and PortalId < 10;

Here is the output −

+----------+
| PortalId |
+----------+
|        5 |
|        6 |
|        7 |
|        8 |
|        9 |
+----------+
5 rows in set (0.00 sec)
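The same inclusive/exclusive behaviour can be checked without a MySQL server. Here is a quick sketch using Python's built-in sqlite3; the semantics of IN, BETWEEN, > and < are the same for this example:

```python
import sqlite3

# In-memory database with the same table and 1..15 values as above
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE SelectInWithBetweenDemo (PortalId INTEGER)")
con.executemany(
    "INSERT INTO SelectInWithBetweenDemo VALUES (?)",
    [(i,) for i in range(1, 16)],
)

def portal_ids(where):
    """Return PortalId values matching the given WHERE clause, sorted."""
    rows = con.execute(
        f"SELECT PortalId FROM SelectInWithBetweenDemo WHERE {where} ORDER BY PortalId"
    )
    return [r[0] for r in rows]

in_result = portal_ids("PortalId IN (4, 10)")                    # only listed values
between_result = portal_ids("PortalId BETWEEN 4 AND 10")         # inclusive range
exclusive_result = portal_ids("PortalId > 4 AND PortalId < 10")  # exclusive range
```

IN returns just [4, 10], BETWEEN returns 4 through 10 inclusive, and the > / < pair drops both endpoints.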
C library function - fgets()
The C library function char *fgets(char *str, int n, FILE *stream) reads a line from the specified stream and stores it into the string pointed to by str. It stops when either (n-1) characters are read, the newline character is read, or the end-of-file is reached, whichever comes first.

Following is the declaration for the fgets() function.

char *fgets(char *str, int n, FILE *stream)

str − This is the pointer to an array of chars where the string read is stored.

n − This is the maximum number of characters to be read (including the final null-character). Usually, the length of the array passed as str is used.

stream − This is the pointer to a FILE object that identifies the stream where characters are read from.

On success, the function returns the same str parameter. If the end-of-file is encountered and no characters have been read, the contents of str remain unchanged and a null pointer is returned.

If an error occurs, a null pointer is returned.

The following example shows the usage of the fgets() function.

#include <stdio.h>

int main () {
   FILE *fp;
   char str[60];

   /* opening file for reading */
   fp = fopen("file.txt" , "r");
   if(fp == NULL) {
      perror("Error opening file");
      return(-1);
   }
   if( fgets (str, 60, fp)!=NULL ) {
      /* writing content to stdout */
      puts(str);
   }
   fclose(fp);

   return(0);
}

Let us assume we have a text file file.txt, which has the following content. This file will be used as an input for our example program −

We are in 2012

Now, let us compile and run the above program; it will produce the following result −

We are in 2012
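Python has no direct fgets(), but a file object's readline(size) is a close analogue: it returns at most size characters, stopping early at a newline or end-of-file. The sketch below mimics the fgets(str, 60, fp) call from the C program using an in-memory stream (an assumption for portability; a real file works the same way):

```python
import io

# Stand-in for file.txt from the example above
stream = io.StringIO("We are in 2012\nsecond line\n")

# Like fgets(str, 60, fp): read at most 59 characters (fgets reserves one
# slot for the terminating '\0'), stopping at the first newline.
line = stream.readline(59)

# With a small limit the read stops before the newline, just as fgets
# would with a small buffer:
short = io.StringIO("abcdefghij\n").readline(3)
```

Unlike fgets, readline() keeps the trailing newline in the returned string rather than writing a NUL terminator.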
SQL HackerRank Solutions. A complete solution for SQL problems on... | by Rahul Pathak | Towards Data Science
Structured Query Language is one of the most important languages used in the industry. It is also one of the skills most sought after by employers: as the volume of data keeps increasing, knowing how to retrieve, update, and manipulate data in databases is essential.

In this post, we will be covering all the solutions to SQL on the HackerRank platform. HackerRank is a platform for competitive coding. It is very important that you all first give it a try and brainstorm yourselves before having a look at the solutions. Let us code and find answers to our given problems.

I. Revising the Select Query 1

Query all columns for all American cities in CITY with populations larger than 100000. The CountryCode for America is USA.

Input Format

The CITY table is described as follows:

SELECT * FROM CITY WHERE COUNTRYCODE = 'USA' AND POPULATION > 100000;

II. Revising the Select Query 2

Query the names of all American cities in CITY with populations larger than 120000. The CountryCode for America is USA.

Input Format

The CITY table is described as follows:

SELECT NAME FROM CITY WHERE COUNTRYCODE = 'USA' AND POPULATION > 120000;

III. Select All

Query all columns (attributes) for every row in the CITY table.

Input Format

SELECT * FROM CITY;

IV. Select By ID

Query all columns for a city in CITY with the ID 1661.

Input Format

SELECT * FROM CITY WHERE ID = 1661;

V. Japanese Cities' Attributes

Query all attributes of every Japanese city in the CITY table. The COUNTRYCODE for Japan is JPN.

Input Format

SELECT * FROM CITY WHERE COUNTRYCODE = 'JPN';

VI. Japanese Cities' Names

Query the names of all the Japanese cities in the CITY table. The COUNTRYCODE for Japan is JPN.

Input Format

SELECT NAME FROM CITY WHERE COUNTRYCODE = 'JPN';

VII. Weather Observation Station 1

Query a list of CITY and STATE from the STATION table.
Input Format

The STATION table is described as follows:

where LAT_N is the northern latitude and LONG_W is the western longitude.

SELECT CITY, STATE FROM STATION;

VIII. Weather Observation Station 3

Query a list of CITY names from STATION with even ID numbers only. You may print the results in any order but must exclude duplicates from your answer.

Input Format

The STATION table is described as follows:

where LAT_N is the northern latitude and LONG_W is the western longitude.

SELECT DISTINCT CITY FROM STATION WHERE MOD(ID, 2) = 0;

IX. Weather Observation Station 4

Let N be the number of CITY entries in STATION, and let N' be the number of distinct CITY names in STATION; query the value of N-N' from STATION. In other words, find the difference between the total number of CITY entries in the table and the number of distinct CITY entries in the table.

Input Format

The STATION table is described as follows:

where LAT_N is the northern latitude and LONG_W is the western longitude.

SELECT COUNT(CITY) - COUNT(DISTINCT CITY) FROM STATION;

X. Weather Observation Station 5

Query the two cities in STATION with the shortest and longest CITY names, as well as their respective lengths (i.e.: number of characters in the name). If there is more than one smallest or largest city, choose the one that comes first when ordered alphabetically.

Input Format

The STATION table is described as follows:

where LAT_N is the northern latitude and LONG_W is the western longitude.

SELECT * FROM (SELECT DISTINCT city, LENGTH(city) FROM station ORDER BY LENGTH(city) ASC, city ASC) WHERE ROWNUM = 1
UNION
SELECT * FROM (SELECT DISTINCT city, LENGTH(city) FROM station ORDER BY LENGTH(city) DESC, city ASC) WHERE ROWNUM = 1;

XI. Weather Observation Station 6

Query the list of CITY names starting with vowels (i.e., a, e, i, o, or u) from STATION. Your result cannot contain duplicates.
Input Format

The STATION table is described as follows:

where LAT_N is the northern latitude and LONG_W is the western longitude.

SELECT DISTINCT city FROM station WHERE city LIKE 'A%' OR city LIKE 'E%' OR city LIKE 'I%' OR city LIKE 'O%' OR city LIKE 'U%';

XII. Weather Observation Station 7

Query the list of CITY names ending with vowels (a, e, i, o, u) from STATION. Your result cannot contain duplicates.

Input Format

The STATION table is described as follows:

where LAT_N is the northern latitude and LONG_W is the western longitude.

SELECT DISTINCT city FROM station WHERE city LIKE '%a' OR city LIKE '%e' OR city LIKE '%i' OR city LIKE '%o' OR city LIKE '%u';

XIII. Weather Observation Station 8

Query the list of CITY names from STATION which have vowels (i.e., a, e, i, o, and u) as both their first and last characters. Your result cannot contain duplicates.

Input Format

The STATION table is described as follows:

where LAT_N is the northern latitude and LONG_W is the western longitude.

SELECT DISTINCT city FROM (SELECT DISTINCT city FROM station WHERE city LIKE 'A%' OR city LIKE 'E%' OR city LIKE 'I%' OR city LIKE 'O%' OR city LIKE 'U%') WHERE city LIKE '%a' OR city LIKE '%e' OR city LIKE '%i' OR city LIKE '%o' OR city LIKE '%u';

XIV. Weather Observation Station 9

Query the list of CITY names from STATION that do not start with vowels. Your result cannot contain duplicates.

Input Format

The STATION table is described as follows:

where LAT_N is the northern latitude and LONG_W is the western longitude.

SELECT DISTINCT city FROM station WHERE NOT (city LIKE 'A%' OR city LIKE 'E%' OR city LIKE 'I%' OR city LIKE 'O%' OR city LIKE 'U%');

XV. Weather Observation Station 10

Query the list of CITY names from STATION that do not end with vowels. Your result cannot contain duplicates.

Input Format

The STATION table is described as follows:

where LAT_N is the northern latitude and LONG_W is the western longitude.
SELECT DISTINCT city FROM station WHERE NOT (city LIKE '%a' OR city LIKE '%e' OR city LIKE '%i' OR city LIKE '%o' OR city LIKE '%u');

XVI. Weather Observation Station 11

Query the list of CITY names from STATION that either do not start with vowels or do not end with vowels. Your result cannot contain duplicates.

Input Format

The STATION table is described as follows:

where LAT_N is the northern latitude and LONG_W is the western longitude.

SELECT DISTINCT city FROM station WHERE (NOT (city LIKE 'A%' OR city LIKE 'E%' OR city LIKE 'I%' OR city LIKE 'O%' OR city LIKE 'U%') OR NOT (city LIKE '%a' OR city LIKE '%e' OR city LIKE '%i' OR city LIKE '%o' OR city LIKE '%u'));

XVII. Weather Observation Station 12

Query the list of CITY names from STATION that do not start with vowels and do not end with vowels. Your result cannot contain duplicates.

Input Format

The STATION table is described as follows:

where LAT_N is the northern latitude and LONG_W is the western longitude.

SELECT DISTINCT city FROM station WHERE NOT ((city LIKE 'A%' OR city LIKE 'E%' OR city LIKE 'I%' OR city LIKE 'O%' OR city LIKE 'U%') OR (city LIKE '%a' OR city LIKE '%e' OR city LIKE '%i' OR city LIKE '%o' OR city LIKE '%u'));

XVIII. Higher Than 75 Marks

Query the Name of any student in STUDENTS who scored higher than 75 marks. Order your output by the last three characters of each name. If two or more students both have names ending in the same last three characters (i.e.: Bobby, Robby, etc.), secondary sort them by ascending ID.

Input Format

The STUDENTS table is described as follows:

The Name column only contains uppercase (A-Z) and lowercase (a-z) letters.

SELECT name FROM students WHERE marks > 75 ORDER BY SUBSTR(name, LENGTH(name)-2, 3), id;

XIX. Employee Names

Write a query that prints a list of employee names (i.e.: the name attribute) from the Employee table in alphabetical order.
Input Format

The Employee table containing employee data for a company is described as follows:

where employee_id is an employee's ID number, the name is their name, months is the total number of months they've been working for the company, and salary is their monthly salary.

SELECT name FROM employee ORDER BY name;

XX. Employee Attributes

Write a query that prints a list of employee names (i.e.: the name attribute) for employees in Employee having a salary greater than 2000 per month who have been employees for less than 10 months. Sort your result by ascending employee_id.

Input Format

The Employee table containing employee data for a company is described as follows:

where employee_id is an employee's ID number, the name is their name, months is the total number of months they've been working for the company, and salary is their monthly salary.

SELECT name FROM employee WHERE salary > 2000 AND months < 10 ORDER BY employee_id;

XXI. Types of Triangles

Write a query identifying the type of each record in the TRIANGLES table using its three side lengths. Output one of the following statements for each record in the table:

Equilateral: It's a triangle with 3 sides of equal length.
Isosceles: It's a triangle with 2 sides of equal length.
Scalene: It's a triangle with 3 sides of differing lengths.
Not A Triangle: The given values of A, B, and C don't form a triangle.

Input Format

The TRIANGLES table is described as follows:

Each row in the table denotes the lengths of each of a triangle's three sides.

select if(A+B<=C or B+C<=A or A+C<=B, 'Not A Triangle', if(A=B and B=C, 'Equilateral', if(A=B or B=C or A=C, 'Isosceles', 'Scalene'))) from TRIANGLES as T;

XXII. The PADS

Generate the following two result sets:

Query an alphabetically ordered list of all names in OCCUPATIONS, immediately followed by the first letter of each profession as a parenthetical (i.e.: enclosed in parentheses). For example: AnActorName(A), ADoctorName(D), AProfessorName(P), and ASingerName(S).
Query the number of occurrences of each occupation in OCCUPATIONS. Sort the occurrences in ascending order, and output them in the following format:

There are a total of [occupation_count] [occupation]s.

where [occupation_count] is the number of occurrences of occupation in OCCUPATIONS and [occupation] is the lowercase occupation name. If more than one Occupation has the same [occupation_count], they should be ordered alphabetically.

Note: There will be at least two entries in the table for each type of occupation.

Input Format

The OCCUPATIONS table is described as follows:

The occupation will only contain one of the following values: Doctor, Professor, Singer, or Actor.

SELECT concat(NAME, concat("(", concat(substr(OCCUPATION,1,1), ")"))) FROM OCCUPATIONS ORDER BY NAME ASC;
SELECT "There are a total of ", count(OCCUPATION), concat(lower(occupation), "s.") FROM OCCUPATIONS GROUP BY OCCUPATION ORDER BY count(OCCUPATION), OCCUPATION ASC;

XXIII. Occupations

Pivot the Occupation column in OCCUPATIONS so that each Name is sorted alphabetically and displayed underneath its corresponding Occupation. The output column headers should be Doctor, Professor, Singer, and Actor, respectively.

Note: Print NULL when there are no more names corresponding to an occupation.

Input Format

The OCCUPATIONS table is described as follows:

The occupation will only contain one of the following values: Doctor, Professor, Singer, or Actor.
set @r1=0, @r2=0, @r3=0, @r4=0;
select min(Doctor), min(Professor), min(Singer), min(Actor)
from (
   select case
            when Occupation='Doctor' then (@r1:=@r1+1)
            when Occupation='Professor' then (@r2:=@r2+1)
            when Occupation='Singer' then (@r3:=@r3+1)
            when Occupation='Actor' then (@r4:=@r4+1)
          end as RowNumber,
          case when Occupation='Doctor' then Name end as Doctor,
          case when Occupation='Professor' then Name end as Professor,
          case when Occupation='Singer' then Name end as Singer,
          case when Occupation='Actor' then Name end as Actor
   from OCCUPATIONS
   order by Name
) Temp
group by RowNumber;

XXIV. Binary Tree Nodes

You are given a table, BST, containing two columns: N and P, where N represents the value of a node in Binary Tree, and P is the parent of N. Write a query to find the node type of Binary Tree ordered by the value of the node. Output one of the following for each node:

Root: If node is root node.
Leaf: If node is leaf node.
Inner: If node is neither root nor leaf node.

SELECT N, IF(P IS NULL, 'Root', IF((SELECT COUNT(*) FROM BST WHERE P=B.N)>0, 'Inner', 'Leaf')) FROM BST AS B ORDER BY N;

XXV. New Companies

Amber's conglomerate corporation just acquired some new companies. Each of the companies follows this hierarchy:

Given the table schemas below, write a query to print the company_code, founder name, total number of lead managers, total number of senior managers, total number of managers, and total number of employees. Order your output by ascending company_code.

Note: The tables may contain duplicate records. The company_code is a string, so the sorting should not be numeric. For example, if the company_codes are C_1, C_2, and C_10, then the ascending company_codes will be C_1, C_10, and C_2.

Input Format

The following tables contain company data:

Company: The company_code is the code of the company and founder is the founder of the company.

Lead_Manager: The lead_manager_code is the code of the lead manager, and the company_code is the code of the working company.
Senior_Manager: The senior_manager_code is the code of the senior manager, the lead_manager_code is the code of its lead manager, and the company_code is the code of the working company.

Manager: The manager_code is the code of the manager, the senior_manager_code is the code of its senior manager, the lead_manager_code is the code of its lead manager, and the company_code is the code of the working company.

Employee: The employee_code is the code of the employee, the manager_code is the code of its manager, the senior_manager_code is the code of its senior manager, the lead_manager_code is the code of its lead manager, and the company_code is the code of the working company.

select c.company_code, c.founder,
       count(distinct lm.lead_manager_code),
       count(distinct sm.senior_manager_code),
       count(distinct m.manager_code),
       count(distinct e.employee_code)
from Company c, Lead_Manager lm, Senior_Manager sm, Manager m, Employee e
where c.company_code = lm.company_code
  and lm.lead_manager_code = sm.lead_manager_code
  and sm.senior_manager_code = m.senior_manager_code
  and m.manager_code = e.manager_code
group by c.company_code, c.founder
order by c.company_code;

XXVI. Draw The Triangle 2

P(R) represents a pattern drawn by Julia in R rows. The following pattern represents P(5):

*
* *
* * *
* * * *
* * * * *

Write a query to print the pattern P(20).

set @row := 0;
select repeat('* ', @row := @row + 1) from information_schema.tables where @row < 20;
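The triangle-classification logic from problem XXI can be sanity-checked locally. SQLite has no IF() function, so the sketch below translates the same branch order to CASE (an equivalent rewrite for testing, not HackerRank's MySQL solution verbatim) and runs it against a few hand-picked rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE TRIANGLES (A INTEGER, B INTEGER, C INTEGER)")
con.executemany(
    "INSERT INTO TRIANGLES VALUES (?, ?, ?)",
    [(20, 20, 23), (20, 20, 20), (20, 21, 22), (13, 14, 30)],
)

# Same branch order as the MySQL IF() chain: degenerate check first,
# then equilateral, then isosceles, else scalene.
query = """
SELECT CASE
         WHEN A + B <= C OR B + C <= A OR A + C <= B THEN 'Not A Triangle'
         WHEN A = B AND B = C THEN 'Equilateral'
         WHEN A = B OR B = C OR A = C THEN 'Isosceles'
         ELSE 'Scalene'
       END
FROM TRIANGLES
"""
kinds = [row[0] for row in con.execute(query)]
```

Checking the degenerate case first matters: (13, 14, 30) has two equal-ish sides of nothing, but 13 + 14 <= 30, so it must report Not A Triangle before any of the shape branches are considered.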
[ { "code": null, "e": 518, "s": 172, "text": "Structured Query Language is one of the most important languages used in the industry. It’s one of the most sought languages desired by the employers as the volume of data is increasing, in order to access the humongous data from respective databases, it is important to know this skill which would help you retrieve, update and manipulate data." }, { "code": null, "e": 823, "s": 518, "text": "In this post, we will be covering all the solutions to SQL on the HackerRank platform. HackerRank is a platform for competitive coding. It is very important that you all first give it a try & brainstorm yourselves before having a look at the solutions. Let us code and find answers to our given problems." }, { "code": null, "e": 854, "s": 823, "text": "I. Revising the Select Query 1" }, { "code": null, "e": 977, "s": 854, "text": "Query all columns for all American cities in CITY with populations larger than 100000. The CountryCode for America is USA." }, { "code": null, "e": 990, "s": 977, "text": "Input Format" }, { "code": null, "e": 1030, "s": 990, "text": "The CITY table is described as follows:" }, { "code": null, "e": 1100, "s": 1030, "text": "SELECT * FROM CITY WHERE COUNTRYCODE = ‘USA’ AND POPULATION > 100000;" }, { "code": null, "e": 1132, "s": 1100, "text": "II. Revising the Select Query 2" }, { "code": null, "e": 1252, "s": 1132, "text": "Query the names of all American cities in CITY with populations larger than 120000. The CountryCode for America is USA." }, { "code": null, "e": 1265, "s": 1252, "text": "Input Format" }, { "code": null, "e": 1305, "s": 1265, "text": "The CITY table is described as follows:" }, { "code": null, "e": 1378, "s": 1305, "text": "SELECT NAME FROM CITY WHERE COUNTRYCODE = ‘USA’ AND POPULATION > 120000;" }, { "code": null, "e": 1394, "s": 1378, "text": "III. Select All" }, { "code": null, "e": 1458, "s": 1394, "text": "Query all columns (attributes) for every row in the CITY table." 
}, { "code": null, "e": 1471, "s": 1458, "text": "Input Format" }, { "code": null, "e": 1491, "s": 1471, "text": "SELECT * FROM CITY;" }, { "code": null, "e": 1508, "s": 1491, "text": "IV. Select By ID" }, { "code": null, "e": 1563, "s": 1508, "text": "Query all columns for a city in CITY with the ID 1661." }, { "code": null, "e": 1576, "s": 1563, "text": "Input Format" }, { "code": null, "e": 1612, "s": 1576, "text": "SELECT * FROM CITY WHERE ID = 1661;" }, { "code": null, "e": 1643, "s": 1612, "text": "V. Japanese Cities’ Attributes" }, { "code": null, "e": 1740, "s": 1643, "text": "Query all attributes of every Japanese city in the CITY table. The COUNTRYCODE for Japan is JPN." }, { "code": null, "e": 1753, "s": 1740, "text": "Input Format" }, { "code": null, "e": 1799, "s": 1753, "text": "SELECT * FROM CITY WHERE COUNTRYCODE = ‘JPN’;" }, { "code": null, "e": 1826, "s": 1799, "text": "VI. Japanese Cities’ Names" }, { "code": null, "e": 1922, "s": 1826, "text": "Query the names of all the Japanese cities in the CITY table. The COUNTRYCODE for Japan is JPN." }, { "code": null, "e": 1935, "s": 1922, "text": "Input Format" }, { "code": null, "e": 1984, "s": 1935, "text": "SELECT NAME FROM CITY WHERE COUNTRYCODE = ‘JPN’;" }, { "code": null, "e": 2019, "s": 1984, "text": "VII. Weather Observation Station 1" }, { "code": null, "e": 2074, "s": 2019, "text": "Query a list of CITY and STATE from the STATION table." }, { "code": null, "e": 2087, "s": 2074, "text": "Input Format" }, { "code": null, "e": 2130, "s": 2087, "text": "The STATION table is described as follows:" }, { "code": null, "e": 2204, "s": 2130, "text": "where LAT_N is the northern latitude and LONG_W is the western longitude." }, { "code": null, "e": 2237, "s": 2204, "text": "SELECT CITY, STATE FROM STATION;" }, { "code": null, "e": 2273, "s": 2237, "text": "VIII. 
Weather Observation Station 3" }, { "code": null, "e": 2425, "s": 2273, "text": "Query a list of CITY names from STATION with even ID numbers only. You may print the results in any order but must exclude duplicates from your answer." }, { "code": null, "e": 2438, "s": 2425, "text": "Input Format" }, { "code": null, "e": 2481, "s": 2438, "text": "The STATION table is described as follows:" }, { "code": null, "e": 2555, "s": 2481, "text": "where LAT_N is the northern latitude and LONG_W is the western longitude." }, { "code": null, "e": 2611, "s": 2555, "text": "SELECT DISTINCT CITY FROM STATION WHERE MOD(ID, 2) = 0;" }, { "code": null, "e": 2645, "s": 2611, "text": "IX. Weather Observation Station 4" }, { "code": null, "e": 2935, "s": 2645, "text": "Let N be the number of CITY entries in STATION, and let N’ be the number of distinct CITY names in STATION; query the value of N-N’ from STATION. In other words, find the difference between the total number of CITY entries in the table and the number of distinct CITY entries in the table." }, { "code": null, "e": 2948, "s": 2935, "text": "Input Format" }, { "code": null, "e": 2991, "s": 2948, "text": "The STATION table is described as follows:" }, { "code": null, "e": 3065, "s": 2991, "text": "where LAT_N is the northern latitude and LONG_W is the western longitude." }, { "code": null, "e": 3122, "s": 3065, "text": "SELECT COUNT(CITY) — COUNT(DISTINCT CITY) FROM STATION ;" }, { "code": null, "e": 3155, "s": 3122, "text": "X. Weather Observation Station 5" }, { "code": null, "e": 3420, "s": 3155, "text": "Query the two cities in STATION with the shortest and longest CITY names, as well as their respective lengths (i.e.: number of characters in the name). If there is more than one smallest or largest city, choose the one that comes first when ordered alphabetically." 
}, { "code": null, "e": 3433, "s": 3420, "text": "Input Format" }, { "code": null, "e": 3476, "s": 3433, "text": "The STATION table is described as follows:" }, { "code": null, "e": 3550, "s": 3476, "text": "where LAT_N is the northern latitude and LONG_W is the western longitude." }, { "code": null, "e": 3791, "s": 3550, "text": "SELECT * FROM (SELECT DISTINCT city, LENGTH(city) FROM station ORDER BY LENGTH(city) ASC, city ASC) WHERE ROWNUM = 1 UNIONSELECT * FROM (SELECT DISTINCT city, LENGTH(city) FROM station ORDER BY LENGTH(city) DESC, city ASC) WHERE ROWNUM = 1;" }, { "code": null, "e": 3825, "s": 3791, "text": "XI. Weather Observation Station 6" }, { "code": null, "e": 3953, "s": 3825, "text": "Query the list of CITY names starting with vowels (i.e., a, e, i, o, or u) from STATION. Your result cannot contain duplicates." }, { "code": null, "e": 3966, "s": 3953, "text": "Input Format" }, { "code": null, "e": 4009, "s": 3966, "text": "The STATION table is described as follows:" }, { "code": null, "e": 4083, "s": 4009, "text": "where LAT_N is the northern latitude and LONG_W is the western longitude." }, { "code": null, "e": 4211, "s": 4083, "text": "SELECT DISTINCT city FROM station WHERE city LIKE ‘A%’ OR city LIKE ‘E%’ OR city LIKE ‘I%’ OR city LIKE ‘O%’ OR city LIKE ‘U%’;" }, { "code": null, "e": 4246, "s": 4211, "text": "XII. Weather Observation Station 7" }, { "code": null, "e": 4363, "s": 4246, "text": "Query the list of CITY names ending with vowels (a, e, i, o, u) from STATION. Your result cannot contain duplicates." }, { "code": null, "e": 4376, "s": 4363, "text": "Input Format" }, { "code": null, "e": 4419, "s": 4376, "text": "The STATION table is described as follows:" }, { "code": null, "e": 4493, "s": 4419, "text": "where LAT_N is the northern latitude and LONG_W is the western longitude." 
}, { "code": null, "e": 4621, "s": 4493, "text": "SELECT DISTINCT city FROM station WHERE city LIKE ‘%a’ OR city LIKE ‘%e’ OR city LIKE ‘%i’ OR city LIKE ‘%o’ OR city LIKE ‘%u’;" }, { "code": null, "e": 4657, "s": 4621, "text": "XIII. Weather Observation Station 8" }, { "code": null, "e": 4823, "s": 4657, "text": "Query the list of CITY names from STATION which have vowels (i.e., a, e, i, o, and u) as both their first and last characters. Your result cannot contain duplicates." }, { "code": null, "e": 4836, "s": 4823, "text": "Input Format" }, { "code": null, "e": 4879, "s": 4836, "text": "The STATION table is described as follows:" }, { "code": null, "e": 4953, "s": 4879, "text": "where LAT_N is the northern latitude and LONG_W is the western longitude." }, { "code": null, "e": 5202, "s": 4953, "text": "SELECT DISTINCT city FROM (SELECT DISTINCT city FROM station WHERE city LIKE ‘A%’ OR city LIKE ‘E%’ OR city LIKE ‘I%’ OR city LIKE ‘O%’ OR city LIKE ‘U%’) WHERE city LIKE ‘%a’ OR city LIKE ‘%e’ OR city LIKE ‘%i’ OR city LIKE ‘%o’ OR city LIKE ‘%u’;" }, { "code": null, "e": 5237, "s": 5202, "text": "XIV. Weather Observation Station 9" }, { "code": null, "e": 5351, "s": 5237, "text": "Query the list of CITY names from STATION that does not start with vowels. Your result cannot contain duplicates." }, { "code": null, "e": 5364, "s": 5351, "text": "Input Format" }, { "code": null, "e": 5407, "s": 5364, "text": "The STATION table is described as follows:" }, { "code": null, "e": 5481, "s": 5407, "text": "where LAT_N is the northern latitude and LONG_W is the western longitude." }, { "code": null, "e": 5615, "s": 5481, "text": "SELECT DISTINCT city FROM station WHERE NOT (city LIKE ‘A%’ OR city LIKE ‘E%’ OR city LIKE ‘I%’ OR city LIKE ‘O%’ OR city LIKE ‘U%’);" }, { "code": null, "e": 5650, "s": 5615, "text": "XV. Weather Observation Station 10" }, { "code": null, "e": 5760, "s": 5650, "text": "Query the list of CITY names from STATION that do not end with vowels. 
Your result cannot contain duplicates." }, { "code": null, "e": 5773, "s": 5760, "text": "Input Format" }, { "code": null, "e": 5816, "s": 5773, "text": "The STATION table is described as follows:" }, { "code": null, "e": 5890, "s": 5816, "text": "where LAT_N is the northern latitude and LONG_W is the western longitude." }, { "code": null, "e": 6024, "s": 5890, "text": "SELECT DISTINCT city FROM station WHERE NOT (city LIKE ‘%a’ OR city LIKE ‘%e’ OR city LIKE ‘%i’ OR city LIKE ‘%o’ OR city LIKE ‘%u’);" }, { "code": null, "e": 6060, "s": 6024, "text": "XVI. Weather Observation Station 11" }, { "code": null, "e": 6205, "s": 6060, "text": "Query the list of CITY names from STATION that either do not start with vowels or do not end with vowels. Your result cannot contain duplicates." }, { "code": null, "e": 6218, "s": 6205, "text": "Input Format" }, { "code": null, "e": 6261, "s": 6218, "text": "The STATION table is described as follows:" }, { "code": null, "e": 6335, "s": 6261, "text": "where LAT_N is the northern latitude and LONG_W is the western longitude." }, { "code": null, "e": 6564, "s": 6335, "text": "SELECT DISTINCT city FROM station WHERE(NOT (city LIKE ‘A%’ OR city LIKE ‘E%’ OR city LIKE ‘I%’ OR city LIKE ‘O%’ OR city LIKE ‘U%’)OR NOT(city LIKE ‘%a’ OR city LIKE ‘%e’ OR city LIKE ‘%i’ OR city LIKE ‘%o’ OR city LIKE ‘%u’));" }, { "code": null, "e": 6601, "s": 6564, "text": "XVII. Weather Observation Station 12" }, { "code": null, "e": 6740, "s": 6601, "text": "Query the list of CITY names from STATION that do not start with vowels and do not end with vowels. Your result cannot contain duplicates." }, { "code": null, "e": 6753, "s": 6740, "text": "Input Format" }, { "code": null, "e": 6796, "s": 6753, "text": "The STATION table is described as follows:" }, { "code": null, "e": 6870, "s": 6796, "text": "where LAT_N is the northern latitude and LONG_W is the western longitude." 
}, { "code": null, "e": 7096, "s": 6870, "text": "SELECT DISTINCT city FROM station WHERE NOT((city LIKE ‘A%’ OR city LIKE ‘E%’ OR city LIKE ‘I%’ OR city LIKE ‘O%’ OR city LIKE ‘U%’)OR (city LIKE ‘%a’ OR city LIKE ‘%e’ OR city LIKE ‘%i’ OR city LIKE ‘%o’ OR city LIKE ‘%u’));" }, { "code": null, "e": 7124, "s": 7096, "text": "XVIII. Higher Than 75 Marks" }, { "code": null, "e": 7403, "s": 7124, "text": "Query the Name of any student in STUDENTS who scored higher than Marks. Order your output by the last three characters of each name. If two or more students both have names ending in the same last three characters (i.e.: Bobby, Robby, etc.), secondary sort them by ascending ID." }, { "code": null, "e": 7416, "s": 7403, "text": "Input Format" }, { "code": null, "e": 7460, "s": 7416, "text": "The STUDENTS table is described as follows:" }, { "code": null, "e": 7535, "s": 7460, "text": "The Name column only contains uppercase (A-Z) and lowercase (a-z) letters." }, { "code": null, "e": 7624, "s": 7535, "text": "SELECT name FROM students WHERE marks > 75 ORDER BY SUBSTR(name, LENGTH(name)-2, 3), id;" }, { "code": null, "e": 7644, "s": 7624, "text": "XIX. Employee Names" }, { "code": null, "e": 7769, "s": 7644, "text": "Write a query that prints a list of employee names (i.e.: the name attribute) from the Employee table in alphabetical order." }, { "code": null, "e": 7782, "s": 7769, "text": "Input Format" }, { "code": null, "e": 7865, "s": 7782, "text": "The Employee table containing employee data for a company is described as follows:" }, { "code": null, "e": 8046, "s": 7865, "text": "where employee_id is an employee’s ID number, the name is their name, months is the total number of months they’ve been working for the company, and salary is their monthly salary." }, { "code": null, "e": 8087, "s": 8046, "text": "SELECT name FROM employee ORDER BY name;" }, { "code": null, "e": 8111, "s": 8087, "text": "XX. 
Employee Attributes" }, { "code": null, "e": 8343, "s": 8111, "text": "Write a query that prints a list of employee names (i.e.: the name attribute) for employees in Employee having a salary greater than per month who have been employees for less than months. Sort your result by ascending employee_id." }, { "code": null, "e": 8356, "s": 8343, "text": "Input Format" }, { "code": null, "e": 8439, "s": 8356, "text": "The Employee table containing employee data for a company is described as follows:" }, { "code": null, "e": 8620, "s": 8439, "text": "where employee_id is an employee’s ID number, the name is their name, months is the total number of months they’ve been working for the company, and salary is their monthly salary." }, { "code": null, "e": 8704, "s": 8620, "text": "SELECT name FROM employee WHERE salary > 2000 AND months < 10 ORDER BY employee_id;" }, { "code": null, "e": 8728, "s": 8704, "text": "XXI. Types of Triangles" }, { "code": null, "e": 8900, "s": 8728, "text": "Write a query identifying the type of each record in the TRIANGLES table using its three side lengths. Output one of the following statements for each record in the table:" }, { "code": null, "e": 8959, "s": 8900, "text": "Equilateral: It’s a triangle with 3 sides of equal length." }, { "code": null, "e": 9016, "s": 8959, "text": "Isosceles: It’s a triangle with 2 sides of equal length." }, { "code": null, "e": 9076, "s": 9016, "text": "Scalene: It’s a triangle with 3 sides of differing lengths." }, { "code": null, "e": 9147, "s": 9076, "text": "Not A Triangle: The given values of A, B, and C don’t form a triangle." }, { "code": null, "e": 9160, "s": 9147, "text": "Input Format" }, { "code": null, "e": 9205, "s": 9160, "text": "The TRIANGLES table is described as follows:" }, { "code": null, "e": 9284, "s": 9205, "text": "Each row in the table denotes the lengths of each of a triangle’s three sides." 
}, { "code": null, "e": 9446, "s": 9284, "text": "select if(A+B<=C or B+C<=A or A+C<=B,’Not A Triangle’,if(A=B and B=C,’Equilateral’,if(A=B or B=C or A=C,’Isosceles’,’Scalene’)))from TRIANGLES as T;VII. The PADS" }, { "code": null, "e": 9461, "s": 9446, "text": "XXII. The PADS" }, { "code": null, "e": 9501, "s": 9461, "text": "Generate the following two result sets:" }, { "code": null, "e": 9763, "s": 9501, "text": "Query an alphabetically ordered list of all names in OCCUPATIONS, immediately followed by the first letter of each profession as a parenthetical (i.e.: enclosed in parentheses). For example: AnActorName(A), ADoctorName(D), AProfessorName(P), and ASingerName(S)." }, { "code": null, "e": 9912, "s": 9763, "text": "Query the number of occurrences of each occupation in OCCUPATIONS. Sort the occurrences in ascending order, and output them in the following format:" }, { "code": null, "e": 9967, "s": 9912, "text": "There are a total of [occupation_count] [occupation]s." }, { "code": null, "e": 10201, "s": 9967, "text": "where [occupation_count] is the number of occurrences of occupation in OCCUPATIONS and [occupation] is the lowercase occupation name. If more than one Occupation has the same [occupation_count], they should be ordered alphabetically." }, { "code": null, "e": 10284, "s": 10201, "text": "Note: There will be at least two entries in the table for each type of occupation." }, { "code": null, "e": 10297, "s": 10284, "text": "Input Format" }, { "code": null, "e": 10344, "s": 10297, "text": "The OCCUPATIONS table is described as follows:" }, { "code": null, "e": 10443, "s": 10344, "text": "The occupation will only contain one of the following values: Doctor, Professor, Singer, or Actor." 
}, { "code": null, "e": 10707, "s": 10443, "text": "SELECT concat(NAME,concat(“(“,concat(substr(OCCUPATION,1,1),”)”))) FROM OCCUPATIONS ORDER BY NAME ASC;SELECT “There are a total of “, count(OCCUPATION), concat(lower(occupation),”s.”) FROM OCCUPATIONS GROUP BY OCCUPATION ORDER BY count(OCCUPATION), OCCUPATION ASC" }, { "code": null, "e": 10726, "s": 10707, "text": "XXIII. Occupations" }, { "code": null, "e": 10955, "s": 10726, "text": "Pivot the Occupation column in OCCUPATIONS so that each Name is sorted alphabetically and displayed underneath its corresponding Occupation. The output column headers should be Doctor, Professor, Singer, and Actor, respectively." }, { "code": null, "e": 11033, "s": 10955, "text": "Note: Print NULL when there are no more names corresponding to an occupation." }, { "code": null, "e": 11046, "s": 11033, "text": "Input Format" }, { "code": null, "e": 11093, "s": 11046, "text": "The OCCUPATIONS table is described as follows:" }, { "code": null, "e": 11192, "s": 11093, "text": "The occupation will only contain one of the following values: Doctor, Professor, Singer, or Actor." }, { "code": null, "e": 11766, "s": 11192, "text": "set @r1=0, @r2=0, @r3=0, @r4=0;select min(Doctor), min(Professor), min(Singer), min(Actor)from(select case when Occupation=’Doctor’ then (@r1:=@r1+1) when Occupation=’Professor’ then (@r2:=@r2+1) when Occupation=’Singer’ then (@r3:=@r3+1) when Occupation=’Actor’ then (@r4:=@r4+1) end as RowNumber,case when Occupation=’Doctor’ then Name end as Doctor,case when Occupation=’Professor’ then Name end as Professor,case when Occupation=’Singer’ then Name end as Singer,case when Occupation=’Actor’ then Name end as Acto from OCCUPATIONS order by Name) Temp group by RowNumber;" }, { "code": null, "e": 11790, "s": 11766, "text": "XXIV. 
Binary Tree Nodes" }, { "code": null, "e": 11932, "s": 11790, "text": "You are given a table, BST, containing two columns: N and P, where N represents the value of a node in Binary Tree, and P is the parent of N." }, { "code": null, "e": 12060, "s": 11932, "text": "Write a query to find the node type of Binary Tree ordered by the value of the node. Output one of the following for each node:" }, { "code": null, "e": 12088, "s": 12060, "text": "Root: If node is root node." }, { "code": null, "e": 12116, "s": 12088, "text": "Leaf: If node is leaf node." }, { "code": null, "e": 12162, "s": 12116, "text": "Inner: If node is neither root nor leaf node." }, { "code": null, "e": 12279, "s": 12162, "text": "SELECT N, IF(P IS NULL,’Root’,IF((SELECT COUNT(*) FROM BST WHERE P=B.N)>0,’Inner’,’Leaf’)) FROM BST AS B ORDER BY N;" }, { "code": null, "e": 12298, "s": 12279, "text": "XXV. New Companies" }, { "code": null, "e": 12411, "s": 12298, "text": "Amber’s conglomerate corporation just acquired some new companies. Each of the companies follows this hierarchy:" }, { "code": null, "e": 12663, "s": 12411, "text": "Given the table schemas below, write a query to print the company_code, founder name, total number of lead managers, total number of senior managers, total number of managers, and total number of employees. Order your output by ascending company_code." }, { "code": null, "e": 12669, "s": 12663, "text": "Note:" }, { "code": null, "e": 12711, "s": 12669, "text": "The tables may contain duplicate records." }, { "code": null, "e": 12896, "s": 12711, "text": "The company_code is string, so the sorting should not be numeric. For example, if the company_codes are C_1, C_2, and C_10, then the ascending company_codes will be C_1, C_10, and C_2." 
}, { "code": null, "e": 12909, "s": 12896, "text": "Input Format" }, { "code": null, "e": 12952, "s": 12909, "text": "The following tables contain company data:" }, { "code": null, "e": 13048, "s": 12952, "text": "Company: The company_code is the code of the company and founder is the founder of the company." }, { "code": null, "e": 13174, "s": 13048, "text": "Lead_Manager: The lead_manager_code is the code of the lead manager, and the company_code is the code of the working company." }, { "code": null, "e": 13361, "s": 13174, "text": "Senior_Manager: The senior_manager_code is the code of the senior manager, the lead_manager_code is the code of its lead manager, and the company_code is the code of the working company." }, { "code": null, "e": 13586, "s": 13361, "text": "Manager: The manager_code is the code of the manager, the senior_manager_code is the code of its senior manager, the lead_manager_code is the code of its lead manager, and the company_code is the code of the working company." }, { "code": null, "e": 13859, "s": 13586, "text": "Employee: The employee_code is the code of the employee, the manager_code is the code of its manager, the senior_manager_code is the code of its senior manager, the lead_manager_code is the code of its lead manager, and the company_code is the code of the working company." }, { "code": null, "e": 14340, "s": 13859, "text": "select c.company_code, c.founder, count(distinct lm.lead_manager_code), count(distinct sm.senior_manager_code), count(distinct m.manager_code), count(distinct e.employee_code) from Company c, Lead_Manager lm, Senior_Manager sm, Manager m, Employee ewhere c.company_code = lm.company_code and lm.lead_manager_code = sm.lead_manager_code and sm.senior_manager_code = m.senior_manager_code and m.manager_code = e.manager_code group by c.company_code, c.founderorder by c.company_code" }, { "code": null, "e": 14366, "s": 14340, "text": "XXVI. 
Draw The Triangle 2" }, { "code": null, "e": 14457, "s": 14366, "text": "P(R) represents a pattern drawn by Julia in R rows. The following pattern represents P(5):" }, { "code": null, "e": 14459, "s": 14457, "text": "*" }, { "code": null, "e": 14463, "s": 14459, "text": "* *" }, { "code": null, "e": 14469, "s": 14463, "text": "* * *" }, { "code": null, "e": 14477, "s": 14469, "text": "* * * *" }, { "code": null, "e": 14487, "s": 14477, "text": "* * * * *" }, { "code": null, "e": 14529, "s": 14487, "text": "Write a query to print the pattern P(20)." } ]
Example of embedded CSS
Placing your CSS rules into an HTML document using the <style> element is called embedded CSS. This tag is placed inside the <head>...</head> tags. Rules defined this way apply to all the matching elements in the document. Following is an example of embedded CSS:

<!DOCTYPE html>
<html>
   <head>
      <style media = "all">
         body {
            background-color: orange;
         }
         h1 {
            color: yellow;
            margin-left: 30px;
         }
      </style>
   </head>
   <body>
      <h1>This is a heading</h1>
      <p>This is a paragraph.</p>
   </body>
</html>

The following is the attribute:
[ { "code": null, "e": 1304, "s": 1062, "text": "Place your CSS rules into an HTML document using the <style> element that is called embedded CSS. This tag is placed inside <head>...</head> tags. Rules defined using this syntax will be applied to all the elements available in the document." }, { "code": null, "e": 1369, "s": 1304, "text": "Following is the example of embed CSS based on the above syntax:" }, { "code": null, "e": 1700, "s": 1369, "text": "<!DOCTYPE html>\n<html>\n <head>\n <style media = \"all\">\n body {\n background-color: orange;\n }\n h1 {\n color: yellow;\n margin-left: 30px;\n }\n </style>\n </head>\n <body>\n <h1>This is a heading</h1>\n <p>This is a paragraph.</p>\n </body>\n</html>" }, { "code": null, "e": 1732, "s": 1700, "text": "The following is the attribute:" } ]
Water Collection | Practice | GeeksforGeeks
It is raining in Geek City. The height of the buildings in the city is given in an array. Calculate the amount of water that can be collected between all the buildings.

Example 1:
Input:
N = 5
Arr[] = {3, 0, 2, 0, 4}
Output: 7
Explanation: Geek city looks like
We can trap "3 units" of water between 3 and 2, "1 unit" on top of bar 2 and "3 units" between 2 and 4.

Example 2:
Input:
N = 12
Arr[] = [0,1,0,2,1,0,1,3,2,1,2,1]
Output: 6
Explanation: The structure is like below
Trap "1 unit" between the first 1 and 2, "4 units" between the first 2 and 3, and "1 unit" between the second last 1 and the last 2.

Your Task:
You don't need to read input or print anything. Your task is to complete the function maxWater() which takes the array of integers arr[] and n as input parameters and returns the amount of water collected.

Expected Time Complexity: O(N)
Expected Auxiliary Space: O(1)

Constraints:
1 ≤ N ≤ 10^5
1 ≤ Arr[i] ≤ 10^3

0
dangrio, 1 month ago

int left=0, right=n-1;
int water=0, maxLeft=0, maxRight=0;
while(left<=right){
    if(arr[left]<=arr[right]){
        if(arr[left]>=maxLeft) maxLeft = arr[left];
        else water += maxLeft-arr[left];
        left++;
    }else{
        if(arr[right]>=maxRight) maxRight = arr[right];
        else water += maxRight-arr[right];
        right--;
    }
}
return water;

0
akkeshri14042001, 3 months ago

int maxWater(int arr[], int n)
{
    // code here
    vector<int> left(n,0), right(n,0);
    left[0]=arr[0];
    right[n-1]=arr[n-1];
    for(int i=1;i<=n-1;i++){
        left[i]=max(left[i-1],arr[i]);
        right[n-1-i]=max(right[n-i],arr[n-1-i]);
    }
    int water=0;
    for(int i=0;i<n;i++){
        water+=min(left[i],right[i])-arr[i];
    }
    return water;
}

0
vishalja7719, 3 months ago

// C++ code
int left=0, right=n-1;
int result=0, maxleft=0, maxright=0;
while(left <= right){
    if(arr[left] <= arr[right])
    {
        if(arr[left] >= maxleft)
        {
            maxleft=arr[left];
        }
        else
        {
            result=result+maxleft-arr[left];
        }
        left++;
    }
    else
    {
        if(arr[right] >= maxright)
        {
            maxright=arr[right];
        }
        else
        {
            result=result+maxright-arr[right];
        }
        right--;
    }
}
return result;

0
arpita biswal, 3 months ago

Time taken: 0.7/3.8
Java

class Solution {
    int maxWater(int arr[], int n) {
        // code here
        int leftmax = arr[0], rightmax = arr[n-1];
        int mina[] = new int[n];
        int maxa[] = new int[n];
        mina[0] = leftmax;
        mina[n-1] = rightmax;
        maxa[0] = leftmax;
        maxa[n-1] = rightmax;
        for (int i = 1; i < n-1; i++) {
            if (arr[i] > leftmax) {
                mina[i] = arr[i];
                leftmax = arr[i];
            } else {
                mina[i] = leftmax;
            }
        }
        for (int i = n-2; i >= 1; i--) {
            if (arr[i] > rightmax) {
                maxa[i] = arr[i];
                rightmax = arr[i];
            } else {
                maxa[i] = rightmax;
            }
        }
        int trappedwater = 0;
        for (int i = 1; i < n-1; i++) {
            int waterlevel = Math.min(mina[i-1], maxa[i+1]);
            int waterCollect = (waterlevel - arr[i]);
            if (waterCollect > 0) {
                trappedwater = trappedwater + waterCollect;
            }
        }
        return trappedwater;
    }
}

0
paulshankhdeep, 3 months ago

SIMPLE JAVA SOLUTION

class Solution {
    int maxWater(int arr[], int n) {
        // code here
        int left[] = new int[n];
        int right[] = new int[n];
        left[0] = arr[0];
        for (int i = 1; i < n; i++) {
            left[i] = Math.max(arr[i], left[i-1]);
        }
        right[n-1] = arr[n-1];
        for (int i = n-2; i >= 0; i--) {
            right[i] = Math.max(arr[i], right[i+1]);
        }
        int ans = 0;
        for (int i = 0; i < n; i++) {
            ans = ans + (Math.min(left[i], right[i]) - arr[i]);
        }
        return ans;
    }
}

0
tahabasra92, 5 months ago

this one is the same as the Trapping Rain Water problem available on GfG
time required 0.2

class Solution {
public:
    int maxWater(int a[], int n) {
        int left[n];
        int right[n];
        left[0]=a[0];
        for(int i=1;i<n;i++){
            left[i]=max(a[i],left[i-1]);
        }
        right[n-1]=a[n-1];
        for(int i=n-2;i>=0;i--){
            right[i]=max(a[i],right[i+1]);
        }
        long int ans=0;
        for(int i=0;i<n;i++){
            ans=ans+(min(left[i],right[i])-a[i]);
        }
        return ans;
    }
};

0
imranwahid, 6 months ago

Easy C++ solution

0
wajid94571, 6 months ago

int maxWater(int arr[], int n){
    int ar1[n];
    int ar2[n];
    int wt = 0;
    ar1[0] = arr[0];
    for (int i = 1; i < n; i++)
        ar1[i] = max(ar1[i - 1], arr[i]);
    ar2[n - 1] = arr[n - 1];
    for (int i = n - 2; i >= 0; i--)
        ar2[i] = max(ar2[i + 1], arr[i]);
    for (int i = 0; i < n; i++)
        wt += min(ar1[i], ar2[i]) - arr[i];
    return wt;
}

0
mohdabuzaid0132

This comment was deleted.

0
Mohit Kesharwani, 8 months ago

class Solution {
public:
    int maxWater(int arr[], int n) {
        int sum=0;
        int left[n]={0};
        int right[n]={0};
        left[0]=arr[0];
        for(int i=1;i<n;i++){
            left[i]=max(left[i-1],arr[i]);
        }
        right[n-1]=arr[n-1];
        for(int i=n-2;i>=0;i--){
            right[i]=max(right[i+1],arr[i]);
        }
        for(int i=0;i<n;i++){
            sum=sum+(min(left[i],right[i])-arr[i]);
        }
        return sum;
    }
};
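The solutions above all use one of two ideas: prefix/suffix maximum arrays (O(N) extra space) or two pointers (O(1) extra space, matching the expected auxiliary space). A minimal Python sketch of the two-pointer variant, checked against the two examples from the problem statement:

```python
def max_water(arr):
    """Two-pointer trapped-water sketch: advance from the side with the
    smaller wall, tracking the tallest wall seen so far on each side.
    Water above the current position is bounded by that side's maximum."""
    left, right = 0, len(arr) - 1
    max_left = max_right = water = 0
    while left <= right:
        if arr[left] <= arr[right]:
            # Left side is the limiting wall.
            if arr[left] >= max_left:
                max_left = arr[left]
            else:
                water += max_left - arr[left]
            left += 1
        else:
            if arr[right] >= max_right:
                max_right = arr[right]
            else:
                water += max_right - arr[right]
            right -= 1
    return water

print(max_water([3, 0, 2, 0, 4]))                       # 7 (Example 1)
print(max_water([0, 1, 0, 2, 1, 0, 1, 3, 2, 1, 2, 1]))  # 6 (Example 2)
```

Each element is visited once, giving the required O(N) time with only three scalar accumulators.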
[ { "code": null, "e": 396, "s": 226, "text": "It is raining in Geek City. The height of the buildings in the city is given in an array. Calculate the amount of water that can be collected between all the buildings." }, { "code": null, "e": 409, "s": 398, "text": "Example 1:" }, { "code": null, "e": 598, "s": 409, "text": "Input: \nN = 5\nArr[] = {3, 0, 2, 0, 4}\nOutput: 7\nExplanation:\nGeek city looks like\n\nWe can trap \"3 units\" of water between\n3 and 2, \"1 unit\" on top of bar 2 and\n\"3 units\" between 2 and 4. " }, { "code": null, "e": 611, "s": 600, "text": "Example 2:" }, { "code": null, "e": 829, "s": 611, "text": "Input: \nN = 12\nArr[] = [0,1,0,2,1,0,1,3,2,1,2,1]\nOutput: 6\nExplanation:\nThe structure is like below\n\nTrap \"1 unit\" between first 1 and 2,\n\"4 units\" between first 2 and 3 and\n\"1 unit\" between second last 1 and last 2 \n" }, { "code": null, "e": 1114, "s": 831, "text": "Your Task: \nYou don't need to read input or print anything. Your task is to complete the function maxWater() which takes the array of integers arr[] and n as input parameters and returns the amount of water collected.\n\n\nExpected Time Complexity: O(N)\nExpected Auxiliary Space: O(1)" }, { "code": null, "e": 1158, "s": 1116, "text": "Constraints:\n1 ≤ N ≤ 105\n1 ≤ Arr[i] ≤ 103" }, { "code": null, "e": 1162, "s": 1160, "text": "0" }, { "code": null, "e": 1181, "s": 1162, "text": "dangrio1 month ago" }, { "code": null, "e": 1679, "s": 1181, "text": "int left=0, right=n-1; int water=0, maxLeft=0, maxRight=0; while(left<=right){ if(arr[left]<=arr[right]){ if(arr[left]>=maxLeft) maxLeft = arr[left]; else water += maxLeft-arr[left]; left++; }else{ if(arr[right]>=maxRight) maxRight = arr[right]; else water += maxRight-arr[right]; right--; } } return water;" }, { "code": null, "e": 1681, "s": 1679, "text": "0" }, { "code": null, "e": 1710, "s": 1681, "text": "akkeshri140420013 months ago" }, { "code": null, "e": 2171, "s": 1710, "text": "int maxWater(int arr[], int n) 
{ // code here vector<int>left(n,0),right(n,0); left[0]=arr[0]; right[n-1]=arr[n-1]; for(int i=1;i<=n-1;i++){ left[i]=max(left[i-1],arr[i]); right[n-1-i]=max(right[n-i],arr[n-1-i]); } int water=0; for(int i=0;i<n;i++){ water+=min(left[i],right[i])-arr[i]; } return water; }" }, { "code": null, "e": 2175, "s": 2173, "text": "0" }, { "code": null, "e": 2200, "s": 2175, "text": "vishalja77193 months ago" }, { "code": null, "e": 2214, "s": 2200, "text": "/////c++ code" }, { "code": null, "e": 2786, "s": 2214, "text": "int left=0,right=n-1; int result=0,maxleft=0,maxright=0; while(left <= right){ if(arr[left] <= arr[right])//// { if(arr[left] >= maxleft) { maxleft=arr[left]; } else { result=result+maxleft-arr[left]; } left++; } else { if(arr[right] >= maxright) { maxright=arr[right]; } else { result=result+maxright-arr[right]; } right--; } } return result; } " }, { "code": null, "e": 2788, "s": 2786, "text": "0" }, { "code": null, "e": 2814, "s": 2788, "text": "arpita biswal3 months ago" }, { "code": null, "e": 2833, "s": 2814, "text": "Time taken:0.7/3.8" }, { "code": null, "e": 2844, "s": 2833, "text": "Javascript" }, { "code": null, "e": 3245, "s": 2846, "text": "class Solution { int maxWater(int arr[], int n) { // code here int leftmax=arr[0],rightmax=arr[n-1]; int mina[]=new int[n]; int maxa[] = new int[n]; mina[0]=leftmax; mina[n-1]=rightmax; maxa[0]=leftmax; maxa[n-1]=rightmax; for(int i=1;i<n-1;i++){ if(arr[i]>leftmax){ mina[i]=arr[i]; leftmax=arr[i]; } else{ mina[i]=leftmax; }}" }, { "code": null, "e": 3583, "s": 3245, "text": "for(int i=n-2;i>=1;i--){ if(arr[i]>rightmax){ maxa[i]=arr[i]; rightmax=arr[i]; } else{ maxa[i]=rightmax; }}int trappedwater=0;for(int i=1;i<n-1;i++){ int waterlevel = Math.min(mina[i-1],maxa[i+1]); int waterCollect = (waterlevel-arr[i]); if(waterCollect >0){ trappedwater=trappedwater+waterCollect; } }return trappedwater;}}" }, { "code": null, "e": 3585, "s": 3583, "text": "0" }, { "code": null, "e": 3612, "s": 3585, "text": 
"paulshankhdeep3 months ago" }, { "code": null, "e": 3633, "s": 3612, "text": "SIMPLE JAVA SOLUTION" }, { "code": null, "e": 4158, "s": 3635, "text": "class Solution { int maxWater(int arr[], int n) { // code here int left[] = new int [n]; int right[] = new int [n]; left[0] = arr[0]; for (int i = 1; i<n; i++) { left[i] = Math.max(arr[i],left[i-1]); } right[n-1] = arr[n-1]; for (int i=n-2; i>=0; i--) { right[i] = Math.max(arr[i],right[i+1]); } int ans = 0; for (int i = 0; i<n; i++) { ans = ans + (Math.min(left[i],right[i])-arr[i]); } return ans; }}" }, { "code": null, "e": 4160, "s": 4158, "text": "0" }, { "code": null, "e": 4184, "s": 4160, "text": "tahabasra925 months ago" }, { "code": null, "e": 4250, "s": 4184, "text": "this one is same as trapping rain water problem available on gfg " }, { "code": null, "e": 4271, "s": 4252, "text": "time required 0.2" }, { "code": null, "e": 4703, "s": 4273, "text": "class Solution {public: int maxWater(int a[], int n) { int left[n]; int right[n]; left[0]=a[0]; for(int i=1;i<n;i++){ left[i]=max(a[i],left[i-1]); } right[n-1]=a[n-1]; for(int i=n-2;i>=0;i--){ right[i]=max(a[i],right[i+1]); } long int ans=0; for(int i=0;i<n;i++){ ans=ans+(min(left[i],right[i])-a[i]); } return ans; }};" }, { "code": null, "e": 4709, "s": 4707, "text": "0" }, { "code": null, "e": 4732, "s": 4709, "text": "imranwahid6 months ago" }, { "code": null, "e": 4750, "s": 4732, "text": "Easy C++ solution" }, { "code": null, "e": 4752, "s": 4750, "text": "0" }, { "code": null, "e": 4775, "s": 4752, "text": "wajid945716 months ago" }, { "code": null, "e": 4835, "s": 4775, "text": "int maxWater(int arr[], int n){ int ar1[n]; int ar2[n];" }, { "code": null, "e": 5041, "s": 4835, "text": " int wt = 0; ar1[0] = arr[0]; for (int i = 1; i < n; i++) ar1[i] = max(ar1[i - 1], arr[i]); ar2[n - 1] = arr[n - 1]; for (int i = n - 2; i >= 0; i--) ar2[i] = max(ar2[i + 1], arr[i]);" }, { "code": null, "e": 5128, "s": 5041, "text": " for (int i = 0; i < n; i++) wt += 
min(ar1[i], ar2[i]) - arr[i]; return wt;}" }, { "code": null, "e": 5130, "s": 5128, "text": "0" }, { "code": null, "e": 5146, "s": 5130, "text": "mohdabuzaid0132" }, { "code": null, "e": 5172, "s": 5146, "text": "This comment was deleted." }, { "code": null, "e": 5174, "s": 5172, "text": "0" }, { "code": null, "e": 5203, "s": 5174, "text": "Mohit Kesharwani8 months ago" }, { "code": null, "e": 5220, "s": 5203, "text": "Mohit Kesharwani" }, { "code": null, "e": 5359, "s": 5220, "text": "class Solution {public: int maxWater(int arr[], int n) { int sum=0; int left[n]={0}; int right[n]={0};" }, { "code": null, "e": 5580, "s": 5359, "text": " left[0]=arr[0]; for(int i=1;i<n;i++){ left[i]=\"{max(left[i-1],arr[i])};\" }=\"\" right[n-1]=\"arr[n-1];\" for(int=\"\" i=\"n-2;i\">=0;i--){ right[i]={max(right[i+1],arr[i])}; }" }, { "code": null, "e": 5685, "s": 5580, "text": " for(int i=0;i<n;i++){ sum=\"sum+(min(left[i],right[i])-arr[i]);\" }=\"\" return=\"\" sum;=\"\" }=\"\">" }, { "code": null, "e": 5831, "s": 5685, "text": "We strongly recommend solving this problem on your own before viewing its editorial. Do you still\n want to view the editorial?" }, { "code": null, "e": 5867, "s": 5831, "text": " Login to access your submissions. " }, { "code": null, "e": 5877, "s": 5867, "text": "\nProblem\n" }, { "code": null, "e": 5887, "s": 5877, "text": "\nContest\n" }, { "code": null, "e": 5950, "s": 5887, "text": "Reset the IDE using the second button on the top right corner." }, { "code": null, "e": 6098, "s": 5950, "text": "Avoid using static/global variables in your code as your code is tested against multiple test cases and these tend to retain their previous values." }, { "code": null, "e": 6306, "s": 6098, "text": "Passing the Sample/Custom Test cases does not guarantee the correctness of code. On submission, your code is tested against multiple test cases consisting of all possible corner cases and stress constraints." 
}, { "code": null, "e": 6412, "s": 6306, "text": "You can access the hints to get an idea about what is expected of you as well as the final solution code." } ]
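The two-pointer approach posted by several commenters above runs in O(N) time and O(1) auxiliary space, matching the expected complexities. Here is a compact, self-contained sketch of the same idea in Python (the judge itself expects C++/Java, so this is only an illustration):

```python
def max_water(arr):
    """Water trapped between buildings of heights `arr` (two-pointer scan)."""
    left, right = 0, len(arr) - 1
    left_max = right_max = water = 0
    while left <= right:
        # The smaller side is bounded by its own running maximum, because a
        # wall at least as tall exists somewhere on the other side.
        if arr[left] <= arr[right]:
            if arr[left] >= left_max:
                left_max = arr[left]
            else:
                water += left_max - arr[left]
            left += 1
        else:
            if arr[right] >= right_max:
                right_max = arr[right]
            else:
                water += right_max - arr[right]
            right -= 1
    return water
```

On the two examples from the problem statement, max_water([3, 0, 2, 0, 4]) returns 7 and max_water([0, 1, 0, 2, 1, 0, 1, 3, 2, 1, 2, 1]) returns 6.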
BabelJS - Working with BabelJS and Gulp
In this chapter, we will create a project setup using Babel and Gulp. Gulp is a task runner that uses Node.js as a platform. Gulp will run the tasks that will transpile JavaScript files from ES6 to ES5, and once done, will start the server to test the changes. We have used Babel 6 in the project setup. In case you want to switch to Babel 7, install the required packages of Babel using @babel/babel-package-name. We will create the project first using npm commands and install the required packages to start with.
npm init
We have created a folder called gulpbabel. Further, we will install gulp and other required dependencies.
npm install gulp --save-dev
npm install gulp-babel --save-dev
npm install gulp-connect --save-dev
npm install babel-preset-env --save-dev
npm install babel-core --save-dev
We will add the preset environment details to the .babelrc file as follows.
gulpfile.js
var gulp = require('gulp');
var babel = require('gulp-babel');
var connect = require("gulp-connect");
gulp.task('build', () => {
   gulp.src('src/./*.js')
      .pipe(babel())
      .pipe(gulp.dest('./dev'))
});
gulp.task('watch', () => {
   gulp.watch('./*.js', ['build']);
});
gulp.task("connect", function () {
   connect.server({
      root: ".",
      livereload: true
   });
});
gulp.task('start', ['build', 'watch', 'connect']);
We have created three tasks in gulp: ['build', 'watch', 'connect']. All the js files available in the src folder will be converted to ES5 using Babel as follows −
gulp.task('build', () => {
   gulp.src('src/./*.js')
      .pipe(babel())
      .pipe(gulp.dest('./dev'))
});
The final changes are stored in the dev folder. Babel uses preset details from .babelrc. In case you want to change to some other preset, you can change the details in the .babelrc file. Now we will create a .js file in the src folder using ES6 JavaScript and run the gulp start command to execute the changes.
src/main.js
class Person {
   constructor(fname, lname, age, address) {
      this.fname = fname;
      this.lname = lname;
      this.age = age;
      this.address = address;
   }
   get fullname() {
      return this.fname + "-" + this.lname;
   }
}
Command: gulp start
dev/main.js
This is transpiled using babel −
"use strict";
var _createClass = function () {
   function defineProperties(target, props) {
      for (var i = 0; i < props.length; i++) {
         var descriptor = props[i];
         descriptor.enumerable = descriptor.enumerable || false;
         descriptor.configurable = true;
         if ("value" in descriptor) descriptor.writable = true;
         Object.defineProperty(target, descriptor.key, descriptor);
      }
   }
   return function (Constructor, protoProps, staticProps) {
      if (protoProps) defineProperties(Constructor.prototype, protoProps);
      if (staticProps) defineProperties(Constructor, staticProps);
      return Constructor;
   };
}();
function _classCallCheck(instance, Constructor) {
   if (!(instance instanceof Constructor)) {
      throw new TypeError("Cannot call a class as a function");
   }
}
var Person = function () {
   function Person(fname, lname, age, address) {
      _classCallCheck(this, Person);
      this.fname = fname;
      this.lname = lname;
      this.age = age;
      this.address = address;
   }
   _createClass(Person, [{
      key: "fullname",
      get: function get() {
         return this.fname + "-" + this.lname;
      }
   }]);
   return Person;
}();
Index.html
This is done using transpiled dev/main.js −
<html>
   <head></head>
   <body>
      <script type="text/javascript" src="dev/main.js"></script>
      <h1 id="displayname"></h1>
      <script type="text/javascript">
         var a = new Person("Siya", "Kapoor", "15", "Mumbai");
         var studentdet = a.fullname;
         document.getElementById("displayname").innerHTML = studentdet;
      </script>
   </body>
</html>
Output
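One detail worth noting: the .babelrc file referenced above ("We will add the preset environment details to .babelrc file as follows") is never shown. Since babel-preset-env is the preset installed earlier in this chapter, a minimal .babelrc would presumably be:

```json
{
   "presets": ["env"]
}
```

If you later switch to Babel 7 with @babel/preset-env, the entry would presumably become "@babel/preset-env" instead.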
[ { "code": null, "e": 2506, "s": 2095, "text": "In this chapter, we will create project setup using babel and gulp. Gulp is a task runner that uses Node.js as a platform. Gulp will run the tasks that will transpile JavaScript files from es6 to es5 and once done will start the server to test the changes. We have used babel 6 in the project setup. In case you want to switch to babel 7, install the required packages of babel using @babel/babel-package-name." }, { "code": null, "e": 2607, "s": 2506, "text": "We will create the project first using npm commands and install the required packages to start with." }, { "code": null, "e": 2617, "s": 2607, "text": "npm init\n" }, { "code": null, "e": 2723, "s": 2617, "text": "We have created a folder called gulpbabel. Further, we will install gulp and other required dependencies." }, { "code": null, "e": 2896, "s": 2723, "text": "npm install gulp --save-dev\nnpm install gulp-babel --save-dev\nnpm install gulp-connect --save-dev\nnpm install babel-preset-env --save-dev\nnpm install babel-core --save-dev\n" }, { "code": null, "e": 2967, "s": 2896, "text": "We will add the Preset environment details to .babelrc file as follows" }, { "code": null, "e": 2979, "s": 2967, "text": "gulpfile.js" }, { "code": null, "e": 3416, "s": 2979, "text": "var gulp =require('gulp');\nvar babel =require('gulp-babel');\nvar connect = require(\"gulp-connect\");\ngulp.task('build', () => {\n gulp.src('src/./*.js')\n .pipe(babel())\n .pipe(gulp.dest('./dev'))\n});\n\ngulp.task('watch', () => {\n gulp.watch('./*.js', ['build']);\n});\n\ngulp.task(\"connect\", function () {\n connect.server({\n root: \".\",\n livereload: true\n });\n});\n\ngulp.task('start', ['build', 'watch', 'connect']);" }, { "code": null, "e": 3572, "s": 3416, "text": "We have created three task in gulp, [‘build’,’watch’,’connect’]. 
All the js files available in src folder will be converted to es5 using babel as follows −" }, { "code": null, "e": 3682, "s": 3572, "text": "gulp.task('build', () => {\n gulp.src('src/./*.js')\n .pipe(babel())\n .pipe(gulp.dest('./dev'))\n});" }, { "code": null, "e": 3866, "s": 3682, "text": "The final changes are stored in the dev folder. Babel uses presets details from .babelrc. In case you want to change to some other preset, you can change the details in .babelrc file." }, { "code": null, "e": 3979, "s": 3866, "text": "Now will create a .js file in src folder using es6 javascript and run gulp start command to execute the changes." }, { "code": null, "e": 3991, "s": 3979, "text": "src/main.js" }, { "code": null, "e": 4229, "s": 3991, "text": "class Person {\n constructor(fname, lname, age, address) {\n this.fname = fname;\n this.lname = lname;\n this.age = age;\n this.address = address;\n }\n\n get fullname() {\n return this.fname +\"-\"+this.lname;\n }\n}" }, { "code": null, "e": 4249, "s": 4229, "text": "Command: gulp start" }, { "code": null, "e": 4261, "s": 4249, "text": "dev/main.js" }, { "code": null, "e": 4294, "s": 4261, "text": "This is transpiled using babel −" }, { "code": null, "e": 5532, "s": 4294, "text": "\"use strict\";\n\nvar _createClass = function () {\n function defineProperties(target, props) { \n for (var i = 0; i <props.length; i++) { \n var descriptor = props[i]; \n descriptor.enumerable = descriptor.enumerable || false; \n descriptor.configurable = true; \n if (\"value\" in descriptor) descriptor.writable = true; \n Object.defineProperty(target, descriptor.key, descriptor); \n } \n } \n return function (Constructor, protoProps, staticProps) { \n if (protoProps) defineProperties(Constructor.prototype, protoProps); \n if (staticProps) defineProperties(Constructor, staticProps); \n return Constructor; \n }; \n}();\n\nfunction _classCallCheck(instance, Constructor) {\n if (!(instance instanceof Constructor)) { \n throw new 
TypeError(\"Cannot call a class as a function\"); \n } \n}\n\nvar Person = function () {\n function Person(fname, lname, age, address) {\n _classCallCheck(this, Person);\n this.fname = fname;\n this.lname = lname;\n this.age = age;\n this.address = address;\n }\n _createClass(Person, [{\n key: \"fullname\",\n get: function get() {\n return this.fname + \"-\" + this.lname;\n }\n }]);\n\n return Person;\n}();" }, { "code": null, "e": 5543, "s": 5532, "text": "Index.html" }, { "code": null, "e": 5587, "s": 5543, "text": "This is done using transpiled dev/main.js −" }, { "code": null, "e": 5966, "s": 5587, "text": "<html>\n <head></head>\n <body>\n <script type=\"text/javascript\" src=\"dev/main.js\"></script>\n <h1 id=\"displayname\"></h1>\n <script type=\"text/javascript\">\n var a = new Student(\"Siya\", \"Kapoor\", \"15\", \"Mumbai\");\n var studentdet = a.fullname;\n document.getElementById(\"displayname\").innerHTML = studentdet;\n </script>\n </body>\n</html>" }, { "code": null, "e": 5973, "s": 5966, "text": "Output" }, { "code": null, "e": 5980, "s": 5973, "text": " Print" }, { "code": null, "e": 5991, "s": 5980, "text": " Add Notes" } ]
ROS Autonomous SLAM using Rapidly Exploring Random Tree (RRT) | by Mohamed Fazil | Towards Data Science
The Robot Operating System (ROS) has become a game-changer in the field of modern robotics, a field with extremely high potential that is yet to be explored by the upcoming generation of more skilled open-source robotics communities. One of the greatest advantages of ROS is that it allows seamless integration of different processes, aka nodes, in a robot system, which can also be reused in other systems without any fuss with the support of the open-source community. One of the most popular applications of ROS is SLAM (Simultaneous Localization and Mapping). The objective of SLAM in mobile robotics is constructing and updating the map of an unexplored environment with help of the available sensors attached to the robot, which will be used for exploring. In this demo, we will see the implementation of the prebuilt ROS packages, i.e. the Navigation stack and Gmapping, to autonomously map an unexplored house using SLAM, using the Rapidly Exploring Random Tree (RRT) algorithm to make the robot cover all the regions of the unknown environment. A Rapidly-exploring Random Tree (RRT) is a data structure and algorithm that is designed for efficiently searching nonconvex high-dimensional spaces. RRTs are constructed incrementally in a way that quickly reduces the expected distance of a randomly-chosen point to the tree. RRTs are particularly suited for path planning problems that involve obstacles and differential constraints (nonholonomic or kinodynamic). source We will be using the RRT algorithm for the robot to path plan to all the reachable distant endpoints within the sensor’s vicinity, aka frontier points, which in turn makes the robot map new regions continuously using SLAM as it tries to reach its new distant endpoints. Applying RRT to mobile robots in such a way enables us to create a self-exploring autonomous robot with no human intervention required.
The nature of the algorithm tends to be biased towards unexplored regions, which is very beneficial for environment-exploring tasks. More in-depth information on this strategy and its flow can be found in this publication: Autonomous robotic exploration based on multiple rapidly-exploring randomized trees. The Gazebo Simulator is a well-designed standalone robot simulator which can be used to rapidly test algorithms, design robots, perform regression testing, and train AI systems using realistic scenarios. We will be using the Gazebo environment with a house model for our robot to explore and produce a map of the house. For this demo, feel free to download my pre-built ROS package ros_autonomous_slam from my Github repository. This repository consists of a ROS package that uses the Navigation Stack to autonomously explore an unknown environment with help of GMAPPING and constructs a map of the explored environment. Finally, a path planning algorithm from the Navigation stack is used in the newly generated map to reach the goal. The Gazebo simulator is used for the simulation of the Turtlebot3 Waffle Pi robot. Various algorithms have been integrated for autonomously exploring the region and constructing the map with help of the 360-degree Lidar sensor. Different environments can be swapped within launch files to generate the required map of the environment. The current most efficient algorithm used for autonomous exploration is the Rapidly Exploring Random Tree (RRT) algorithm. The RRT algorithm is implemented using the package from rrt_exploration, which was created to support the Kobuki robots; I further modified the source files and built it for the Turtlebot3 robots in this package.
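The walkthrough below treats RRT as a black box provided by rrt_exploration. For intuition, here is a minimal, ROS-free sketch of the tree-growth loop described above (all function names are illustrative, not taken from the actual package): sample a random point in a rectangular region, find the nearest tree node, and step a bounded distance toward the sample.

```python
import math
import random

def nearest(nodes, point):
    """Index of the existing tree node closest to `point`."""
    return min(range(len(nodes)),
               key=lambda i: math.hypot(nodes[i][0][0] - point[0],
                                        nodes[i][0][1] - point[1]))

def steer(src, dst, step):
    """Move from `src` at most `step` units toward `dst`."""
    d = math.hypot(dst[0] - src[0], dst[1] - src[1])
    if d <= step:
        return dst
    t = step / d
    return (src[0] + t * (dst[0] - src[0]), src[1] + t * (dst[1] - src[1]))

def build_rrt(start, bounds, iterations=200, step=0.5, seed=1):
    """Grow an RRT inside the rectangle bounds = (xmin, xmax, ymin, ymax).

    Each node is (point, parent_index); the root's parent is -1. A real
    exploration planner would additionally reject edges that cross
    obstacles and flag nodes landing in unmapped space as frontier points.
    """
    rng = random.Random(seed)
    xmin, xmax, ymin, ymax = bounds
    nodes = [(start, -1)]
    for _ in range(iterations):
        sample = (rng.uniform(xmin, xmax), rng.uniform(ymin, ymax))
        i = nearest(nodes, sample)
        nodes.append((steer(nodes[i][0], sample, step), i))
    return nodes
```

Because samples are drawn uniformly over the whole rectangle, growth is naturally biased toward large unexplored areas, which is the property the exploration strategy relies on.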
Step 1: Place the Robot in the Environment within Gazebo
Step 2: Perform Autonomous exploration of the environment and generate the Map
Step 3: Perform path planning and go to goal in the environment
Before starting to execute the steps, please make sure you have the prerequisites and setup for this project demo completed successfully. There are three setup parts (Gazebo ROS Installation, Turtlebot3 Packages, and Navigation Stack Installation). I used Ubuntu 18 OS with the ROS Melodic version. Check the ROS official documentation for the installation: ROS Installation.
The main Gazebo Simulator, which is a stand-alone application, must be installed. Go through the documentation: Gazebo Installation. Test the working of Gazebo and its version with
gazebo
which gzserver
which gzclient
After installing Gazebo, the Gazebo ROS package must be installed separately:
sudo apt-get install ros-melodic-gazebo-ros-pkgs ros-melodic-gazebo-ros-control
Replace melodic with your version of ROS everywhere in this tutorial.
The Turtlebot3 ROS packages can either be downloaded and built from source files in your workspace or else directly installed from the Linux terminal. Either way works; I would recommend doing both, as it installs all the missing dependencies required automatically.
Direct Installation
source /opt/ros/melodic/setup.bash
sudo apt-get install ros-melodic-turtlebot3-msgs
sudo apt-get install ros-melodic-turtlebot3
Building the packages
cd catkin_ws/src
git clone -b melodic-devel https://github.com/ROBOTIS-GIT/turtlebot3
git clone -b melodic-devel https://github.com/ROBOTIS-GIT/turtlebot3_simulations
cd ..
catkin_make
source /devel/setup.bash
The Navigation stack can also be downloaded as source files to your workspace and built.
sudo apt-get install ros-melodic-navigation
cd catkin_ws/src
git clone -b melodic-devel https://github.com/ros-planning/navigation
cd ..
catkin_make
source /devel/setup.bash
Set your environment variable to the model robot to be used.
export TURTLEBOT3_MODEL=waffle_pi
source ~/.bashrc
Execute the given launch to open Gazebo with the given world file and place the Turtlebot3 Waffle Pi robot model in it.
roslaunch ros_autonomous_slam turtlebot3_world.launch
Keep this process running always and execute other commands in a different terminal.
roslaunch ros_autonomous_slam autonomous_explorer.launch
Run the Autonomous Explorer launch file, which executes two tasks for us at the same time. It starts the SLAM node in the Navigation stack with a custom modified RVIZ file to monitor the mapping of the environment. It simultaneously starts the Autonomous explorer, which is a Python-based controller that moves the robot around, grazing all the areas, which helps the SLAM node to complete the mapping. The default algorithm used for exploration is the RRT algorithm. I have also created an explorer method that uses the Bug Wall following algorithm for exploration, which can be tested by adding the explorer argument to the launch, which takes [RRT,BUG_WALLFOLLOW]. The RRT exploration requires a rectangular region around the robot to be defined in the RVIZ window using four points, plus a starting point for exploration within the known region of the robot. The total five points must be defined in the exact sequence given below using the RVIZ Publish Points option. Monitor the mapping process in the RVIZ window and sit back and relax until our robot finishes mapping XD.
Once you are satisfied with the constructed map, save the map.
rosrun map_server map_saver -f my_map
The my_map.pgm and my_map.yaml get saved in your home directory. Move these files to the package’s maps folder (catkin_ws\src\ros_autonomous_slam\maps). Now your new map, which is basically an occupancy grid, is constructed! In case the autonomous exploration fails, you can manually control the robot in the environment using the keyboard with a separate launch execution given below. You can also manually explore and construct the map like a game.
roslaunch turtlebot3_teleop turtlebot3_teleop_key.launch
We will be using the Navigation stack of ROS to perform the path planning and go to goal using /move_base/goal actions. The launch execution given below opens up an RVIZ window which shows the robot location within the previously constructed map.
roslaunch ros_autonomous_slam turtlebot3_navigation.launch
The RVIZ window shows the robot’s local map construction using its laser sensors with respect to the global map previously constructed in Step 2 with help of a cost map. First, estimate the initial pose, i.e. locating the real robot location with respect to the map. This can be set in the RVIZ window itself using the 2D Pose Estimate option, pointing and dragging the arrow at the current robot’s location and orientation. A GOAL point can be set in the RVIZ window itself using the 2D Nav Goal option, which will be available in the top window tab. This allows you to set a goal point in the map within the RVIZ environment; then the robot automatically performs the path planning and starts to move along its path. The ROS Navigation Stack requires tuning its parameters, which work differently for different environment types, to get optimal SLAM and path-planning performance. Here is ROS’s Navigation Stack parameter tuning guide for Turtlebot3: Turtlebot3 Navigation Parameter Tuning Guide
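Once saved, my_map.pgm can be sanity-checked without any ROS tooling, since it is a plain binary (P5) PGM image. By default, map_saver encodes free cells as gray level 254, occupied cells as 0, and unknown cells as 205; those defaults are an assumption here, so cross-check them against your my_map.yaml. A small Python sketch:

```python
def read_pgm(data):
    """Parse a binary (P5) PGM byte string into (width, height, pixel_bytes)."""
    tokens, i = [], 0
    while len(tokens) < 4:              # magic, width, height, maxval
        while data[i:i + 1].isspace():  # skip whitespace between header tokens
            i += 1
        if data[i:i + 1] == b'#':       # comment line: skip to end of line
            while data[i:i + 1] not in (b'\n', b''):
                i += 1
            continue
        j = i
        while not data[j:j + 1].isspace():
            j += 1
        tokens.append(data[i:j])
        i = j
    if tokens[0] != b'P5':
        raise ValueError('not a binary PGM')
    width, height = int(tokens[1]), int(tokens[2])
    return width, height, data[i + 1:]  # one whitespace byte follows maxval

def occupancy_counts(pixels, free=254, occupied=0, unknown=205):
    """Tally grid cells using map_saver's default gray levels."""
    counts = {'free': 0, 'occupied': 0, 'unknown': 0, 'other': 0}
    for v in pixels:
        if v == free:
            counts['free'] += 1
        elif v == occupied:
            counts['occupied'] += 1
        elif v == unknown:
            counts['unknown'] += 1
        else:
            counts['other'] += 1
    return counts
```

Running occupancy_counts on a freshly saved map gives a quick feel for how much of the house was actually explored; a large 'unknown' share suggests the exploration stopped early.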
[ { "code": null, "e": 648, "s": 171, "text": "The Robot Operating System(ROS) has become a game-changer in the field of modern Robotics which has an extremely high potential in this field which is yet to be explored with the upcoming generation of more skilled open-source robotics communities. One of the greatest advantages of ROS is it allows seamless integration of different processes aka nodes in a robot system which can also be reused in other systems without any fuss with the support of the opensource community." }, { "code": null, "e": 945, "s": 648, "text": "One of the most popular applications of ROS is SLAM(Simultaneous Localization and Mapping). The objective of the SLAM in mobile robotics is constructing and updating the map of an unexplored environment with help of the available sensors attached to the robot which is will be used for exploring." }, { "code": null, "e": 1229, "s": 945, "text": "In this demo, we will see the implementation of the prebuilt ROS packages i.e Navigation stack and Gmapping to map an unexplored house using SLAM autonomously using the Randomly Exploring Random Tree (RRT) algorithm to make the robot cover all the regions of the unknown environment." }, { "code": null, "e": 1652, "s": 1229, "text": "A Rapidly-exploring Random Tree (RRT) is a data structure and algorithm that is designed for efficiently searching nonconvex high-dimensional spaces. RRTs are constructed incrementally in a way that quickly reduces the expected distance of a randomly-chosen point to the tree. RRTs are particularly suited for path planning problems that involve obstacles and differential constraints (nonholonomic or kinodynamic). source" }, { "code": null, "e": 2362, "s": 1652, "text": "We will be using the RRT algorithm for the robot to path plan to all the reachable distant endpoints within the sensor’s vicinity aka frontier points which in return makes the robot map new regions continuously using SLAM as it tries to reach its new distant endpoints. 
Applying RRT to mobile robots in such a way enables us to create a self-exploring autonomous robot with no human interventions required. The nature of the algorithm tends to be biased towards unexplored regions which become very beneficial to environment exploring tasks. More in-depth information and flow of this strategy can found in this publication: Autonomous robotic exploration based on multiple rapidly-exploring randomized trees." }, { "code": null, "e": 2681, "s": 2362, "text": "The Gazebo Simulator is a well-designed standalone robot simulator which can be used to rapidly test algorithms, design robots, perform regression testing, and train AI system using realistic scenarios. We will be using the Gazebo environment with a house model for our robot to explore and produce a map of the house." }, { "code": null, "e": 2789, "s": 2681, "text": "For this demo feel free to download my pre-built ROS package ros_autonomous_slam from my Github repository." }, { "code": null, "e": 3771, "s": 2789, "text": "This repository consists of a ROS package that uses the Navigation Stack to autonomously explore an unknown environment with help of GMAPPING and constructs a map of the explored environment. Finally, a path planning algorithm from the Navigation stack is used in the newly generated map to reach the goal. The Gazebo simulator is used for the simulation of the Turtlebot3 Waffle Pi robot. Various algorithms have been integrated for Autonomously exploring the region and constructing the map with help of the 360-degree Lidar sensor. Different environments can be swapped within launch files to generate the required map of the environment. The current most efficient algorithm used for autonomous exploration is the Rapidly Exploring Random Tree (RRT) algorithm. 
The RRT algorithm is implemented using the package from rrt_exploration which was created to support the Kobuki robots which I further modified the source files and built it for the Turtlebot3 robots in this package." }, { "code": null, "e": 3828, "s": 3771, "text": "Step 1: Place the Robot in the Environment within Gazebo" }, { "code": null, "e": 3907, "s": 3828, "text": "Step 2: Perform Autonomous exploration of the environment and generate the Map" }, { "code": null, "e": 3971, "s": 3907, "text": "Step 3: Perform path planning and go to goal in the environment" }, { "code": null, "e": 4219, "s": 3971, "text": "Before starting to execute the steps please make sure you have the prerequisites and setup for this project demo completed successfully. There are three setup parts (Gazebo ROS Installation, Turtlebot3 Packages, and Navigation Stack Installation)." }, { "code": null, "e": 4340, "s": 4219, "text": "I used Ubuntu 18 OS with ROS Melodic Version. Check the ROS official documentation for the Installation ROS Installation" }, { "code": null, "e": 4518, "s": 4340, "text": "The main Gazebo Simulator which is a stand-alone application must be installed. Go through the documentation Gazebo Installation. Test the working of Gazebo and its version with" }, { "code": null, "e": 4553, "s": 4518, "text": "gazebowhich gzserverwhich gzclient" }, { "code": null, "e": 4634, "s": 4553, "text": "After Installing the Gazebo, the Gazebo ROS Package must be installed separately" }, { "code": null, "e": 4714, "s": 4634, "text": "sudo apt-get install ros-melodic-gazebo-ros-pkgs ros-melodic-gazebo-ros-control" }, { "code": null, "e": 4784, "s": 4714, "text": "Replace melodic with your version of ROS everywhere in this tutorial." }, { "code": null, "e": 5050, "s": 4784, "text": "The Turtlebot3 ROS packages can be either downloaded and built from source files in your workspace or else directly installed from the Linux terminal. 
Either way works, I would recommend doing both as it installs all the missing dependencies required automatically." }, { "code": null, "e": 5070, "s": 5050, "text": "Direct Installation" }, { "code": null, "e": 5196, "s": 5070, "text": "source /opt/ros/melodic/setup.bashsudo apt-get install ros-melodic-turtlebot3-msgssudo apt-get install ros-melodic-turtlebot3" }, { "code": null, "e": 5218, "s": 5196, "text": "Building the packages" }, { "code": null, "e": 5423, "s": 5218, "text": "cd catkin_ws/srcgit clone -b melodic-devel https://github.com/ROBOTIS-GIT/turtlebot3git clone -b melodic-devel https://github.com/ROBOTIS-GIT/turtlebot3_simulationscd ..catkin_makesource /devel/setup.bash" }, { "code": null, "e": 5512, "s": 5423, "text": "The Navigation stack can also be downloaded as source files to your workspace and built." }, { "code": null, "e": 5681, "s": 5512, "text": "sudo apt-get install ros-melodic-navigationcd catkin_ws/srcgit clone -b melodic-devel https://github.com/ros-planning/navigationcd ..catkin_makesource /devel/setup.bash" }, { "code": null, "e": 5742, "s": 5681, "text": "Set your environment variable to the model robot to be used." }, { "code": null, "e": 5792, "s": 5742, "text": "export TURTLEBOT3_MODEL=waffle_pisource ~/.bashrc" }, { "code": null, "e": 5912, "s": 5792, "text": "Execute the given launch to open Gazebo with the given world file and place the robot Turtlebot3 Waffle pi model in it." }, { "code": null, "e": 5966, "s": 5912, "text": "roslaunch ros_autonomous_slam turtlebot3_world.launch" }, { "code": null, "e": 6051, "s": 5966, "text": "Keep this process running always and execute other commands in a different terminal." }, { "code": null, "e": 6108, "s": 6051, "text": "roslaunch ros_autonomous_slam autonomous_explorer.launch" }, { "code": null, "e": 6198, "s": 6108, "text": "Run the Autonomous Explorer launch file which executes two tasks for us at the same time." 
}, { "code": null, "e": 6761, "s": 6198, "text": "It starts the SLAM node in the Navigation stack with a custom modified RVIZ file to monitor the mapping of the environment.It simultaneously starts the Autonomous explorer which is a Python-based controller to move around the robot grazing all the areas which help the SLAM Node to complete the mapping. The default algorithm used for exploration is the RRT algorithm. I have also created an explorer method that uses Bug Wall following algorithm for exploration which can be tested by adding explorer the argument to the launch which takes [RRT,BUG_WALLFOLLOW]." }, { "code": null, "e": 6885, "s": 6761, "text": "It starts the SLAM node in the Navigation stack with a custom modified RVIZ file to monitor the mapping of the environment." }, { "code": null, "e": 7325, "s": 6885, "text": "It simultaneously starts the Autonomous explorer which is a Python-based controller to move around the robot grazing all the areas which help the SLAM Node to complete the mapping. The default algorithm used for exploration is the RRT algorithm. I have also created an explorer method that uses Bug Wall following algorithm for exploration which can be tested by adding explorer the argument to the launch which takes [RRT,BUG_WALLFOLLOW]." }, { "code": null, "e": 7628, "s": 7325, "text": "The RRT exploration requires a rectangular region around the robot to be defined in the RVIZ window using four points and a starting point for exploration within the known region of the robot. The total five points must be defined in the exact sequence given below using the RVIZ Publish Points option." }, { "code": null, "e": 7735, "s": 7628, "text": "Monitor the Mapping process in the RVIZ window and sit back and relax until our robot finishes mapping XD." }, { "code": null, "e": 7798, "s": 7735, "text": "Once you are satisfied with the constructed map, Save the map." 
}, { "code": null, "e": 7836, "s": 7798, "text": "rosrun map_server map_saver -f my_map" }, { "code": null, "e": 8059, "s": 7836, "text": "The my_map.pgm and my_map.yaml gets saved in your home directory. Move these files to the package’s maps folder (catkin_ws\\src\\ros_autonomous_slam\\maps).Now your new map which is basically an occupancy grid is constructed!" }, { "code": null, "e": 8270, "s": 8059, "text": "Incase of Autonomous Fails you can manually control the robot in the environment using the keyboard with a separate launch execution given below. You can also manually explore and construct the map like a game." }, { "code": null, "e": 8327, "s": 8270, "text": "roslaunch turtlebot3_teleop turtlebot3_teleop_key.launch" }, { "code": null, "e": 8577, "s": 8327, "text": "We will be using the Navigation stack of the ROS to perform the path planning and go to goal using /move_base/goal actions. The given blow launch execution opens up an RVIZ window which shows the Robot location within the previously constructed map." }, { "code": null, "e": 8636, "s": 8577, "text": "roslaunch ros_autonomous_slam turtlebot3_navigation.launch" }, { "code": null, "e": 8806, "s": 8636, "text": "The RVIZ Window shows the robot’s local map construction using its Laser sensors with respect to the Global Map previously constructed in Step 2 with help of a cost map." }, { "code": null, "e": 9055, "s": 8806, "text": "First, estimate the initial Pose i.e locating the real robot location with respect to the Map. This can be set in the RVIZ window itself using the 2D Pose Estimate and pointing and dragging the arrow in the current robot’s location and orientation." }, { "code": null, "e": 9345, "s": 9055, "text": "A GOAL point can be set in the RVIZ window itself using the 2D Nav Goal option which will be available in the top window tab. 
This allows you to set a goal point in the map within the RVIZ environment, then the robot automatically performs the path planning and starts to move in its path." } ]
How to clone an element using jQuery?
To clone an element using jQuery, use the jQuery.clone() method. The clone() method clones matched DOM elements and selects the clones. This is useful for moving copies of the elements to another location in the DOM. You can try to run the following code to learn how to clone an element using jQuery: Live Demo <html>

   <head>
      <title>jQuery clone() method</title>
      <script src = "https://ajax.googleapis.com/ajax/libs/jquery/3.2.1/jquery.min.js"></script>

      <script>
         $(document).ready(function() {
            $("div").click(function () {
               $(this).clone().insertAfter(this);
            });
         });
      </script>

      <style>
         .div {
            margin:10px;
            padding:12px;
            border:2px solid #666;
            width:60px;
         }
      </style>
   </head>

   <body>

      <p>Click on any square below to see the result:</p>

      <div class = "div" style = "background-color:blue;"></div>
      <div class = "div" style = "background-color:green;"></div>
      <div class = "div" style = "background-color:red;"></div>

   </body>
</html>
[ { "code": null, "e": 1197, "s": 1062, "text": "To clone an element using jQuery, use the jQuery.clone() method. The clone() method clones matched DOM Elements and select the clones." }, { "code": null, "e": 1278, "s": 1197, "text": "This is useful for moving copies of the elements to another location in the DOM." }, { "code": null, "e": 1363, "s": 1278, "text": "You can try to run the following code to learn how to clone an element using jQuery:" }, { "code": null, "e": 1373, "s": 1363, "text": "Live Demo" }, { "code": null, "e": 2229, "s": 1373, "text": "<html>\n\n <head>\n <title>jQuery clone() method</title>\n <script src = \"https://ajax.googleapis.com/ajax/libs/jquery/3.2.1/jquery.min.js\"></script>\n \n <script>\n $(document).ready(function() {\n $(\"div\").click(function () {\n $(this).clone().insertAfter(this);\n });\n });\n </script>\n \n <style>\n .div {\n margin:10px;\n padding:12px;\n border:2px solid #666;\n width:60px;\n }\n </style>\n </head>\n \n <body>\n \n <p>Click on any square below to see the result:</p>\n \n <div class = \"div\" style = \"background-color:blue;\"></div>\n <div class = \"div\" style = \"background-color:green;\"></div>\n <div class = \"div\" style = \"background-color:red;\"></div>\n \n </body> \n</html>" } ]
Why Weight? The Importance of Training on Balanced Datasets | by Anna D'Angela | Towards Data Science
Imagine being asked the familiar riddle — “Which weighs more: a pound of lead or a pound of feathers?” As you prepare to assertively announce that they weigh the same, you realize the inquirer has already stolen your wallet from your back pocket.

In supervised machine learning, it is important to train an estimator on balanced data so the model is equally informed on all classes.

Setting weights is estimator specific. Many Scikit-Learn classifiers have a class_weight parameter that can be set to ‘balanced’ or given a custom dictionary to declare how to rank the importance of imbalanced data.

This method is similar to oversampling. Instead of actually oversampling (using a larger dataset would be computationally more expensive) to balance the classes, we can inform the estimator to adjust how it calculates loss. Using weights, we can force an estimator to learn based on more or less importance (‘weight’) given to a particular class.

Weights scale the loss function. As the model trains on each point, the error will be multiplied by the weight of the point. The estimator will try to minimize error on the more heavily weighted classes, because they will have a greater effect on error, sending a stronger signal. Without weights set, the model treats each point as equally important.

“Balance is not something you find, it’s something you create”
― Jana Kingsford

This sample data set is pulled from a text classification project of mine. I set out to classify hotel reviews by rating; see the full details on my GitHub. The data strongly favors positive reviews (or else hotels would need to seriously re-examine their business model).

Class  Distribution (%)
1      7.431961
2      8.695045
3      17.529658
4      33.091417
5      33.251919

Scikit-Learn has functions to calculate class weights and sample weights from their .utils library. Custom weights can also be input as a dictionary with the format {class_label: weight}.
I calculated balanced weights for the above case:

Class Weights: 5 classes
{1: 2.691079812206573, 2: 2.3001605136436596, 3: 1.140923566878981, 4: 0.6043863348797975, 5: 0.6014690451206716}

As you can see, heavier weights are applied to the minority classes, indicating the model must give more importance to these classes. Lower weights are applied to the majority classes so they have less importance. A weight of 0 would mean no effect or importance (if you needed to mute a class).

(Table at left) I’ve combined the normalized distribution of classes and the calculated weight. The ‘balanced’ column is the weight multiplied by the distribution. We see the same number for each class, adding up to 1. This is equivalent to an equal probability of seeing any class (1/5 = 0.2).

Balanced class weights can be automatically calculated within the sample weight function. Set class_weight = 'balanced' to automatically adjust weights inversely proportional to class frequencies in the input data (as shown in the above table).

from sklearn.utils.class_weight import compute_sample_weight
sample_weights = compute_sample_weight(class_weight='balanced', y=y_train)

The sample weights are returned as an array with the class weight mapped to each sample in the target data (y_train). Example:

Sample Weights: 14330 samples
array([0.60146905, 2.30016051, 0.60438633, ..., 0.60438633, 1.14092357, 1.14092357])

To use the sample weights in a Scikit-Learn Multinomial Naive Bayes pipeline, the weights must be added in the fit step. For this demo I will not explore NLP; this is just a comparison of the singular effect of weighting samples. So don’t focus on overall performance.

pipeline = Pipeline(steps=[("NLP", TfidfVectorizer()),
                           ("MNB", MultinomialNB())])
pipeline.fit(X_train, y_train, **{'MNB__sample_weight': sample_weights})

Comparing results of the above model trained without sample weights: the unweighted model reached 55% accuracy. Predictions heavily favor the majority classes.
This model almost completely ignores the minority classes.

The exact same model was trained with the addition of balanced sample weights in the fit step. This model reached 58% accuracy.

Along the True Positive diagonal (top-left to bottom-right), we can see that the model is a much better fit for predicting the minority classes.

There is only a 3% difference in accuracy between the models, but vastly different predictive abilities. Accuracy is skewed because the test set has the same distribution as the training data. So the model is just guessing with the same proportions and hitting the mark enough times with the majority classes. This is why accuracy alone is not a good metric for model success! But that is a conversation for a different day, or you can check out this article about the Failure of Classification Accuracy for Imbalanced Class Distributions.

It is important to train models on balanced data sets (unless there is a particular application to weight a certain class with more importance) to avoid distribution bias in predictive ability. Some Scikit-Learn models can automatically balance input classes with class_weight = 'balanced'. The Bayesian models require an array of sample weights, which can be calculated with compute_sample_weight().

Thank you for reading! You can view the text-classification project this blog was developed from on my GitHub.
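The ‘balanced’ heuristic can also be reproduced by hand. Below is a minimal sketch in plain Python (no scikit-learn) of the formula scikit-learn uses, weight_c = n_samples / (n_classes * count_c), which is equivalent to 1 / (n_classes * p_c) for a class proportion p_c; the proportion values are copied from the review-rating distribution printed earlier.

```python
# Reproduce scikit-learn's class_weight='balanced' formula by hand:
#   weight_c = n_samples / (n_classes * count_c)  ==  1 / (n_classes * p_c)
# The proportions below are the review-rating distribution quoted earlier.
dist = {1: 0.07431961, 2: 0.08695045, 3: 0.17529658,
        4: 0.33091417, 5: 0.33251919}
n_classes = len(dist)

weights = {label: 1 / (n_classes * p) for label, p in dist.items()}

for label, w in sorted(weights.items()):
    print(f"{label}: {w:.4f}")
# -> weights of roughly 2.6911, 2.3002, 1.1409, 0.6044, 0.6015
```

Up to rounding, these match the class weights printed above, and each weight times its class proportion equals 1/5, i.e. the equal-probability case described in the table discussion.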
[ { "code": null, "e": 419, "s": 172, "text": "Imagine being asked the familiar riddle — “Which weighs more: a pound of lead or a pound of feathers?” As you prepare to assertively announce that they weigh the same, you realize the inquirer has already stolen your wallet from your back pocket." }, { "code": null, "e": 555, "s": 419, "text": "In supervised machine learning, it is important to train an estimator on balanced data so the model is equally informed on all classes." }, { "code": null, "e": 771, "s": 555, "text": "Setting weights is estimator specific. Many Scikit-Learn classifiers have a class_weights parameter that can be set to ‘balance’ or given a custom dictionary to declare how to rank the importance of imbalanced data." }, { "code": null, "e": 1126, "s": 771, "text": "In this method, it is similar to oversampling. Instead of actually oversampling (using a larger dataset would be computationally more expensive) to balance the classes, we can inform the estimator to adjusts how it calculates loss. Using weights, we can force as estimator to learn based on more or less importance (‘weight’) given to a particular class." }, { "code": null, "e": 1478, "s": 1126, "text": "Weights scale the loss function. As the model trains on each point, the error will be multiplied by the weight of the point. The estimator will try to minimize error on the more heavily weighted classes, because they will have a greater effect on error, sending a stronger signal. Without weights set, the model treats each point as equally important." }, { "code": null, "e": 1541, "s": 1478, "text": "“Balance is not something you find, it’s something you create”" }, { "code": null, "e": 1558, "s": 1541, "text": "― Jana Kingsford" }, { "code": null, "e": 1831, "s": 1558, "text": "This sample data set is pulled from a text classification project of mine. I set out to classify hotel reviews by rating, see the full details on my GitHub. 
The data strongly favors positive reviews (or else hotels would need to seriously re-examine their business model)." }, { "code": null, "e": 1924, "s": 1831, "text": "Class Distribution (%)1 7.4319612 8.6950453 17.5296584 33.0914175 33.251919" }, { "code": null, "e": 2157, "s": 1924, "text": "Scikit-Learn has functions to calculate class weight and sample weight form their .utils library. Custom weights can also be input as a dictionary with format {class_label: weight} . I calculated balanced weights for the above case:" }, { "code": null, "e": 2295, "s": 2157, "text": "Class Weights: 5 classes{1: 2.691079812206573, 2: 2.3001605136436596, 3: 1.140923566878981, 4: 0.6043863348797975, 5: 0.6014690451206716}" }, { "code": null, "e": 2579, "s": 2295, "text": "As you can see, heavier weights are applied to the minority classes, indicating the model must give more importance to these classes. Lower weights to the majority classes so they have less importance. A weight of 0 would mean no effect or importance (if you needed to mute a class)." }, { "code": null, "e": 2874, "s": 2579, "text": "(Table at left) I’ve combined the normalized distribution of classes and the calculated weight. The ‘balanced’ column is the weight multiplied by the distribution. We see the same number for each class, adding up to 1. This is equivalent to an equal probability of seeing any class (1/5 = 0.2)." }, { "code": null, "e": 3119, "s": 2874, "text": "Balanced class weights can be automatically calculated within the sample weight function. Set class_weight = 'balanced' to automatically adjust weights inversely proportional to class frequencies in the input data (as shown in the above table)." 
}, { "code": null, "e": 3286, "s": 3119, "text": "from sklearn.utils import class_weightsample_weights = compute_sample_weight(class_weight = 'balanced', y = y_train)" }, { "code": null, "e": 3413, "s": 3286, "text": "The sample weights are returned as an array with the class weight mapped to each sample in the target data (y_train). Example:" }, { "code": null, "e": 3527, "s": 3413, "text": "Sample Weights: 14330 samplesarray([0.60146905, 2.30016051, 0.60438633, ..., 0.60438633, 1.14092357, 1.14092357])" }, { "code": null, "e": 3796, "s": 3527, "text": "To use the sample weights in a Scikit-Learn Multinomial Naive Bayes pipeline, the weights must be added in the fit step. For this demo I will not explore NLP, this is just a comparison of the singular effect of weighting samples. So don’t focus on overall performance." }, { "code": null, "e": 4027, "s": 3796, "text": "pipeline = Pipeline(steps=[(\"NLP\", TfidfVectorizer(), (\"MNB\", MultinomialNB()) ])pipeline.fit(X_train, y_train, **{'MNB__sample_weight': sample_weights})" }, { "code": null, "e": 4139, "s": 4027, "text": "Comparing results of the above model trained without sample weights: The unweighted model reached 55% accuracy." }, { "code": null, "e": 4246, "s": 4139, "text": "Predictions heavily favor the majority classes. This model almost completely ignores the minority classes." }, { "code": null, "e": 4374, "s": 4246, "text": "The exact same model was trained with the addition of balanced sample weights in the fit step. This model reached 58% accuracy." }, { "code": null, "e": 4513, "s": 4374, "text": "Along the True Positive diagonal (top-left to bottom-right), we can see that the model is much better fit to predict the minority classes." }, { "code": null, "e": 5065, "s": 4513, "text": "There is only a 3% difference in accuracy between the models, but vastly different predictive abilities. Accuracy is skewed because the test class has the same distribution of as the training data. 
So the model is just guessing across with the same proportions and hitting the mark enough times with the majority classes. This is why accuracy alone is not a good metric for model success! But that is a conversation for a different day, or you can check out this article about the Failure of Classification Accuracy for Imbalanced Class Distributions." }, { "code": null, "e": 5467, "s": 5065, "text": "It is important to train models on balanced data sets (unless there is a particular application to weight a certain class with more importance) to avoid distribution bias in predictive ability. Some Scikit-Learn models can automatically balance input classes with class_weights = 'balance'. The Bayesian models require an array of sample weights, which can be calculated with compute_sample_weight() ." } ]
TabPy: Combining Python and Tableau | by Bima Putra Pratama | Towards Data Science
Can we integrate the power of Python calculations with Tableau?

That question encouraged me to start exploring the possibility of using Python calculations in Tableau, and I ended up with TabPy.

So, what is TabPy? How can we use TabPy to integrate Python and Tableau?

In this article, I will introduce TabPy and go through an example of how we can use it.

TabPy is an Analytics Extension from Tableau which enables us, as users, to execute Python scripts and saved functions using Tableau. Using TabPy, Tableau can run Python scripts on the fly and display the results as a visualization. Users can control the data being sent to TabPy by interacting with their Tableau worksheets, dashboards, or stories using parameters.

You can read more about TabPy in the official GitHub repository: github.com

I assume you already have Python installed on your system. If you don’t, you can download the installer from https://www.python.org/ and install it on your system.

Next, we can install TabPy as a Python package by using pip: pip install tabpy

Once the installation succeeds, we can run the service using the following command: tabpy

If all goes well, you should see this: By default, this service will be running on your localhost on port 9004. You can also verify it by opening it in your web browser.

Now, let’s go to our Tableau and set up the service. I am using Tableau Desktop version 2020.3.0; however, the steps are the same in previous versions as well.

First, go to Help, then choose Settings and Performance and select Manage Analytics Extension Connection.

Then, you can set up the Server and Port. You can leave Sign in with a username and password blank, as we don’t set up credentials in our TabPy service.

Once done, click Test Connection. If successful, you will see this message:

Congratulations!! Now, our Tableau is already connected with TabPy and ready to use.
There are two ways that we can use to do Python calculations:

Write code directly as Tableau calculated fields. The code will then be immediately executed on the fly in the TabPy server.
Deploy a function to the TabPy server that is reachable as a REST API endpoint.

In this article, I will only show the first method, in which we write code directly as Tableau calculated fields.

As an example, we will perform clustering on the Airbnb dataset that is publicly available through the Tableau site, and you can download it using this link. We will cluster each zipcode based on its housing characteristics using several popular clustering algorithms.

In the first step, let’s import our dataset to Tableau. This dataset has 13 columns.

As our primary goal is to see how we use TabPy, we will not focus on making the best possible model. Thus, we will only use the following variables in this dataset to perform clustering:

The median number of beds in each zip code
The average price in each zip code
The median number of ratings in each zip code

We need to create two parameters that will be used to select our clustering method and number of clusters, which are:

Cluster Numbers
Clustering Algorithm

We will create a Python script as a calculated field in Tableau. You can then insert the following script in a calculated field.

This code is wrapped in Tableau’s SCRIPT_REAL() function and will do the following:

Import the required Python libraries.
Scale the features with StandardScaler.
Combine the scaled features and handle null values.
Check which algorithm to use, based on the parameter, and run it.
Return the clustering results as a list.

Then we will convert the results into the String data type to treat them as categorical data.

One more thing to notice: we need to compute the Table Calculation along Zipcode. So we need to change the Default Table Calculation to Zipcode to make this code work.

Now, it’s time to visualize the results.
I use the Zipcode field to create a map to visualize the clustering results. We can use the parameter to change the number of clusters.

Let’s celebrate coming up to this point! If you followed the steps, you have successfully integrated Python and Tableau. This integration is a beginning step for more advanced use cases using Tableau and Python.

I’m looking forward to seeing what you build with this integration!

Bima is a Data Scientist with the Tableau Desktop Specialist certification, who is always eager to expand his knowledge and skills. He graduated as a Mining Engineer and began his Data Science journey through various online programs from HarvardX, IBM, Udacity, etc. Currently, he is making an impact together with DANA Indonesia in building a cashless society in Indonesia.

If you have any feedback or any topics to be discussed, please reach out to Bima via LinkedIn. I’m happy to connect with you!

https://tableaumagic.com/tableau-and-python-an-introduction/
https://github.com/tableau/TabPy
https://public.tableau.com/en-us/s/resources
https://www.tableau.com/about/blog/2017/1/building-advanced-analytics-applications-tabpy-64916
https://www.tableau.com/about/blog/2016/11/leverage-power-python-tableau-tabpy-62077
https://towardsdatascience.com/tableau-python-tabpy-and-geographical-clustering-219b0583ded3
Some rights reserved
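The calculated-field script itself is not reproduced in this capture. As an illustration only, a field of this kind could look roughly like the sketch below; the measure names ([Beds], [Price], [Number of Ratings]), the use of scikit-learn's KMeans on the TabPy server, and the exact scaling are my assumptions, not the post's actual code:

```
SCRIPT_REAL("
import numpy as np
from sklearn.cluster import KMeans          # assumes sklearn on the TabPy server
X = np.column_stack([_arg1, _arg2, _arg3])  # _argN map to the measures below
X = np.nan_to_num((X - X.mean(axis=0)) / X.std(axis=0))  # scale + handle nulls
return KMeans(n_clusters=_arg4[0]).fit_predict(X).tolist()
",
MEDIAN([Beds]), AVG([Price]), MEDIAN([Number of Ratings]), [Cluster Numbers])
```

Wrapping the result in STR() then makes the cluster label categorical, and the table calculation must be computed along Zipcode, as noted above.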
[ { "code": null, "e": 237, "s": 172, "text": "Can we integrate the power of Python calculation with a Tableau?" }, { "code": null, "e": 372, "s": 237, "text": "That question was encourage me to start exploring the possibility of using Python calculation in Tableau, and I ended up with a TabPy." }, { "code": null, "e": 447, "s": 372, "text": "So, What is TabPy? How can we use TabPy to integrating Python and Tableau?" }, { "code": null, "e": 535, "s": 447, "text": "In this article, I will introduce TabPy and go through an example of how we can use it." }, { "code": null, "e": 892, "s": 535, "text": "TabPy is an Analytics Extension from Tableau which enables us as a user to execute Python scripts and saved functions using Tableau. Using TabPy, Tableau can run Python script on the fly and display the results as a Visualization. Users can control data being sent to TabPy by interacting in their Tableau worksheet, dashboard, or stories using parameters." }, { "code": null, "e": 957, "s": 892, "text": "You can read more about TabPy in the official Github Repository:" }, { "code": null, "e": 968, "s": 957, "text": "github.com" }, { "code": null, "e": 1179, "s": 968, "text": "I assume you already have Python installed in your system. If you don’t, you can install it first by going to https://www.python.org/ to download the python installation. Then you can install it in your system." }, { "code": null, "e": 1240, "s": 1179, "text": "Next, we can install TabPy as a python package by using pip:" }, { "code": null, "e": 1258, "s": 1240, "text": "pip install tabpy" }, { "code": null, "e": 1342, "s": 1258, "text": "Once the installation success, we can run the services using the following command:" }, { "code": null, "e": 1348, "s": 1342, "text": "tabpy" }, { "code": null, "e": 1387, "s": 1348, "text": "If all goes well, you should see this:" }, { "code": null, "e": 1515, "s": 1387, "text": "By default, this service will be running in your localhost on port 9004. 
You can also verify it by open it in your web browser." }, { "code": null, "e": 1683, "s": 1515, "text": "Now, let’s go to our Tableau and set up the service. I am using Tableau Desktop version 2020.3.0. However, there will be no difference in the previous version as well." }, { "code": null, "e": 1789, "s": 1683, "text": "First, go to Help, then choose Settings and Performance and select Manage Analytics Extension Connection." }, { "code": null, "e": 1942, "s": 1789, "text": "Then, you can set up the Server and Port. You can leave Sign in with a username and password blank, as we don’t set up credentials in our TabPy service." }, { "code": null, "e": 2022, "s": 1942, "text": "Once done, click the Test Connection. If successful, you will see this message:" }, { "code": null, "e": 2107, "s": 2022, "text": "Congratulations!! Now, our Tableau is already connected with TabPy and ready to use." }, { "code": null, "e": 2168, "s": 2107, "text": "There are two ways that we can use to do Python calculation:" }, { "code": null, "e": 2293, "s": 2168, "text": "Write code directly as Tableau calculated fields. The code then will be immediately executed on the fly in the TabPy server." }, { "code": null, "e": 2379, "s": 2293, "text": "Deploy a function into the TabPy server that can be reachable as a REST API endpoint." }, { "code": null, "e": 2505, "s": 2379, "text": "In this article, I will only show how to do the first method, which we will write code directly as Tableau calculated fields." }, { "code": null, "e": 2773, "s": 2505, "text": "As an example, we will perform clustering to the Airbnb dataset that publicly available through the Tableau site, and you can download it using this link. We will cluster each zipcode based on their housing characteristics using several popular clustering algorithms." }, { "code": null, "e": 2859, "s": 2773, "text": "In the first step, let’s import our data set to Tableau. This dataset has 13 columns." 
}, { "code": null, "e": 3046, "s": 2859, "text": "As our primary goal is to see how we use TabPy, We will not focus on making the best possible model. Thus, we will only use the following variables in this dataset to perform clustering:" }, { "code": null, "e": 3089, "s": 3046, "text": "The median number of beds in each zip code" }, { "code": null, "e": 3124, "s": 3089, "text": "The average price in each zip code" }, { "code": null, "e": 3170, "s": 3124, "text": "The median number of ratings in each zip code" }, { "code": null, "e": 3288, "s": 3170, "text": "We need to create two parameters that will be used to select our clustering method and number of clusters, which are:" }, { "code": null, "e": 3304, "s": 3288, "text": "Cluster Numbers" }, { "code": null, "e": 3325, "s": 3304, "text": "Clustering Algorithm" }, { "code": null, "e": 3390, "s": 3325, "text": "We will create a python script as a calculated field in Tableau." }, { "code": null, "e": 3454, "s": 3390, "text": "You can then insert the following script in a calculated field." }, { "code": null, "e": 3541, "s": 3454, "text": "This code is wrapped in SCRIPT_REAL() function from Tableau and will do the following:" }, { "code": null, "e": 3575, "s": 3541, "text": "Import required Python libraries." }, { "code": null, "e": 3613, "s": 3575, "text": "Scaling features with Standard Scaler" }, { "code": null, "e": 3662, "s": 3613, "text": "Combine Scaled Features and handling null values" }, { "code": null, "e": 3727, "s": 3662, "text": "Conditional to check which algorithm to use and do the following" }, { "code": null, "e": 3764, "s": 3727, "text": "Return clustering results as a list." }, { "code": null, "e": 3851, "s": 3764, "text": "Then we will convert the results into String data type to make it as categorical data." }, { "code": null, "e": 4014, "s": 3851, "text": "One more thing to notice is we need to do the Table Calculation in Zipcode. 
So we need to change the Default Table Calculation to Zipcode to make this code works." }, { "code": null, "e": 4183, "s": 4014, "text": "Now, it’s time to visualize the results. I use a Zipcode to create a Map to visualize the clustering results. We can use the parameter to change the number of clusters." }, { "code": null, "e": 4399, "s": 4183, "text": "Let’s celebrate coming up to this point! If you follow the step, you have been successfully integrating Python and Tableau. This integration is a beginning step for a more advanced use case using Tableau and Python." }, { "code": null, "e": 4467, "s": 4399, "text": "I’m looking forward to seeing what you build with this integration!" }, { "code": null, "e": 4838, "s": 4467, "text": "Bima is a Data Scientist with Tableau Desktop Specialist Certification, who always eager to expand his knowledge and skills. He was graduated as a Mining Engineer and began his Data Science journey through various online programs from HardvardX, IBM, Udacity, etc. Currently, he is making impacts together with DANA Indonesia in building a cashless society in Indonesia." }, { "code": null, "e": 4964, "s": 4838, "text": "If you have any feedback or any topics to be discussed, please reach out to Bima via LinkedIn. I’m happy to connect with you!" 
}, { "code": null, "e": 5025, "s": 4964, "text": "https://tableaumagic.com/tableau-and-python-an-introduction/" }, { "code": null, "e": 5058, "s": 5025, "text": "https://github.com/tableau/TabPy" }, { "code": null, "e": 5103, "s": 5058, "text": "https://public.tableau.com/en-us/s/resources" }, { "code": null, "e": 5198, "s": 5103, "text": "https://www.tableau.com/about/blog/2017/1/building-advanced-analytics-applications-tabpy-64916" }, { "code": null, "e": 5283, "s": 5198, "text": "https://www.tableau.com/about/blog/2016/11/leverage-power-python-tableau-tabpy-62077" }, { "code": null, "e": 5376, "s": 5283, "text": "https://towardsdatascience.com/tableau-python-tabpy-and-geographical-clustering-219b0583ded3" } ]
Array elements that appear more than once in C?
An array is a container of elements of the same data type whose length needs to be defined beforehand. An element can appear in any order and any number of times in an array, so in this program we will find the elements that appear more than once in an array.

Problem description − We are given an array arr[] in which we have to find which elements are repeated, and print them.

Let’s take an example to understand this better.

Input: arr[] = {5, 11, 11, 2, 1, 4, 2}
Output: 11 2

We keep a count of how many times each value has been seen so far. While looping over the array, if the current element has already been seen exactly once (its count is 1), it is a repeat and we print it; we then increment its count. This way each repeated element is printed only once, the first time it reappears.

Input : arr[], n the length of array.
Step 1 : Find max, the largest value in arr[], and allocate a count array of size max + 1, initialized to zero.
Step 2 : For i -> 0 to n-1, do:
   Step 2.1 : if count[arr[i]] == 1 -> print arr[i]
   Step 2.2 : increment count[arr[i]]

#include <stdio.h>
#include <stdlib.h>
int main() {
   int arr[] = {21, 87, 212, 109, 41, 21};
   int n = sizeof(arr) / sizeof(arr[0]);   // 6 elements
   int max = 0, i;
   // find the largest value so the count array is big enough
   for (i = 0; i < n; i++)
      if (arr[i] > max)
         max = arr[i];
   // count[v] = number of times value v has been seen so far
   int *count = (int *)calloc(max + 1, sizeof(int));
   printf("The repeat elements of the array are : ");
   for (i = 0; i < n; i++) {
      if (count[arr[i]] == 1)   // second occurrence -> report once
         printf(" %d ", arr[i]);
      count[arr[i]]++;
   }
   free(count);
   return 0;
}

The repeat elements of the array are : 21
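As a cross-check of the counting logic (not part of the C program), the same idea can be sketched in a few lines of Python:

```python
# Count occurrences of each value, then keep the values seen more than once.
from collections import Counter

arr = [21, 87, 212, 109, 41, 21]
repeats = [value for value, count in Counter(arr).items() if count > 1]
print(repeats)  # -> [21]
```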
[ { "code": null, "e": 1310, "s": 1062, "text": "Array is a container of elements of Same data types length needs to be defined beforehand. And an element can appear in any order and any number of times in an array. so in this program we will find elements that appear more than once in an array." }, { "code": null, "e": 1456, "s": 1310, "text": "Problem description − We have given an array arr[] in which we have to find which of the element are repeating in the array an app to print them." }, { "code": null, "e": 1505, "s": 1456, "text": "Let’s take an example to understand this better." }, { "code": null, "e": 1557, "s": 1505, "text": "Input: arr[] = {5, 11, 11, 2, 1, 4, 2}\nOutput: 11 2" }, { "code": null, "e": 2123, "s": 1557, "text": "We have an array arr which contains some element firstly we would compare the element from the next element in the duplicate function which is used to find the repeated element in the array. In duplicate function we are using the loop to find the duplicate elements in the given array we will use if else condition to check the count of array elements from the array element occurred one time then the count will be 1 if occur more than one time then count will be incremented respectively if the count is more than 1 then the element will be printed on the screen." }, { "code": null, "e": 2387, "s": 2123, "text": "Input : arr[], n the length of array.\nStep 1 : For i -> 0 to n, Follow step 2,\nStep 2 : For each element of the array. 
Do :\n Step 2.1 : For j -> i to n repeat step 2.2 - 2.3.\n Step 2.2 : if (arr[i] == arr[j]) -> print arr[i]\n Step 2.3 : else {// do nothing}" }, { "code": null, "e": 2740, "s": 2387, "text": "#include <stdio.h>\nint main() {\n int arr[] = {21, 87, 212, 109, 41, 21};\n int n=7;\n printf(\"The repeat elements of the array are : \");\n int *count = (int *)calloc(sizeof(int), (n - 2));\n int i;\n for (i = 0; i < n; i++) {\n if (count[arr[i]] == 1)\n printf(\" %d \", arr[i]);\n else\n count[arr[i]]++;\n }\n return 0;\n}" }, { "code": null, "e": 2782, "s": 2740, "text": "The repeat elements of the array are : 21" } ]
call() decorator in Python - GeeksforGeeks
29 Dec, 2019

Python decorators are important features of the language that allow a programmer to modify the behavior of a function or class. These features add functionality to the existing code. This is a type of metaprogramming, in which one part of the program modifies another. Decorators can be used to inject modified code into functions or classes. They allow the program to be modified to add any code that has a special role to play. A decorator is written just before the definition of the function it decorates.

The use of decorators can be explained with the following example. Suppose we write a program to “decorate” a function using another function. The code goes like this:

# Code to explain Decorators
def decorating(function):
    def item():
        print("The function was decorated.")
        function()
    return item

def my_function():
    print("This is my function.")

my_function()

decorate = decorating(my_function)
decorate()

Output

This is my function.
The function was decorated.
This is my function.

Firstly, “This is my function.” appears because of the function call my_function(). The second set of output is because of the decorating function.

The same thing can also be done by using decorators. The following code explains that. Note that the decorating statement is defined above the function to be decorated.

# Code to implement the usage
# of decorators
def decorating(function):
    def item():
        print("The function was decorated.")
        function()
    return item

# using the "@" sign to signify
# that a decorator is used.
@decorating
def my_function():
    print("This is my function.")

# Driver's Code
my_function()

Output

The function was decorated.
This is my function.

The call() decorator is used in place of helper functions. In Python, as in other languages, we use helper functions for three major motives:
To identify the purpose of the method.
To remove the helper as soon as its job is completed.
To match the purpose of the helper function with that of the decorator function.

The following example illustrates the significance of the call() decorator. In this example, we build a list of the doubles of the first "n" numbers using a helper function. The code is as follows:

# Helper function to build a
# list of numbers
def list_of_numbers(n):
    element = []
    for i in range(n):
        element.append(i * 2)
    return element

list_of_numbers = list_of_numbers(6)

# Output command
print(len(list_of_numbers), list_of_numbers[2])

Output

6 4

The above code can also be written using the call() decorator:

# Defining the decorator function
def call(*argv, **kwargs):
    def call_fn(function):
        return function(*argv, **kwargs)
    return call_fn

# Using the decorator function
@call(6)
def list_of_numbers(n):
    element = []
    for i in range(n):
        element.append(i * 2)
    return element

# Output command
print(len(list_of_numbers), list_of_numbers[2])

Output

6 4

As observed, the output is the same as before: the call() decorator works almost exactly like a helper function. The difference is that the helper logic is never exposed as a separate call site; the name list_of_numbers is immediately rebound to the result.
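As a further illustration (not from the original article), the same call-style decorator also forwards keyword arguments, so it can replace a helper that takes several parameters. The multiples name and the step parameter below are hypothetical, chosen only for this sketch:

```python
# A call-style decorator: invokes the decorated function
# immediately and rebinds its name to the return value.
def call(*args, **kwargs):
    def call_fn(function):
        return function(*args, **kwargs)
    return call_fn

# Hypothetical example: build a list of multiples in one step,
# passing the step size as a keyword argument.
@call(5, step=3)
def multiples(n, step=1):
    return [i * step for i in range(n)]

# The name multiples is now the list itself, not a function.
print(len(multiples), multiples[2])  # 5 6
```

Because the decorator calls the function as soon as it is defined, the function object itself is discarded and only its result survives under the original name.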