sourceName | url | action | body | format | metadata | title | updated
---|---|---|---|---|---|---|---|
devcenter | https://www.mongodb.com/developer/products/mongodb/schema-design-anti-pattern-case-insensitive-query-index | created | # Case-Insensitive Queries Without Case-Insensitive Indexes
We've reached the sixth and final (at least for now) MongoDB schema design anti-pattern. In the first five posts in this series, we've covered the following anti-patterns.
- Massive arrays
- Massive number of collections
- Unnecessary indexes
- Bloated documents
- Separating data that is accessed together
Today, we'll explore the wonderful world of case-insensitive indexes. Not having a case-insensitive index can create surprising query results and/or slow queries...and make you hate everything.
Once you know the details of how case-insensitive queries work, the implementation is fairly simple. Let's dive in!
>
>
>:youtube[]{vid=mHeP5IbozDU start=948}
>
>Check out the video above to see the case-insensitive queries and indexes in action.
>
>
## Case-Insensitive Queries Without Case-Insensitive Indexes
MongoDB supports three primary ways to run case-insensitive queries.
First, you can run a case-insensitive query using `$regex` with the `i` option. These queries will give you the expected case-insensitive results. However, queries that use `$regex` cannot efficiently utilize case-insensitive indexes, so these queries can be very slow depending on how much data is in your collection.
Second, you can run a case-insensitive query by creating a case-insensitive index (meaning it has a collation strength of `1` or `2`) and running a query with the same collation as the index. A collation defines the language-specific rules that MongoDB will use for string comparison. Indexes can optionally have a collation with a strength that ranges from 1 to 5. Collation strengths of `1` and `2` both give you case-insensitivity. For more information on the differences in collation strengths, see the MongoDB docs. A query that is run with the same collation as a case-insensitive index will return case-insensitive results. Since these queries are covered by indexes, they execute very quickly.
Third, you can run a case-insensitive query by setting the default collation strength for queries and indexes to a strength of `1` or `2` when you create a collection. All queries and indexes in a collection automatically use the default collation unless you specify otherwise when you execute a query or create an index. Therefore, when you set the default collation to a strength of `1` or `2`, you'll get case-insensitive queries and indexes by default. See the `collation` option in the db.createCollection() section of the MongoDB Docs for more details.
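For example, here is a minimal sketch of this third option in the MongoDB Shell (the collection name is illustrative):
``` sh
db.createCollection("InspirationalWomen", {
  collation: { locale: "en", strength: 2 }
})
```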
>
>
>Warning for queries that do not use `$regex`: Your index must have a collation strength of `1` or `2` and your query must use the same collation as the index in order for your query to be case-insensitive.
>
>
You can use MongoDB Compass (MongoDB's desktop GUI) or the MongoDB Shell (MongoDB's command-line tool) to test if a query is returning the results you'd expect, see its execution time, and determine if it's using an index.
## Example
Let's revisit the example we saw in the Unnecessary Indexes Anti-Pattern and the Bloated Documents Anti-Pattern posts. Leslie is creating a website that features inspirational women. She has created a database with information about 4,700+ inspirational women. Below are three documents in her `InspirationalWomen` collection.
``` none
{
"_id": ObjectId("5ef20c5c7ff4160ed48d8f83"),
"first_name": "Harriet",
"last_name": "Tubman",
"quote": "I was the conductor of the Underground Railroad for eight years,
and I can say what most conductors can't say; I never ran my
train off the track and I never lost a passenger"
},
{
"_id": ObjectId("5ef20c797ff4160ed48d90ea"),
"first_name": "HARRIET",
"middle_name": "BEECHER",
"last_name": "STOWE",
"quote": "When you get into a tight place and everything goes against you,
till it seems as though you could not hang on a minute longer,
never give up then, for that is just the place and time that
the tide will turn."
},
{
"_id": ObjectId("5ef20c937ff4160ed48d9201"),
"first_name": "Bella",
"last_name": "Abzug",
"quote": "This woman's place is in the House—the House of Representatives."
}
```
Leslie decides to add a search feature to her website since the website is currently difficult to navigate. She begins implementing her search feature by creating an index on the `first_name` field. Then she starts testing a query that will search for women named "Harriet."
Leslie executes the following query in the MongoDB Shell:
``` sh
db.InspirationalWomen.find({first_name: "Harriet"})
```
She is surprised to only get one document returned since she has two Harriets in her database: Harriet Tubman and Harriet Beecher Stowe. She realizes that Harriet Beecher Stowe's name was input in all uppercase in her database. Her query is case-sensitive, because it is not using a case-insensitive index.
Leslie runs the same query with `.explain("executionStats")` to see what is happening.
``` sh
db.InspirationalWomen.find({first_name: "Harriet"}).explain("executionStats")
```
The Shell returns the following output.
``` javascript
{
"queryPlanner": {
...
"winningPlan": {
"stage": "FETCH",
"inputStage": {
"stage": "IXSCAN",
"keyPattern": {
"first_name": 1
},
"indexName": "first_name_1",
...
"indexBounds": {
"first_name":
"[\"Harriet\", \"Harriet\"]"
]
}
}
},
"rejectedPlans": []
},
"executionStats": {
"executionSuccess": true,
"nReturned": 1,
"executionTimeMillis": 0,
"totalKeysExamined": 1,
"totalDocsExamined": 1,
"executionStages": {
...
}
}
},
...
}
```
She can see that the `winningPlan` is using an `IXSCAN` (index scan) with her `first_name_1` index. In the `executionStats`, she can see that only one index key was examined (`executionStats.totalKeysExamined`) and only one document was examined (`executionStats.totalDocsExamined`). For more information on how to interpret the output from `.explain()`, see Analyze Query Performance.
Leslie opens Compass and sees similar results.
MongoDB Compass shows that the query is examining only one index key, examining only one document, and returning only one document. It also shows that the query used the first_name_1 index.
Leslie wants all Harriets—regardless of what lettercase is used—to be returned in her query. She updates her query to use `$regex` with option `i` to indicate the regular expression should be case-insensitive. She returns to the Shell and runs her new query:
``` sh
db.InspirationalWomen.find({first_name: { $regex: /Harriet/i} })
```
This time she gets the results she expects: documents for both Harriet Tubman and Harriet Beecher Stowe. Leslie is thrilled! She runs the query again with `.explain("executionStats")` to get details on her query execution. Below is what the Shell returns:
``` javascript
{
"queryPlanner": {
...
"winningPlan": {
"stage": "FETCH",
"inputStage": {
"stage": "IXSCAN",
"filter": {
"first_name": {
"$regex": "Harriet",
"$options": "i"
}
},
"keyPattern": {
"first_name": 1
},
"indexName": "first_name_1",
...
"indexBounds": {
"first_name":
"[\"\", {})",
"[/Harriet/i, /Harriet/i]"
]
}
}
},
"rejectedPlans": []
},
"executionStats": {
"executionSuccess": true,
"nReturned": 2,
"executionTimeMillis": 3,
"totalKeysExamined": 4704,
"totalDocsExamined": 2,
"executionStages": {
...
}
},
...
}
```
She can see that this query, like her previous one, uses an index (`IXSCAN`). However, since `$regex` queries cannot efficiently utilize case-insensitive indexes, she isn't getting the typical benefits of a query that is covered by an index. All 4,704 index keys (`executionStats.totalKeysExamined`) are being examined as part of this query, resulting in a slightly slower query (`executionStats.executionTimeMillis: 3`) than one that fully utilizes an index.
She runs the same query in Compass and sees similar results. The query is using her `first_name_1` index but examining every index key.
MongoDB Compass shows that the query is returning two documents as expected. The $regex query is using the first_name_1 index but examining every index key.
Leslie wants to ensure that her search feature runs as quickly as possible. She uses Compass to create a new case-insensitive index named `first_name-case_insensitive`. (She can easily create indexes using other tools as well, like the Shell or MongoDB Atlas, or even programmatically.) Her index will be on the `first_name` field in ascending order and use a custom collation with a locale of `en` and a strength of `2`. Recall from the previous section that the collation strength must be set to `1` or `2` in order for the index to be case-insensitive.
Creating a new index in MongoDB Compass with a custom collation that has a locale of en and a strength of 2.
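For reference, a minimal sketch of creating the same index from the MongoDB Shell instead of Compass looks like this:
``` sh
db.InspirationalWomen.createIndex(
  { first_name: 1 },
  { name: "first_name-case_insensitive", collation: { locale: "en", strength: 2 } }
)
```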
Leslie runs a query very similar to her original query in the Shell, but this time she specifies the collation that matches her newly-created index:
``` sh
db.InspirationalWomen.find({first_name: "Harriet"}).collation( { locale: 'en', strength: 2 } )
```
This time she gets both Harriet Tubman and Harriet Beecher Stowe. Success!
She runs the query with `.explain("executionStats")` to double check that the query is using her index:
``` sh
db.InspirationalWomen.find({first_name: "Harriet"}).collation( { locale: 'en', strength: 2 } ).explain("executionStats")
```
The Shell returns the following results.
``` javascript
{
"queryPlanner": {
...
"collation": {
"locale": "en",
...
"strength": 2,
...
},
"winningPlan": {
"stage": "FETCH",
"inputStage": {
"stage": "IXSCAN",
"keyPattern": {
"first_name": 1
},
"indexName": "first_name-case_insensitive",
"collation": {
"locale": "en",
...
"strength": 2,
...
},
...
"indexBounds": {
"first_name":
"[\"7)KK91O\u0001\u000b\", \"7)KK91O\u0001\u000b\"]"
]
}
}
},
"rejectedPlans": []
},
"executionStats": {
"executionSuccess": true,
"nReturned": 2,
"executionTimeMillis": 0,
"totalKeysExamined": 2,
"totalDocsExamined": 2,
"executionStages": {
...
}
}
},
...
}
```
Leslie can see that the winning plan is executing an `IXSCAN` (index scan) that uses the case-insensitive index she just created. Two index keys (`executionStats.totalKeysExamined`) are being examined, and two documents (`executionStats.totalDocsExamined`) are being examined. The query is executing in 0 ms (`executionStats.executionTimeMillis: 0`). Now that's fast!
Leslie runs the same query in Compass and specifies the collation the query should use.
She can see that the query is using her case-insensitive index and the
query is executing in 0 ms. She's ready to implement her search feature.
Time to celebrate!
*Note:* Another option for Leslie would have been to set the default collation strength of her InspirationalWomen collection to `1` or `2` when she created her collection. Then all of her queries would have returned the expected, case-insensitive results, regardless of whether she had created an index or not. She would still want to create indexes to increase the performance of her queries.
## Summary
You have three primary options when you want to run a case-insensitive query:
1. Use `$regex` with the `i` option. Note that this option is not as performant because `$regex` cannot fully utilize case-insensitive indexes.
2. Create a case-insensitive index with a collation strength of `1` or `2`, and specify that your query uses the same collation.
3. Set the default collation strength of your collection to `1` or `2` when you create it, and do not specify a different collation in your queries and indexes.
Alternatively, MongoDB Atlas Search can be used for more complex text searches.
This post is the final anti-pattern we'll cover in this series. But, don't be too sad—this is not the final post in this series. Be on the lookout for the next post where we'll summarize all of the anti-patterns and show you a brand new feature in MongoDB Atlas that will help you discover anti-patterns in your database. You won't want to miss it!
>
>
>When you're ready to build a schema in MongoDB, check out MongoDB Atlas, MongoDB's fully managed database-as-a-service. Atlas is the easiest way to get started with MongoDB and has a generous, forever-free tier.
>
>
## Related Links
Check out the following resources for more information:
- MongoDB Docs: Improve Case-Insensitive Regex Queries
- MongoDB Docs: Case-Insensitive Indexes
- MongoDB Docs: $regex
- MongoDB Docs: Collation
- MongoDB Docs: db.collection.explain()
- MongoDB Docs: Analyze Query Performance
- MongoDB University M201: MongoDB Performance
| md | {
"tags": [
"MongoDB"
],
"pageDescription": "Don't fall into the trap of this MongoDB Schema Design Anti-Pattern: Case-Insensitive Queries Without Case-Insensitive Indexes",
"contentType": "Article"
} | Case-Insensitive Queries Without Case-Insensitive Indexes | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/realm/getting-started-kmm-flexiable-sync | created | # Getting Started Guide for Kotlin Multiplatform Mobile (KMM) with Flexible Sync
> This is an introductory article on how to build your first Kotlin Multiplatform Mobile using Atlas Device Sync.
## Introduction
Mobile development has evolved a lot in recent years, and in this tutorial, we are going to discuss Kotlin Multiplatform Mobile (KMM), a platform that has disrupted the development community with its approach to building mobile apps.
Traditional mobile apps, either built with a native or hybrid approach, have their tradeoffs from development time to performance. But with the Kotlin Multiplatform approach, we can have the best of both worlds.
## What is Kotlin Multiplatform Mobile (KMM)?
Kotlin Multiplatform is all about code sharing within apps for different environments (iOS, Android). Some common use cases for shared code are getting data from the network, saving it to the device, and filtering or manipulating data. This is different from other cross-platform frameworks, as KMM encourages developers to share only business logic code rather than the complete codebase, which often makes things complicated, especially when it comes to building complex custom UIs for each platform.
## Setting up your environment
If you are an Android developer, then you don't need to do much. The primary development of KMM apps is done using Android Studio. The only additional step is to install the KMM plugin via the IDE plugin manager. One of its key benefits is that it allows you to build and run the iOS app from Android Studio as well.
To enable building and running the iOS app via Android Studio, your system should have Xcode installed, which is the development IDE for iOS.
To verify all dependencies are installed correctly, we can use `kdoctor`, which can be installed using brew.
```shell
brew install kdoctor
```
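Once installed, running `kdoctor` checks your environment and reports whether Android Studio, Xcode, and the other required tools are configured correctly:
```shell
kdoctor
```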
## Building Hello World!
With our setup complete, it's time to get our hands dirty and build our first Hello World application.
Creating a KMM application is very easy. Open Android Studio and then select Kotlin Multiplatform App from the New Project template. Hit Next.
On the next screen, add the basic application details like the name of the application, location of the project, etc.
Finally, select the dependency manager for the iOS app (`Regular framework` is the recommended option), and then hit Finish.
Once the Gradle sync is complete, we can run both the iOS and Android apps using the run button in the toolbar.
That will start the Android emulator or iOS simulator, where our app will run.
## Basics of the Kotlin Multiplatform
Now it's time to understand what's happening under the hood to grasp the basic concepts of KMM.
### Understanding project structure
Any KMM project can be split into three logical folders — i.e., `androidApp`, `iosApp`, and `shared` — and each of these folders has a specific purpose.
Since KMM is all about sharing business-/logic-related code, all the shared code is written under the `shared` folder. This code is then exposed as libraries to the `androidApp` and `iosApp` folders, allowing us to use the shared logic by calling its classes or functions and building a user interface on top of it. The layout of a freshly generated project is sketched below.
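As a rough sketch, a project generated by the wizard is laid out along these lines (exact names can vary with the template version):
```
├── androidApp/        # Android app: UI and Android entry point
├── iosApp/            # iOS app: SwiftUI code and Xcode project
└── shared/
    └── src/
        ├── commonMain/    # platform-independent business logic
        ├── androidMain/   # Android-specific implementations
        └── iosMain/       # iOS-specific implementations
```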
### Writing platform-specific code
There can be a few use cases where you'd like to use platform-specific APIs for writing business logic, like in the `Hello World!` app where we wanted to know the platform type and version. To handle such use cases, KMM has introduced the concept of `expect` and `actual`, which can be thought of as KMM's version of an `interface` or `protocol`.
With this concept, we declare an `expect` function for the functionality to be exposed, and then write an `actual` implementation for each environment. Something like this:
```Kotlin
expect fun getPlatform(): String
```
```kotlin
actual fun getPlatform(): String = "Android ${android.os.Build.VERSION.SDK_INT}"
```
```kotlin
actual fun getPlatform(): String =
UIDevice.currentDevice.systemName() + " " + UIDevice.currentDevice.systemVersion
```
In the above example, you'll notice that we are using platform-specific APIs like `android.os` or `UIDevice` in the `shared` folder. To keep this organised and readable, KMM divides the `shared` folder into three subfolders: `commonMain`, `androidMain`, and `iosMain`.
With this, we've covered the basics of KMM needed before building a complex, full-fledged real app. (The learning curve is especially small for people coming from an Android background.)
## Building a more complex app
Now let's build our first real-world application, Querize, an app that helps you collect queries in real time during a session. Although this is a very simple app, it still covers the basic use cases that highlight the benefits of a KMM app, like accessing data in real time.
The tech stack for our app will be:
1. JetPack Compose for UI building.
2. Kotlin Multiplatform with Realm as a middle layer.
3. Atlas Device Sync (Flexible Sync) from MongoDB, a serverless backend supporting our data sharing.
4. MongoDB Atlas, our cloud database.
We will be following a top-to-bottom approach to building the app, so let's start by building the UI using Jetpack Compose with a `ViewModel`.
```kotlin
class MainActivity : ComponentActivity() {
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContent {
MaterialTheme {
Container()
}
}
}
}
@Preview
@Composable
fun Container() {
val viewModel: MainViewModel = viewModel()
Scaffold(
topBar = {
CenterAlignedTopAppBar(
title = {
Text(
text = "Querize",
fontSize = 24.sp,
modifier = Modifier.padding(horizontal = 8.dp)
)
},
colors = TopAppBarDefaults.centerAlignedTopAppBarColors(MaterialTheme.colorScheme.primaryContainer),
navigationIcon = {
Icon(
painterResource(id = R.drawable.ic_baseline_menu_24),
contentDescription = ""
)
}
)
},
containerColor = (Color(0xffF9F9F9))
) {
Column(
modifier = Modifier
.fillMaxSize()
.padding(it),
) {
Row(modifier = Modifier.fillMaxWidth(), horizontalArrangement = Arrangement.Center) {
Image(
painter = painterResource(id = R.drawable.ic_realm_logo),
contentScale = ContentScale.Fit,
contentDescription = "App Logo",
modifier = Modifier
.width(200.dp)
.defaultMinSize(minHeight = 200.dp)
.padding(bottom = 20.dp),
)
}
AddQuery(viewModel)
Text(
"Queries",
modifier = Modifier
.fillMaxWidth()
.padding(bottom = 8.dp),
textAlign = TextAlign.Center,
fontSize = 24.sp
)
QueriesList(viewModel)
}
}
}
@Composable
fun AddQuery(viewModel: MainViewModel) {
val queryText = remember { mutableStateOf("") }
TextField(
modifier = Modifier
.fillMaxWidth()
.padding(8.dp),
placeholder = { Text(text = "Enter your query here") },
trailingIcon = {
Icon(
painterResource(id = R.drawable.ic_baseline_send_24),
contentDescription = "",
modifier = Modifier.clickable {
viewModel.saveQuery(queryText.value)
queryText.value = ""
})
},
value = queryText.value,
onValueChange = {
queryText.value = it
})
}
@Composable
fun QueriesList(viewModel: MainViewModel) {
val queries = viewModel.queries.observeAsState(initial = emptyList()).value
LazyColumn(
verticalArrangement = Arrangement.spacedBy(12.dp),
contentPadding = PaddingValues(8.dp),
content = {
items(items = queries, itemContent = { item: String ->
QueryItem(query = item)
})
})
}
@Preview
@Composable
fun QueryPreview() {
QueryItem(query = "Sample text")
}
@Composable
fun QueryItem(query: String) {
Row(
modifier = Modifier
.fillMaxWidth()
.background(Color.White)
.padding(8.dp)
.clip(RoundedCornerShape(8.dp))
) {
Text(text = query, modifier = Modifier.fillMaxWidth())
}
}
```
```kotlin
class MainViewModel : ViewModel() {
private val repo = RealmRepo()
val queries: LiveData<List<String>> = liveData {
emitSource(repo.getAllData().flowOn(Dispatchers.IO).asLiveData(Dispatchers.Main))
}
fun saveQuery(query: String) {
viewModelScope.launch {
repo.saveInfo(query)
}
}
}
```
In our ViewModel, we have a method `saveQuery` to capture the user queries and share them with the speaker. This information is then passed on to our logic layer, `RealmRepo`, which is built using Kotlin Multiplatform Mobile (KMM), as we would like to reuse this code when building an iOS app.
```kotlin
class RealmRepo {
suspend fun saveInfo(query: String) {
}
}
```
Now, to save and share this information, we need to integrate it with Atlas Device Sync, which will automatically save it and share it with our clients in real time. To connect with Device Sync, we first need to add the `Realm` SDK to our project, which provides integration with Device Sync out of the box.
Realm is not just an SDK for integration with Atlas Device Sync; it's a very powerful, object-oriented mobile database built using KMM. One of its key advantages is that it makes our app work offline without any effort.
### Adding Realm SDK
This step is broken down further for ease of understanding.
#### Adding Realm plugin
Open the `build.gradle` file under the project root and add the `Realm` plugin.
From
```kotlin
plugins {
id("com.android.application").version("7.3.1").apply(false)
id("com.android.library").version("7.3.1").apply(false)
kotlin("android").version("1.7.10").apply(false)
kotlin("multiplatform").version("1.7.20").apply(false)
}
```
To
```kotlin
plugins {
id("com.android.application").version("7.3.1").apply(false)
id("com.android.library").version("7.3.1").apply(false)
kotlin("android").version("1.7.10").apply(false)
kotlin("multiplatform").version("1.7.20").apply(false)
// Added Realm plugin
id("io.realm.kotlin") version "0.10.0"
}
```
#### Enabling Realm plugin
Now let's enable the Realm plugin for our project. We should make corresponding changes to the `build.gradle` file under the `shared` module.
From
```kotlin
plugins {
kotlin("multiplatform")
kotlin("native.cocoapods")
id("com.android.library")
}
```
To
```kotlin
plugins {
kotlin("multiplatform")
kotlin("native.cocoapods")
id("com.android.library")
// Enabled Realm Plugin
id("io.realm.kotlin")
}
```
#### Adding dependencies
With the last step done, we are just one step away from completing the Realm setup. In this step, we add the Realm dependency to our project.
Since the `Realm` database will be shared across all platforms, we will be adding the Realm dependency to the common source `shared`. In the same `build.gradle` file, locate the `sourceSet` tag and update it to:
From
```kotlin
sourceSets {
val commonMain by getting {
dependencies {
}
}
// Other config
}
```
To
```kotlin
sourceSets {
val commonMain by getting {
dependencies {
implementation("io.realm.kotlin:library-sync:1.4.0")
}
}
}
```
With this, we have completed the `Realm` setup for our KMM project. If you would like to use any part of the SDK inside the Android module, you can add the dependency in Android Module `build.gradle` file.
```kotlin
dependencies {
compileOnly("io.realm.kotlin:library-sync:1.4.0")
}
```
Since Realm is an object-oriented database, we can save objects directly without getting into the hassle of converting them into different formats. To save any object into the `Realm` database, it should be derived from `RealmObject` class.
```kotlin
class QueryInfo : RealmObject {
@PrimaryKey
var _id: String = ""
var queries: String = ""
}
```
Now let's save our query into the local database, which will then be synced using Atlas Device Sync and saved into our cloud database, Atlas.
```kotlin
class RealmRepo {
suspend fun saveInfo(query: String) {
val info = QueryInfo().apply {
_id = RandomUUID().randomId
queries = query
}
realm.write {
copyToRealm(info)
}
}
}
```
The next step is to create a `Realm` instance, which we use to save the information. To create a `Realm`, an instance of `Configuration` is needed which in turn needs a list of classes that can be saved into the database.
```kotlin
val realm by lazy {
val config = RealmConfiguration.create(setOf(QueryInfo::class))
Realm.open(config)
}
```
This `Realm` instance is sufficient for saving data into the device but in our case, we need to integrate this with Atlas Device Sync to save and share our data into the cloud. To do this, we take four more steps:
1. Create a free MongoDB account.
2. Follow the setup wizard after signing up to create a free cluster.
3. Create an App with App Service UI to enable Atlas Device Sync.
4. Enable Atlas Device Sync using Flexible Sync. Select the App services tab and enable sync, as shown below.
Now let's connect our Realm and Atlas Device Sync. To do this, we need to modify our `Realm` instance creation. Instead of using `RealmConfiguration`, we need to use `SyncConfiguration`.
A `SyncConfiguration` instance can be created using its builder, which needs a user instance and `initialSubscriptions` as additional information. Since our application doesn't have a user registration form, we can use the anonymous sign-in provided by Atlas App Services to identify a user session. So our updated code looks like this:
```kotlin
private val appServiceInstance by lazy {
val configuration =
AppConfiguration.Builder("application-0-elgah").log(LogLevel.ALL).build()
App.create(configuration)
}
```
```kotlin
lateinit var realm: Realm
private suspend fun setupRealmSync() {
val user = appServiceInstance.login(Credentials.anonymous())
val config = SyncConfiguration
.Builder(user, setOf(QueryInfo::class))
.initialSubscriptions { realm ->
// information about the data that can be read or modified.
add(
query = realm.query<QueryInfo>(),
name = "subscription name",
updateExisting = true
)
}
.build()
realm = Realm.open(config)
}
```
```kotlin
suspend fun saveInfo(query: String) {
if (!this::realm.isInitialized) {
setupRealmSync()
}
val info = QueryInfo().apply {
_id = RandomUUID().randomId
queries = query
}
realm.write {
copyToRealm(info)
}
}
```
Now, the last step to complete our application is to write a read function to get all the queries and show them in the UI.
```kotlin
suspend fun getAllData(): CommonFlow<List<String>> {
if (!this::realm.isInitialized) {
setupRealmSync()
}
return realm.query<QueryInfo>().asFlow().map {
it.list.map { it.queries }
}.asCommonFlow()
}
```
Also, you can view or modify the data received via the `saveInfo` function using the `Atlas` UI.
With this done, our application is ready to send and receive data in real time. Yes, in real time. No additional implementation is required.
## Summary
Thank you for reading this article! I hope you find it informative. The complete source code of the app can be found on GitHub.
If you have any queries or comments, you can share them on
the MongoDB Realm forum or tweet me @codeWithMohit. | md | {
"tags": [
"Realm",
"Kotlin",
"Android",
"iOS"
],
"pageDescription": "This is an introductory article on how to build your first Kotlin Multiplatform Mobile using Atlas Device Sync.",
"contentType": "Tutorial"
} | Getting Started Guide for Kotlin Multiplatform Mobile (KMM) with Flexible Sync | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/media-management-integrating-nodejs-azure-blob-storage-mongodb | created | # Building a Scalable Media Management Back End: Integrating Node.js, Azure Blob Storage, and MongoDB
If your goal is to develop a multimedia platform, a robust content management system, or any type of application that requires storing substantial media files, the storage, retrieval, and management of these files are critical to delivering a seamless user experience. This is where a robust media management back end becomes an indispensable component of your tech stack. In this tutorial, we will guide you through the process of creating such a back end utilizing Node.js, Azure Blob Storage, and MongoDB.
Storing media files like images or videos directly in your MongoDB database may not be the most efficient approach. MongoDB has a BSON document size limit of 16MB, which is designed to prevent any single document from consuming too much RAM or bandwidth during transmission. Given the size of many media files, this limitation could be easily exceeded, presenting a significant challenge for storing large files directly in the database.
MongoDB's GridFS is a solution for storing large files beyond the BSON-document size limit by dividing them into chunks and storing these chunks across separate documents. While GridFS is a viable solution for certain scenarios, an efficient approach is to use a dedicated service for storing large media files. Azure Blob (**B**inary **L**arge **Ob**jects) Storage, for example, is optimized for the storage of substantial amounts of unstructured data, which includes binary data like media files. Unstructured data refers to data that does not adhere to a specific model or format.
We'll provide you with a blueprint to architect a backend system capable of handling large-scale media storage with ease, and show you how to post to it using cURL commands. By the end of this article, you'll have a clear understanding of how to leverage Azure Blob Storage for handling massive amounts of unstructured data and MongoDB for efficient data management, all orchestrated with a Node.js API that glues everything together.
To follow along, you will need:
- Node.js and npm installed. Node.js is the runtime environment required to run your JavaScript code server-side, and npm is used to manage the dependencies.
- A MongoDB cluster deployed and configured. If you need help, check out our MongoDB Atlas tutorial on how to get started.
- An Azure account with an active subscription.
## Set up Azure Storage
For this tutorial, we will use the Microsoft Azure Portal to set up our Azure storage. Begin by logging into your Azure account and it will take you to the home page. Once there, use the search bar at the top of the page to search "Storage accounts."
Select Storage accounts from the results, and then click Create to start setting up your new storage account.
Choose your preferred subscription and resource group, then assign a name to your storage account. While the selection of region, performance, and redundancy options will vary based on your application's requirements, the basic tiers will suffice for all the functionalities required in this tutorial.
In the networking section, opt to allow public access from all networks. While this setting is generally not recommended for production environments, it simplifies the process for this tutorial by eliminating the need to set up specific network access rules.
For the rest of the configuration settings, we can accept the default settings. Once your storage account is created, we’re going to navigate to the resource. You can do this by clicking “Go to resource,” or return to the home page and it will be listed under your resources.
Now, we'll proceed to create a container. Think of a container as akin to a directory in a file system, used for organizing blobs. You can have as many containers as you need in a storage account, and each container can hold numerous blobs. To do this, go to the left panel and click on the Containers tab, then choose the “plus container” option. This will open a dialog where you can name your container and, if necessary, alter the access level from the default private setting. Once that's done, you can go ahead and initiate your container.
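If you prefer the command line over the portal, a roughly equivalent step with the Azure CLI looks like this (the account and container names are placeholders):
``` console
az storage container create \
  --account-name <your-storage-account> \
  --name <your-container>
```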
To connect your application to Azure Storage, you'll need to create a `Shared Access Signature` (SAS). SAS provides detailed control over the ways your client can access data. From the menu on the left, select “Shared access signature” and set it up to permit the services and resource types you need. For the purposes of this tutorial, choose “Object” under allowed resource types, which is suitable for blob-level APIs and enables operations on individual blobs, such as upload, download, or delete.
You can leave the other settings at their default values. However, if you're interested in understanding which configurations are ideal for your application, Microsoft’s documentation offers comprehensive guidance. Once you've finalized your settings, click “Generate SAS and connection string.” This action will produce your SAS, displayed below the button.
To connect your application to MongoDB, you'll also need your Atlas connection string: In the Atlas UI, select your cluster, click Connect, and copy the connection string. If you need help, check out our guide in the docs.
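The code that follows assumes an Azure container client and a MongoDB client have already been created near the top of `app.mjs`. Here is a minimal sketch of that setup (the environment variable names are assumptions; the packages used are `@azure/storage-blob` and `mongodb`):
``` js
import http from 'node:http';
import { ContainerClient } from '@azure/storage-blob';
import { MongoClient } from 'mongodb';

// Azure Blob Storage: the container URL plus the SAS token generated earlier
const containerClient = new ContainerClient(
  `${process.env.AZURE_CONTAINER_URL}?${process.env.AZURE_SAS_TOKEN}`
);

// MongoDB Atlas connection string
const client = new MongoClient(process.env.ATLAS_URI);
```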
The `http.createServer()` method creates the HTTP server and takes a request listener function as an argument. In this case, `handleImageUpload` is passed as the request listener, which means that this function will be called every time the server receives an HTTP request.
```js
const server = http.createServer(handleImageUpload);
const port = 3000;
server.listen(port, () => {
console.log(`Server listening on port ${port}`);
});
```
The `handleImageUpload` function is designed to process HTTP POST requests to the /api/upload endpoint, handling the uploading of an image and the storing of its associated metadata. It will call upon a couple of helper functions to achieve this. We’ll break down how these work as well.
```javascript
async function handleImageUpload(req, res) {
res.setHeader('Content-Type', 'application/json');
if (req.url === '/api/upload' && req.method === 'POST') {
try {
// Extract metadata from headers
const {fileName, caption, fileType } = await extractMetadata(req.headers);
// Upload the image as a to Azure Storage Blob as a stream
const imageUrl = await uploadImageStreamed(fileName, req);
// Store the metadata in MongoDB
await storeMetadata(fileName, caption, fileType, imageUrl);
res.writeHead(201);
res.end(JSON.stringify({ message: 'Image uploaded and metadata stored successfully', imageUrl }));
} catch (error) {
console.error('Error:', error);
res.writeHead(500);
res.end(JSON.stringify({ error: 'Internal Server Error' }));
}
} else {
res.writeHead(404);
res.end(JSON.stringify({ error: 'Not Found' }));
}
}
```
If the incoming request is a POST to the correct endpoint, it will call our `extractMetadata` method. This function takes in our header from the request and extracts the associated metadata.
```javascript
async function extractMetadata(headers) {
const contentType = headers['content-type'];
const fileType = contentType.split('/')[1];
const contentDisposition = headers['content-disposition'] || '';
const caption = headers['x-image-caption'] || 'No caption provided';
const matches = /filename="([^"]+)"/i.exec(contentDisposition);
const fileName = matches?.[1] || `image-${Date.now()}.${fileType}`;
return { fileName, caption, fileType };
}
```
It assumes that the 'content-type' header of the request will include the file type (like image/png or image/jpeg). It extracts this file type from the header. It then attempts to extract a filename from the content-disposition header, if provided. If no filename is given, it generates a default one using a timestamp.
Using the extracted or generated filename and file type, along with the rest of our metadata from the header, it calls `uploadImageStreamed`, which uploads the image as a stream directly from the request to Azure Blob Storage.
```javascript
async function uploadImageStreamed(blobName, dataStream) {
const blobClient = containerClient.getBlockBlobClient(blobName);
await blobClient.uploadStream(dataStream);
return blobClient.url;
}
```
In this method, we are creating our `blobClient`. The blobClient opens a connection to an Azure Storage blob and allows us to manipulate it. Here we upload our stream into our blob and finally return our blob URL to be stored in MongoDB.
Once we have our image stored in Azure Blob Storage, we are going to take the URL and store it in our database. The metadata you decide to store will depend on your application. In this example, I add a caption for the file, the name, and the URL, but you might also want information like who uploaded the image or when it was uploaded. This document is inserted into a MongoDB collection using the `storeMetadata` method.
```javascript
async function storeMetadata(name, caption, fileType, imageUrl) {
const collection = client.db("tutorial").collection('metadata');
await collection.insertOne({ name, caption, fileType, imageUrl });
}
```
Here we create and connect to our MongoClient, and insert our document into the metadata collection in the tutorial. Don’t worry if the database or collection don’t exist yet. As soon as you try to insert data, MongoDB will create it.
If the upload and metadata storage are successful, it sends back an HTTP 201 status code and a JSON response confirming the successful upload.
Now we have an API call to upload our image, along with some metadata for said image. Let's test what we built! Run your application by executing the `node app.mjs` command in a terminal that's open in your app's directory. If you’re following along, you’re going to want to substitute the path to the image below to your own path, and whatever you want the metadata to be.
```console
curl -X POST \
-H "Content-Type: image/png" \
-H "Content-Disposition: attachment; filename=\"mongodb-is-webscale.png\"" \
-H "X-Image-Caption: Your Image Caption Here" \
--data-binary @"/path/to/your/mongodb-is-webscale.png" \
http://localhost:3000/api/upload
```
There’s a couple of steps to our cURL command.
- `curl -X POST` initiates a curl request using the POST method, which is commonly used for submitting data to be processed to a specified resource.
- `-H "Content-Type: image/png"` includes a header in the request that tells the server what the type of the content being sent is. In this case, it indicates that the file being uploaded is a PNG image.
- `-H "Content-Disposition: attachment; filename=\"mongodb-is-webscale.png\""` header is used to specify information about the file. It tells the server the file should be treated as an attachment, meaning it should be downloaded or saved rather than displayed. The filename parameter is used to suggest a default filename to be used if the content is saved to a file. (Otherwise, our application will auto-generate one.)
- `-H "X-Image-Caption: Your Image Caption Here"` header is used to dictate our caption. Following the colon, include the message you wish to store in or MongoDB document.
- `--data-binary @"{Your-Path}/mongodb-is-webscale.png"` tells cURL to read data from a file and to preserve the binary format of the file data. The @ symbol is used to specify that what follows is a file name from which to read the data. {Your-Path} should be replaced with the actual path to the image file you're uploading.
- `http://localhost:3000/api/upload` is the URL where the request is being sent. It indicates that the server is running on localhost (the same machine from which the command is being run) on port 3000, and the specific API endpoint handling the upload is /api/upload.
Let’s see what this looks like in our storage. First, let's check our Azure Storage blob. You can view the `mongodb-is-webscale.png` image by accessing the container we created earlier. It confirms that the image has been successfully stored with the designated name.
![Microsoft portal showing our container and the image we transferred in.][5]
Now, how can we retrieve this image in our application? Let’s check our MongoDB database. You can do this through the MongoDB Atlas UI. Select the cluster and the collection you uploaded your metadata to. Here you can view your document.
![MongoDB Atlas showing our metadata document stored in the collection.][6]
You can see we’ve successfully stored our metadata! If you follow the URL, you will be taken to the image you uploaded, stored in your blob.
## Conclusion
Integrating Azure Blob Storage with MongoDB provides an optimal solution for storing large media files, such as images and videos, and provides a solid backbone for building your multimedia applications. Azure Blob Storage, a cloud-based service from Microsoft, excels in handling large quantities of unstructured data. This, combined with the efficient database management of MongoDB, creates a robust system. It not only simplifies the file upload process but also effectively manages relevant metadata, offering a comprehensive solution for data storage needs.
Through this tutorial, we've provided you with the steps to set up a MongoDB Atlas cluster and configure Azure Storage, and we demonstrated how to construct a Node.js API to seamlessly interact with both platforms.
If your goal is to develop a multimedia platform, a robust content management system, or any type of application that requires storing substantial media files, this guide offers a clear pathway to embark on that journey. Utilizing the powerful capabilities of Azure Blob Storage and MongoDB, along with a Node.js API, developers have the tools to create applications that are not only scalable and proficient but also robust enough to meet the demands of today's dynamic web environment.
Want to learn more about what you can do with Microsoft Azure and MongoDB? Check out some of our articles in Developer Center, such as Building a Crypto News Website in C# Using the Microsoft Azure App Service and MongoDB Atlas, where you can learn how to build and deploy a website in just a few simple steps.
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltd9281432bdaca405/65797bf8177bfa1148f89ad7/image3.png
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltc26fafc9dc5d0ee6/65797bf87cf4a95dedf5d9cf/image2.png
[3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltedfcb0b696b631af/65797bf82a3de30dcad708d1/image4.png
[4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt85d7416dc4785d29/65797bf856ca8605bfd9c50e/image5.png
[5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt1c7d6af67a124be6/65797bf97ed7db1ef5c7da2f/image6.png
[6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt6b50b6db724830a1/65797bf812bfab1ac0bc3a31/image1.png | md | {
"tags": [
"Atlas",
"JavaScript",
"Node.js",
"Azure"
],
"pageDescription": "Learn to create your own media management backend, storing your media files in Azure Blob Storage, and associated metadata in MongoDB.",
"contentType": "Tutorial"
} | Building a Scalable Media Management Back End: Integrating Node.js, Azure Blob Storage, and MongoDB | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/atlas-search-java-server | created | # How to Build a Search Service in Java
We need to code our way from the search box to our search index. Performing a search and rendering the results in a presentable fashion, itself, is not a tricky endeavor: Send the user’s query to the search server, and translate the response data into some user interface technology. However, there are some important issues that need to be addressed, such as security, error handling, performance, and other concerns that deserve isolation and control.
A typical three-tier system has a presentation layer that sends user requests to a middle layer, or application server, which interfaces with backend data services. These tiers separate concerns so that each can focus on its own responsibilities.
![Three-tier architecture][1]
This project was built using:
* Gradle 8.5
* Java 21
Standard Java and servlet APIs are used and should work as-is or port easily to later Java versions.
In order to run the examples provided here, the Atlas sample data needs to be loaded and a `movies_index`, as described below, created on the `sample_mflix.movies` collection. If you’re new to Atlas Search, a good starting point is Using Atlas Search from Java.
## Search service design
The front-end presentation layer provides a search box, renders search results, and supplies sorting, pagination, and filtering controls. A middle tier, via an HTTP request, validates and translates the search request parameters into an aggregation pipeline specification that is then sent to the data tier.
A search service needs to be fast, scalable, and handle these basic parameters:
* The query itself: This is what the user entered into the search box.
* Number of results to return: Often, only 10 or so results are needed at a time.
* Starting point of the search results: This allows the pagination of search results.
Also, a performant query should only search and return a small number of fields, though the fields returned don't necessarily need to be the same fields that were searched. For example, when searching movies, you might want to search the `fullplot` field but not return its potentially large text for presentation. Or, you may want to include the year the movie was released in the results but not search the `year` field.
Additionally, a search service must provide a way to constrain search results to, say, a specific category, genre, or cast member, without affecting the relevancy ordering of results. This filtering capability could also be used to enforce access control, and a service layer is an ideal place to add such constraints that the presentation tier can rely on rather than manage.
## Search service interface
Let’s now concretely define the service interface based on the design. Our goal is to support a request, such as _find “Music” genre movies for the query “purple rain” against the `title` and `plot` fields_, returning only five results at a time that only include the fields title, genres, plot, and year. That request from our presentation layer’s perspective is this HTTP GET request:
```
http://service_host:8080/search?q=purple%20rain&limit=5&skip=0&project=title,genres,plot,year&search=title,plot&filter=genres:Music
```
These parameters, along with a `debug` parameter, are detailed in the following table:
|parameter|description|
|-----------|-----------|
|`q`|This is a full-text query, typically the value entered by the user into a search box.|
|`search`|This is a comma-separated list of fields to search across using the query (`q`) parameter.|
|`limit`|Only return this maximum number of results, constrained to a maximum of 25 results.|
|`skip`|Return the results starting after this number of results (up to the `limit` number of results), with a maximum of 100 results skipped.|
|`project`|This is a comma-separated list of fields to return for each document. Add `_id` if that is needed. `_score` is a “pseudo-field” used to include the computed relevancy score.|
|`filter`|`field:value` syntax; supports zero or more `filter` parameters.|
|`debug`|If `true`, include the full aggregation pipeline .explain() output in the response as well.|
### Returned results
Given the specified request, let’s define the response JSON structure to return the requested (`project`) fields of the matching documents in a `docs` array. In addition, the search service returns a `request` section showing both the explicit and implicit parameters used to build the Atlas $search pipeline and a `meta` section that will return the total count of matching documents. This structure is entirely our design, not meant to be a direct pass-through of the aggregation pipeline response, allowing us to isolate, manipulate, and map the response as it best fits our presentation tier’s needs.
```
{
"request": {
"q": "purple rain",
"skip": 0,
"limit": 5,
"search": "title,plot",
"project": "title,genres,plot,year",
"filter":
"genres:Music"
]
},
"docs": [
{
"plot": "A young musician, tormented by an abusive situation at home, must contend with a rival singer, a burgeoning romance and his own dissatisfied band as his star begins to rise.",
"genres": [
"Drama",
"Music",
"Musical"
],
"title": "Purple Rain",
"year": 1984
},
{
"plot": "Graffiti Bridge is the unofficial sequel to Purple Rain. In this movie, The Kid and Morris Day are still competitors and each runs a club of his own. They make a bet about who writes the ...",
"genres": [
"Drama",
"Music",
"Musical"
],
"title": "Graffiti Bridge",
"year": 1990
}
],
"meta": [
{
"count": {
"total": 2
}
}
]
}
```
## Search service implementation
Code! That’s where it’s at. Keeping things as straightforward as possible so that our implementation is useful for every front-end technology, we’re implementing an HTTP service that works with standard GET request parameters and returns easily digestible JSON. And Java is our language of choice here, so let’s get to it. Coding is an opinionated endeavor, so we acknowledge that there are various ways to do this in Java and other languages — here’s one opinionated (and experienced) way to go about it.
To run with the configuration presented here, a good starting point is to get up and running with the examples from the article Using Atlas Search from Java. Once you've got that running, create a new index, called `movies_index`, with a custom index configuration as specified in the following JSON:
```
{
"analyzer": "lucene.english",
"searchAnalyzer": "lucene.english",
"mappings": {
"dynamic": true,
"fields": {
"cast":
{
"type": "token"
},
{
"type": "string"
}
],
"genres": [
{
"type": "token"
},
{
"type": "string"
}
]
}
}
}
```
Here’s the skeleton of the implementation, a standard `doGet` servlet entry point, grabbing all the parameters we’ve specified:
```
public class SearchServlet extends HttpServlet {
private MongoCollection<Document> collection;
private String indexName;
private Logger logger;
// ...
@Override
protected void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException {
String q = request.getParameter("q");
String searchFieldsValue = request.getParameter("search");
String limitValue = request.getParameter("limit");
String skipValue = request.getParameter("skip");
String projectFieldsValue = request.getParameter("project");
String debugValue = request.getParameter("debug");
String[] filters = request.getParameterMap().get("filter");
// ...
}
}
```
SearchServlet
Notice that a few instance variables have been defined, which get initialized in the standard servlet `init` method from values specified in the `web.xml` deployment descriptor, as well as the `ATLAS_URI` environment variable:
```
@Override
public void init(ServletConfig config) throws ServletException {
super.init(config);
logger = Logger.getLogger(config.getServletName());
String uri = System.getenv("ATLAS_URI");
if (uri == null) {
throw new ServletException("ATLAS_URI must be specified");
}
String databaseName = config.getInitParameter("database");
String collectionName = config.getInitParameter("collection");
indexName = config.getInitParameter("index");
MongoClient mongo_client = MongoClients.create(uri);
MongoDatabase database = mongo_client.getDatabase(databaseName);
collection = database.getCollection(collectionName);
logger.info("Servlet " + config.getServletName() + " initialized: " + databaseName + " / " + collectionName + " / " + indexName);
}
```
SearchServlet#init
For the best protection of our `ATLAS_URI` connection string, we define it in the environment so that it’s not hard-coded nor visible within the application itself other than at initialization, whereas we specify the database, collection, and index names within the standard `web.xml` deployment descriptor which allows us to define end-points for each index that we want to support. Here’s a basic web.xml definition:
```
<servlet>
    <servlet-name>SearchServlet</servlet-name>
    <servlet-class>com.mongodb.atlas.SearchServlet</servlet-class>
    <load-on-startup>1</load-on-startup>
    <init-param>
        <param-name>database</param-name>
        <param-value>sample_mflix</param-value>
    </init-param>
    <init-param>
        <param-name>collection</param-name>
        <param-value>movies</param-value>
    </init-param>
    <init-param>
        <param-name>index</param-name>
        <param-value>movies_index</param-value>
    </init-param>
</servlet>
<servlet-mapping>
    <servlet-name>SearchServlet</servlet-name>
    <url-pattern>/search</url-pattern>
</servlet-mapping>
```
web.xml
### GETting the search results
Requesting search results is a stateless operation with no side effects to the database and works nicely as a straightforward HTTP GET request, as the query itself should not be a very long string. Our front-end tier can constrain the length appropriately. Larger requests could be supported by adjusting to POST/getPost, if needed.
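For example, with the service running locally, the request from the interface section can be issued with curl (the host, port, and context path are assumptions that depend on how the servlet container is configured):
```
curl "http://localhost:8080/search?q=purple%20rain&limit=5&skip=0&project=title,genres,plot,year&search=title,plot&filter=genres:Music"
```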
### Aggregation pipeline behind the scenes
Ultimately, to support the information we want returned (as shown above in the example response), the request example shown above gets transformed into this aggregation pipeline request:
```
[
{
"$search": {
"compound": {
"must": [
{
"text": {
"query": "purple rain",
"path": [
"title",
"plot"
]
}
}
],
"filter": [
{
"equals": {
"path": "genres",
"value": "Music"
}
}
]
},
"index": "movies_index",
"count": {
"type": "total"
}
}
},
{
"$facet": {
"docs": [
{
"$skip": 0
},
{
"$limit": 5
},
{
"$project": {
"title": 1,
"genres": 1,
"plot": 1,
"year": 1,
"_id": 0,
}
}
],
"meta": [
{
"$replaceWith": "$$SEARCH_META"
},
{
"$limit": 1
}
]
}
}
]
```
There are a few aspects to this generated aggregation pipeline worth explaining further:
* The query (`q`) is translated into a `text` operator over the specified `search` fields. Both of those parameters are required in this implementation.
* `filter` parameters are translated into non-scoring `filter` clauses using the `equals` operator. The `equals` operator requires string fields to be indexed as a `token` type; this is why you see the `genres` and `cast` fields set up to be both `string` and `token` types. Those two fields can be searched full-text-wise (via the `text` or other string-type supporting operators) or used as exact match `equals` filters.
* The count of matching documents is requested in $search, which is returned within the `$$SEARCH_META` aggregation variable. Since this metadata is not specific to a document, it needs special handling to be returned from the aggregation call to our search server. This is why the `$facet` stage is leveraged, so that this information is pulled into a `meta` section of our service’s response.
The use of `$facet` is a bit of a tricky trick, which gives our aggregation pipeline response room for future expansion too.
>The `$facet` aggregation stage is confusingly named the same as the
> Atlas Search `facet` collector. Search result facets give a group
> label and count of that group within the matching search results.
> For example, faceting on `genres` (which requires an index
> configuration adjustment from the example here) would provide, in
> addition to the documents matching the search criteria, a list of all
>`genres` within those search results and the count of how many of
> each. Adding the `facet` operator to this search service is on the
> roadmap mentioned below.
### $search in code
Given a query (`q`), a list of search fields (`search`), and filters (zero or more `filter` parameters), building the `$search` stage programmatically is straightforward using the Java driver’s convenience methods:
```
// $search
List<SearchPath> searchPath = new ArrayList<>();
for (String search_field : searchFields) {
searchPath.add(SearchPath.fieldPath(search_field));
}
CompoundSearchOperator operator = SearchOperator.compound()
.must(List.of(SearchOperator.text(searchPath, List.of(q))));
if (filterOperators.size() > 0)
operator = operator.filter(filterOperators);
Bson searchStage = Aggregates.search(
operator,
SearchOptions.searchOptions()
.option("scoreDetails", debug)
.index(indexName)
.count(SearchCount.total())
);
```
$search code
We’ve added the `scoreDetails` feature of Atlas Search when `debug=true`, allowing us to introspect the gory Lucene scoring details only when desired; requesting score details is a slight performance hit and is generally only useful for troubleshooting.
### Field projection
The last interesting bit of our service implementation entails field projection. Returning the `_id` field, or not, requires special handling. Our service code looks for the presence of `_id` in the `project` parameter and explicitly turns it off if not specified. We have also added a facility to include the document’s computed relevancy score, if desired, by looking for a special `_score` pseudo-field specified in the `project` parameter. Programmatically building the projection stage looks like this:
```
List<String> projectFields = new ArrayList<>();
if (projectFieldsValue != null) {
projectFields.addAll(List.of(projectFieldsValue.split(",")));
}
boolean include_id = false;
if (projectFields.contains("_id")) {
include_id = true;
projectFields.remove("_id");
}
boolean includeScore = false;
if (projectFields.contains("_score")) {
includeScore = true;
projectFields.remove("_score");
}
// ...
// $project
List<Bson> projections = new ArrayList<>();
if (projectFieldsValue != null) {
// Don't add _id inclusion or exclusion if no `project` parameter specified
projections.add(Projections.include(projectFields));
if (include_id) {
projections.add(Projections.include("_id"));
} else {
projections.add(Projections.excludeId());
}
}
if (debug) {
projections.add(Projections.meta("_scoreDetails", "searchScoreDetails"));
}
if (includeScore) {
projections.add(Projections.metaSearchScore("_score"));
}
```
$project in code
### Aggregating and responding
With the parameter wrangling and stage building done, the rest is straightforward: we build the full pipeline, make our call to Atlas, build a JSON response, and return it to the calling client. The only unique thing here is adding the `.explain()` call when `debug=true` so that our client can see the full picture of what happened from the Atlas perspective:
```java
AggregateIterable<Document> aggregationResults = collection.aggregate(List.of(
    searchStage,
    facetStage
));

Document responseDoc = new Document();
responseDoc.put("request", new Document()
    .append("q", q)
    .append("skip", skip)
    .append("limit", limit)
    .append("search", searchFieldsValue)
    .append("project", projectFieldsValue)
    .append("filter", filters == null ? Collections.EMPTY_LIST : List.of(filters)));

if (debug) {
  responseDoc.put("debug", aggregationResults.explain().toBsonDocument());
}

// When using $facet stage, only one "document" is returned,
// containing the keys specified above: "docs" and "meta"
Document results = aggregationResults.first();
if (results != null) {
  for (String s : results.keySet()) {
    responseDoc.put(s, results.get(s));
  }
}

response.setContentType("text/json");
PrintWriter writer = response.getWriter();
writer.println(responseDoc.toJson());
writer.close();

logger.info(request.getServletPath() + "?" + request.getQueryString());
```
Aggregate and return results code
## Taking it to production
This is a standard Java servlet extension designed to run in Tomcat, Jetty, or other servlet API-compliant containers. The build uses Gretty, which lets a developer run either `jettyRun` or `tomcatRun` to start this example Java search service.
In order to build a distribution that can be deployed to a production environment, run:
```
./gradlew buildProduct
```
## Future roadmap
Our search service, as is, is robust enough for basic search use cases, but there is room for improvement. Here are some ideas for the future evolution of the service:
* Add negative filters. Currently, we support positive filters with the `filter=field:value` parameter. A negative filter could have a minus sign in front. For example, to exclude “Drama” movies, support for `filter=-genres:Drama` could be implemented (see the sketch after this list).
* Support highlighting, to return snippets of field values that match query terms.
* Implement faceting.
* And so on… see the issues list for additional ideas and to add your own.
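To illustrate the first roadmap item, one plausible way a negative filter could translate at the pipeline level is to move negated terms from the compound operator's `filter` clause into `mustNot`. This is only a sketch of the idea, not something the service implements today; the query and field values are examples.

```javascript
// Hypothetical translation of filter=-genres:Drama: positive filters stay in
// `filter`, while negated ones become `mustNot` clauses.
const operator = {
  compound: {
    must: [{ text: { query: "purple rain", path: ["title", "plot"] } }],
    mustNot: [{ equals: { path: "genres", value: "Drama" } }]
  }
};
```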
Because the service layer is a middle tier that can be deployed independently, many of these improvements can be added without requiring changes to the front end or the data tier.
## Conclusion
Implementing a middle-tier search service provides numerous benefits from security, to scalability, to being able to isolate changes and deployments independent of the presentation tier and other search clients. Additionally, a search service allows clients to easily leverage sophisticated search capabilities using standard HTTP and JSON techniques.
For the fundamentals of using Java with Atlas Search, check out Using Atlas Search from Java | MongoDB. As you begin leveraging Atlas Search, be sure to check out the Query Analytics feature to assist in improving your search results.
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt749f7a8823712948/65ca9ba8bf8ac48b17c5b8e8/three-tier.png | md | {
"tags": [
"Atlas",
"Java"
],
"pageDescription": "In this article, we are going to detail an HTTP Java search service designed to be called from a presentation tier.",
"contentType": "Article"
} | How to Build a Search Service in Java | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/java/change-streams-in-java | created | # Using MongoDB Change Streams in Java
MongoDB has come a long way from being a database engine developed at the internet company DoubleClick to becoming a leading NoSQL data store that serves large clients across many domains.
As the database engine grew, MongoDB kept adding features and improvements that make it a go-to NoSQL database for new requirements and product development.
One such addition to the MongoDB toolkit is change streams, introduced in the MongoDB 3.6 release. Before version 3.6, developers achieved similar functionality by keeping a tailable cursor open. Change streams enable real-time streaming of data change events from the database.
Event-driven streaming of data is a critical requirement in many products being built today. Many applications need changes from one data source to propagate to another in real time, or need to perform certain actions whenever the data in the source changes. Logging is one such use case, where an application might need to collect, process, and transmit logs in real time and would therefore require a streaming tool or platform like change streams.
## What are change streams in MongoDB?
As the word indicates, change streams are the MongoDB feature that captures "change" and "streams" it to the desired target data source.
It is an API that allows you to subscribe your application to any change in a collection, a database, or even the entire deployment. There is no middleware to set up and no data polling to initiate in order to leverage this event-driven, real-time data capture feature.
MongoDB uses replication as the underlying technology for change streams, relying on the operation logs generated for data replication between replica set members.
The oplog is a special capped collection that records all operations that modify the data stored in the databases. The larger the oplog, the more operations it can record. Using the oplog for change streams guarantees that change events are delivered in the same order as the operations were applied to the database.
As seen in the above flow, when there is a CRUD operation on the MongoDB database, the oplog captures it, and MongoDB uses those oplog entries to stream the changes to real-time applications and data receivers.
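To make this more tangible, here is roughly what a change event for an insert looks like when it reaches your application. This is a trimmed-down sketch; the database, collection, and field values are examples, and the exact fields vary by operation type.

```javascript
// Simplified shape of an "insert" change event (fields trimmed for brevity)
const changeEvent = {
  _id: { _data: "8263A5F0C1000000012B..." },    // resume token
  operationType: "insert",
  ns: { db: "shop", coll: "vendor_products" },  // namespace the change came from
  documentKey: { _id: ObjectId("64f1c0d2a5b4c3e2f1a0b9c8") },
  fullDocument: {
    _id: ObjectId("64f1c0d2a5b4c3e2f1a0b9c8"),
    productName: "T-shirt",
    colors: ["red", "blue"],
    sizes: ["S", "M", "L"]
  }
};
```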
## Kafka vs change streams
If we compare MongoDB and Kafka technologies, both would fall under completely separate buckets. MongoDB is classified as a NoSQL database, which can store JSON-like document structures. Kafka is an event streaming platform for real-time data feeds. It is primarily used as a publisher-subscriber model messaging service that provides a replicated message log system to stream data from one source to another.
Kafka helps to ingest huge data sets from desired data sources, filter/aggregate this data, and send it to the intended data source reliably and efficiently. Although MongoDB is a database system and its use case is miles apart from a messaging system like Kafka, the change streams feature does provide it with functionalities similar to those of Kafka.
Basically, change streams act as a messaging service that streams real-time changes from any collection in your MongoDB database. They let you aggregate or filter that data and store it back into MongoDB. In short, if you have a narrow use case that does not require a generalized solution and is confined to your MongoDB data source, change streams can be your streaming solution. If, however, you want to involve data sources outside of MongoDB and need a generalized solution for messaging data sets, Kafka makes more sense.
With change streams, you do not need a separate license or server to host your messaging service. Unlike with Kafka, you get the best of both worlds: a great database and an efficient messaging system.
MongoDB does provide Kafka connectors which could be used to read data in and out of Kafka topics in real-time, but if your use case is not big enough to invest in Kafka, change streams could be the perfect substitute for streaming your data.
Moreover, the Kafka connectors use change streams under the hood, so you would have to build out your Kafka setup by configuring connector services and starting source and sink connectors for MongoDB. With change streams, you simply watch the collection you care about, without any prerequisite setup.
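As an illustration of how little setup that involves, the following is all it takes to start watching a collection from mongosh. The collection name and pipeline are just examples, and the Java equivalent is shown later in this article.

```javascript
// Open a change stream on a collection and react to newly inserted documents (mongosh)
const watchCursor = db.vendor_products.watch([
  { $match: { operationType: "insert" } }
]);

while (!watchCursor.isExhausted()) {
  if (watchCursor.hasNext()) {
    const event = watchCursor.next();
    print(`New product inserted: ${event.documentKey._id}`);
  }
}
```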
## How Change Streams works
Change streams, once open for a collection, act as an event monitoring mechanism on your database/collection or, in some cases, documents within your database.
The core functionality lies in helping you "watch" an entity for changes. The background work required for this streaming mechanism is handled by functionality MongoDB already has: the oplog.
Although it comes with the overhead of consuming system resources, this event monitoring on your source collection has a place in many business-critical scenarios, like capturing application log input or monitoring inventory changes for an e-commerce shop. So, it's important to match change streams to the right use case.
As the oplog drives the entire change stream mechanism, a replica set (even a single-node one) is the first prerequisite for using change streams. You will also need to:
- Start a change stream on the intended collection or database.
- Have the necessary CPU resources available in the cluster.
Instead of setting up a self-hosted cluster to fulfill the above checklist, there is always the option of using the cloud-based hosted solution, MongoDB Atlas. With Atlas, you can get a ready-to-use setup in a few clicks. Since change streams are resource-intensive, keep the cost factor in mind when spinning up an instance in Atlas for your data streaming.
## Implementing change streams in your Java Spring application
In today's backend development world, streams are a hot topic because they help developers put a systematic pipeline in place to process the persisted data their applications use. Streaming data helps to generate reports, power notification mechanisms for certain criteria, or, in some cases, alter a schema based on the events received through the stream.
Here, I will demonstrate how to implement a change stream for a Java Spring application.
Once the prerequisite to enable change streams is completed, the steps at the database level are almost done. You will now need to choose the collection on which you want to enable change streams.
Let's consider that you have a Java Spring application for an e-commerce website, and you have a collection called `e_products`, which holds product information of the product being sold on the website.
To keep it simple, the fields of the collection can be:
```json
{"_id" , "productName", "productDescription" , "price" , "colors" , "sizes"}
```
Now, these fields are populated from your collection through your Java API to show the product information on your website when a product is searched for or clicked on.
Now, say there exists another collection, `vendor_products`, which holds data from another source (e.g., another product vendor). In this case, it holds some of the products in your `e_products` but with more sizes and color options.
You want your application to be synced with the latest available size and color for each product. Change streams can help you do just that. They can be enabled on your `vendor_products` collection to watch for any new product inserted, and then for each of the insertion events, you could have some logic to add the colors/sizes to your `e_products` collection used by your application.
You could create a microservice application specifically for this use case. By using a dedicated microservice, you could allocate sufficient CPU/memory for the application to have a thread to watch on your `vendor_products` collection. The configuration class in your Spring application would have the following code to start the watch:
```java
@Async
public void runChangeStreamConfig() throws InterruptedException {
    // Register a POJO codec so change stream documents map to the VendorProducts class
    CodecRegistry pojoCodecRegistry = fromRegistries(MongoClientSettings.getDefaultCodecRegistry(),
            fromProviders(PojoCodecProvider.builder().automatic(true).build()));
    MongoCollection<VendorProducts> vendorCollection =
            mongoTemplate.getDb().withCodecRegistry(pojoCodecRegistry).getCollection("vendor_products", VendorProducts.class);
    // Only react to newly inserted documents
    List<Bson> pipeline = singletonList(match(eq("operationType", "insert")));
    vendorCollection.watch(pipeline).forEach(s ->
            mergeFieldsVendorToProducts(s.getDocumentKey().get("_id").asString().getValue())
    );
}
```
In the above code snippet, you can see how the collection to be watched is selected and that the monitored operation type is "insert." This will only check for new products added to this collection. If needed, we could also monitor for "update" or "delete" operations.
Once this is in place, whenever a new product is added to `vendor_products`, this method is invoked and the `_id` of that product is passed to the `mergeFieldsVendorToProducts()` method, where you can write your logic to merge the various properties from `vendor_products` into the `e_products` collection.
```java
// Fragment from mergeFieldsVendorToProducts(): for each matching vendor document (s),
// copy the relevant field (e.g., "colors" or "sizes") onto the e_products document
// that shares the same _id.
forEach(s ->
{
    Query query = new Query();
    query.addCriteria(Criteria.where("_id").is(s.get("_id")));
    Update update = new Update();
    update.set(field, s.get(field));
    mongoTemplate.updateFirst(query, update, EProducts.class);
})
```
This is a small use case for change streams; there are many such examples where change streams can come in handy. It's about using this tool for the right use case.
## Conclusion
In conclusion, change streams in MongoDB provide a powerful and flexible way to monitor changes to your database in real time. Whether you need to react to changes as they happen, synchronize data across multiple systems, or build custom event-driven workflows, change streams can help you achieve these goals with ease.
By leveraging the power of change streams, you can improve the responsiveness and efficiency of your applications, reduce the risk of data inconsistencies, and gain deeper insights into the behavior of your database.
While there is a bit of a learning curve when working with change streams, MongoDB provides comprehensive documentation and a range of examples to help you get started. With a little practice, you can take advantage of the full potential of change streams and build more robust, scalable, and resilient applications. | md | {
"tags": [
"Java",
"MongoDB"
],
"pageDescription": "Change streams are an API that allows the user to subscribe to their application to any change in collection, database, or even on the entire deployment. There is no middleware or data polling action to be initiated by the user to leverage this event-driven, real-time data capture feature. Learn how to use change streams with Java in this article.\n",
"contentType": "Article"
} | Using MongoDB Change Streams in Java | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/developing-applications-mongodb-atlas-serverless-instances | created | # Developing Your Applications More Efficiently with MongoDB Atlas Serverless Instances
If you're a developer, worrying about your database is not necessarily something you want to do. You likely don't want to spend your time provisioning or sizing clusters as the demand of your application changes. You probably also don't want to worry about breaking the bank if you've scaled something incorrectly.
With MongoDB Atlas, you have a few deployment options to choose from when it comes to your database. While you could choose a pre-provisioned shared or dedicated cluster, you're still stuck having to size and estimate the database resources you will need and subsequently managing your cluster capacity to best fit demand. While a pre-provisioned cluster isn’t necessarily a bad thing, it might not make sense if your development becomes idle or you’re expecting frequent periods of growth or decline. Instead, you can opt for a serverless instance to help remove the capacity management burden and free up time to dedicate to writing code. Serverless instances provide an on-demand database endpoint for your application that will automatically scale up and down to zero with application demand and only charge you based on your usage.
In this short and sweet tutorial, we'll see how easy it is to get started with a MongoDB Atlas serverless instance and how to begin to develop an application against it.
## Deploy a MongoDB Atlas serverless instance
We're going to start by deploying a new MongoDB Atlas serverless instance. There are numerous ways to accomplish deploying MongoDB, but for this example, we'll stick to the web dashboard and some point and click.
From the MongoDB Atlas dashboard, click the "Create" button.
Choose "Serverless" as well as a cloud vendor where this instance should live.
If possible, choose a cloud vendor that matches where your application will live. This will give you the best possible latency between your database and your application.
Once you choose to click the "Create Instance" button, your instance is ready to go!
You're not in the clear yet though. You won't be able to use your Atlas serverless instance outside of the web dashboard until you create some database access and network access rules.
We'll start with a new database user.
Choose the type of authentication that makes the most sense for you. To keep things simple for this tutorial, I recommend choosing the "Password" option.
While you could use a "Built-in Role" when it comes to user privileges, your best bet for any application is to define "Specific Privileges" depending on what the user should be allowed to do. For this project, we'll be using an "example" database and a "people" collection, so it makes sense to give only that database and collection readWrite access.
Use your best judgment when creating users and defining access.
With a user created, we can move onto the network access side of things. The final step before we can start developing against our database.
In the "Network Access" tab, add the IP addresses that should be allowed access. If you're developing and testing locally like I am, just add your local IP address. Just remember to add the IP range for your servers or cloud vendor when the time comes. You can also take advantage of private networking if needed.
With the database and network access out of the way, let's grab the URI string that we'll be using in the next step of the tutorial.
From the Database tab, click the "Connect" button for your serverless instance.
Choose the programming language you wish to use and make note of the URI.
Need more help getting started with serverless instances? Check out this video that can walk you through it.
## Interacting with an Atlas serverless instance using a popular programming technology
At this point, you should have an Atlas serverless instance deployed. We're going to take a moment to connect to it from application code and do some interactions, such as basic CRUD.
For this particular example, we'll use JavaScript with the MongoDB Node.js driver, but the same rules and concepts apply, minus the language differences for the programming language that you wish to use.
On your local computer, create a project directory and navigate into it with your command line. You'll want to execute the following commands once it becomes your working directory:
```bash
npm init -y
npm install mongodb
touch main.js
```
With the above commands, we've initialized a Node.js project, installed the MongoDB Node.js driver, and created a **main.js** file to contain our code.
Open the **main.js** file and add the following JavaScript code:
```javascript
const { MongoClient } = require("mongodb");
const mongoClient = new MongoClient("MONGODB_URI_HERE");
(async () => {
try {
await mongoClient.connect();
const database = mongoClient.db("example");
const collection = database.collection("people");
const inserted = await collection.insertOne({
"firstname": "Nic",
"lastname": "Raboy",
"location": "California, USA"
});
const found = await collection.find({ "lastname": "Raboy" }).toArray();
console.log(found);
const deleted = await collection.deleteMany({ "lastname": "Raboy" });
} catch (error) {
console.error(error);
} finally {
mongoClient.close();
}
})();
```
So, what's happening in the above code?
First, we define our client with the URI string for our serverless instance. This is the same string that you took note of earlier in the tutorial and it should contain a username and password.
With the client, we can establish a connection and get a reference to a database and collection that we want to use. The database and collection does not need to exist prior to running your application.
Next, we are doing three different operations with the MongoDB Query API. First, we are inserting a new document into our collection. After the insert is complete, assuming our try/catch block didn't find an error, we find all documents where the lastname matches. For this example, there should only ever be one document, but you never know what your code looks like. If a document was found, it will be printed to the console. Finally, we are deleting any document where the lastname matches.
By the end of this, no documents should exist in your collection, assuming you are following along with my example. However, a document did (at some point in time) exist in your collection — we just deleted it.
Alright, so we have a basic example of how to build an application around an on-demand database, but it didn’t really highlight the benefit of why you’d want to. So, what can we do about that?
## Pushing an Atlas serverless instance with a plausible application scenario
We know that pre-provisioned and serverless clusters work well and from a development perspective, you’re going to end up with the same results using the same code.
Let’s come up with a scenario where a serverless instance in Atlas might lower your development costs and reduce the scaling burden to match demand. Let’s say that you have an online store, but not just any kind of online store. This online store sees mild traffic most of the time and a 1000% spike in traffic every Friday between the hours of 9AM and 12PM because of a lightning type deal that you run.
We’ll leave mild traffic up to your imagination, but a 1000% bump is nothing small and would likely require some kind of scaling intervention every Friday on a pre-provisioned cluster. That, or you’d need to pay for a larger sized database.
Let’s visualize this example with the following Node.js code:
```javascript
const { MongoClient } = require("mongodb");
const Express = require("express");
const BodyParser = require("body-parser");
const app = Express();
app.use(BodyParser.json());
const mongoClient = new MongoClient("MONGODB_URI_HERE");
var database, purchasesCollection, dealsCollection;
app.get("/deal", async (request, response) => {
try {
const deal = await dealsCollection.findOne({ "date": "2022-10-07" });
response.send(deal || {});
} catch (error) {
response.status(500).send({ "message": error.message });
}
});
app.post("/purchase", async (request, response) => {
try {
if(!request.body) {
throw { "message": "The request body is missing!" };
}
const receipt = await purchasesCollection.insertOne(
{
"sku": (request.body.sku || "000000"),
"product_name": (request.body.product_name || "Pokemon Scarlet"),
"price": (request.body.price || 59.99),
"customer_name": (request.body.customer_name || "Nic Raboy"),
"purchase_date": "2022-10-07"
}
);
response.send(receipt || {});
} catch (error) {
response.status(500).send({ "message": error.message });
}
});
app.listen(3000, async () => {
try {
await mongoClient.connect();
database = mongoClient.db("example");
dealsCollection = database.collection("deals");
purchasesCollection = database.collection("receipts");
console.log("SERVING AT :3000...");
} catch (error) {
console.error(error);
}
});
```
In the above example, we have an Express Framework-powered web application with two endpoint functions. We have an endpoint for getting the deal and we have an endpoint for creating a purchase. The rest can be left up to your imagination.
To load test this application with bursts and simulate the potential value of a serverless instance, we can use a tool like Apache JMeter.
With JMeter, you can define the number of threads and iterations it uses when making HTTP requests.
Remember, we’re simulating a burst in this example. If you do decide to play around with JMeter and you go excessive on the burst, you could end up with an interesting bill. If you’re interested to know how serverless is billed, check out the pricing page in the documentation.
Inside your JMeter Thread Group, you’ll want to define what is happening for each thread or iteration. In this case, we’re doing an HTTP request to our Node.js API.
Since the API expects JSON, we can define the header information for the request.
Once you have the thread information, the HTTP request information, and the header information, you can run JMeter and you’ll end up with a lot of activity against not only your web application, but also your database.
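If you would rather script the burst than configure JMeter, a rough Node.js sketch along these lines can generate similar concurrent traffic against the `/purchase` endpoint from the example above. Node.js 18+ is assumed for the built-in `fetch`, and the wave counts and payload values are arbitrary.

```javascript
// Rough burst simulator for the /purchase endpoint shown above.
// Assumes Node.js 18+ (built-in fetch) and the Express app running locally.
const TARGET = "http://localhost:3000/purchase";

async function burst(concurrency) {
  const requests = Array.from({ length: concurrency }, (_, i) =>
    fetch(TARGET, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        sku: String(i).padStart(6, "0"),
        product_name: "Pokemon Scarlet",
        price: 59.99,
        customer_name: "Load Tester"
      })
    }).then((res) => res.status)
  );
  const statuses = await Promise.all(requests);
  console.log(`Wave complete: ${statuses.length} requests sent`);
}

(async () => {
  // Simulate a short spike: 10 waves of 50 concurrent purchases.
  for (let wave = 0; wave < 10; wave++) {
    await burst(50);
  }
})();
```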
Again, a lot of this example has to be left to your imagination because to see the scaling benefits of a serverless instance, you’re going to need a lot of burst traffic that isn’t easily simulated during development. However, it should leave you with some ideas.
## Conclusion
You just saw how quick it is to develop on MongoDB Atlas without having to burden yourself with sizing your own cluster. With a MongoDB Atlas serverless instance, your database will scale to meet the demand of your application and you'll be billed for that demand. This will protect you from paying for improperly sized clusters that are running non-stop. It will also save you the time you would have spent making size related adjustments to your cluster.
The code in this example works regardless if you are using an Atlas serverless instance or a pre-provisioned shared or dedicated cluster.
Got a question regarding this example, or want to see more? Check out the MongoDB Community Forums to see what's happening. | md | {
"tags": [
"Atlas",
"Serverless"
],
"pageDescription": "In this short and sweet tutorial, we'll see how easy it is to get started with a MongoDB Atlas serverless instance and how to begin to develop an application against it.",
"contentType": "Tutorial"
} | Developing Your Applications More Efficiently with MongoDB Atlas Serverless Instances | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/atlas-app-services-aws-bedrock-rag | created | # MongoDB Atlas Vector Search and AWS Bedrock modules RAG tutorial
Welcome to our in-depth tutorial on MongoDB Atlas Vector Search and AWS Bedrock modules, tailored for creating a versatile database assistant for product catalogs. This tutorial will guide you through building an application that simplifies product searches using diverse inputs such as individual products, lists, images, and even recipes. Imagine finding all the necessary ingredients for a recipe with just a simple search. Whether you're a developer or a product manager, this guide will equip you with the skills to create a powerful tool for navigating complex product databases.
Some examples of what this application can do:
### Single product search:
**Search query**: `"Organic Almonds"`
**Result**: displays the top-rated or most popular organic almond product in the catalog
### List-based search:
**Search query**: `["Rice", "Black Beans", "Avocado"]`
**Result**: shows a list of products including rice, black beans, and avocados, along with their different brands and prices
### Image-based search:
**Search query**: [image of a whole wheat bread loaf]
**Result**: identifies and shows the top-picked brand of whole wheat bread available in the catalog
### Recipe-based search:
**Search query**: `"Chocolate Chip Cookie Recipe"`
**Result**: lists all ingredients needed for the recipe, like flour, chocolate chips, sugar, butter, etc., and suggests relevant products
Demo application search functionality
Let’s start!
## High-level architecture
1\. Frontend VUE js application implementing a chat application
2\. Trigger:
- A trigger watching for inserted “product” documents that uses function logic to set vector embeddings on the product “title,” “img,” or both.
3\. App services to facilitate a backend hosting the endpoints to interact with the database and AI models
- **getSearch** — the main search endpoint, which receives a search string or base64 image and outputs a summarized document
- **getChats** — an endpoint to retrieve user chats array
- **saveChats** — an endpoint to save the chats array
4\. MongoDB Atlas database with a vector search index to retrieve relevant documents for RAG
## Deploy a free cluster
Before moving forward, ensure the following prerequisites are met:
- Database cluster setup on MongoDB Atlas
- Obtained the URI to your cluster
For assistance with database cluster setup and obtaining the URI, refer to our guide for setting up a MongoDB cluster, and our guide to get your connection string.
Preferably the database location will be in the same AWS region as the Bedrock enabled modules.
MongoDB Atlas has a rich set of application services that allow a developer to host an entire application logic (authentication, permissions, functions, triggers, etc.) with a generous free tier.
We will leverage this ability to streamline development and data integration in minutes of work.
## Setup app services
1\. Start by navigating to the App Services tab.
2\. You’ll be prompted to select a starter template. Let’s go with the **Build your own App** option that’s already selected. Click the **Next** button.
3\. Next, you need to configure your application.
- Data Source: Since we have created a single cluster, Atlas already linked it to our application.
- (Optional) Application Name: Let’s give our application a meaningful name, such as bedrockDemo. (This option might be chosen for you automatically as "Application-0" for the first application.)
- (Optional) App Deployment Model: Change the deployment to Single Region and select the region closest to your physical location.
4\. Click the **Create App Service** button to create your first App Services application!
5\. Once the application is created, we need to verify data sources are linked to our cluster. Visit the **Linked Data Sources** tab:
Our Atlas cluster with a linked name of `mongodb-atlas`
## Setup secrets and trigger
We will use the app services to create a Value and a Secret for AWS access and secret keys to access our Bedrock modules.
Navigate to the **Values** tab and click **Create New Value** by following this configuration:
| **Value Type** | **Name** | **Value** |
| --- | --- | --- |
| Secret | AWS_ACCESS_KEY | `<your AWS access key>` |
| Secret | AWS_SECRET_KEY | `<your AWS secret key>` |
| Value | AWS_ACCESS_KEY| Link to SECRET: AWS_ACCESS_KEY|
| Value | AWS_SECRET_KEY | Link to SECRET: AWS_SECRET_KEY|
By the end of this process you should have:
Once done, press **Review Draft & Deploy** and then **Deploy**.
### Add aws sdk dependency
The AWS SDK Bedrock client is the easiest and most convenient way to interact with AWS bedrock models.
1\. In your app services application, navigate to the **Functions** tab and click the **Dependencies** tab.
2\. Click **Add Dependency** and add the following dependency:
```
@aws-sdk/client-bedrock-runtime
```
3\. Click **Add** and wait for it to be successfully added.
4\. Once done, press **Review Draft & Deploy** and then **Deploy**.
### Create a trigger
Navigate to **Triggers** tab and create a new trigger:
**Trigger Code**
Choose **Function type** and in the dropdown, click **New Function.** Add a name like setEmbeddings under **Function Name**.
Copy and paste the following code.
```javascript
// Header: MongoDB Atlas Function to Process Document Changes
// Inputs: MongoDB changeEvent object
// Outputs: Updates the MongoDB document with processing status and AWS model response
exports = async function(changeEvent) {
// Connect to MongoDB service
var serviceName = "mongodb-atlas";
var dbName = changeEvent.ns.db;
var collName = changeEvent.ns.coll;
try {
var collection = context.services.get(serviceName).db(dbName).collection(collName);
// Set document status to 'pending'
await collection.updateOne({'_id' : changeEvent.fullDocument._id}, {$set : {processing : 'pending'}});
// AWS SDK setup for invoking models
const { BedrockRuntimeClient, InvokeModelCommand } = require("@aws-sdk/client-bedrock-runtime");
const client = new BedrockRuntimeClient({
region: 'us-east-1',
credentials: {
accessKeyId: context.values.get('AWS_ACCESS_KEY'),
secretAccessKey: context.values.get('AWS_SECRET_KEY')
},
model: "amazon.titan-embed-text-v1",
});
// Prepare embedding input from the change event
let embedInput = {}
if (changeEvent.fullDocument.title) {
embedInput['inputText'] = changeEvent.fullDocument.title
}
if (changeEvent.fullDocument.imgUrl) {
const imageResponse = await context.http.get({ url: changeEvent.fullDocument.imgUrl });
const imageBase64 = imageResponse.body.toBase64();
embedInput['inputImage'] = imageBase64
}
// AWS SDK call to process the embedding
const input = {
"modelId": "amazon.titan-embed-image-v1",
"contentType": "application/json",
"accept": "*/*",
"body": JSON.stringify(embedInput)
};
console.log(`before model invoke ${JSON.stringify(input)}`);
const command = new InvokeModelCommand(input);
const response = await client.send(command);
// Parse and update the document with the response
const doc = JSON.parse(Buffer.from(response.body));
doc.processing = 'completed';
await collection.updateOne({'_id' : changeEvent.fullDocument._id}, {$set : doc});
} catch(err) {
// Handle any errors in the process
console.error(err)
}
};
```
Click **Save** and **Review Draft & Deploy**.
Now, we need to set the function setEmbeddings as a SYSTEM function. Click on the Functions tab and then click on the **setEmbeddings** function, **Settings** tab. Change the Authentication to **System** and click **Save**.
System setting on a function
A successfully running trigger will produce a collection in our Atlas cluster. You can navigate to **Data Services > Database** and click the **Browse Collections** button on the cluster view. The database name is `bedrock` and the collection is `products`.
> Please note that the trigger run will only happen when we insert data into the `bedrock.products` collection and might take a while the first time. Therefore, you can watch the Logs section on the App Services side.
## Create an Atlas Vector Search index
Let’s move back to the **Data Services** and **Database** tabs.
**Atlas Search index creation**
1. First, navigate to your cluster’s "Atlas Search" section and press the **Create Index** button.
2. Click **Create Search Index**.
3. Choose the Atlas Vector Search index type and click **Next**.
4. Select the "bedrock" database and "products" collection.
5. Paste the following index definition:
```json
{
  "fields": [
    {
      "type": "vector",
      "path": "embedding",
      "numDimensions": 1024,
      "similarity": "dotProduct"
    }
  ]
}
```
6. Click **Create** and wait for the index to be created.
7. The index will go through a build phase and will eventually appear as "Active".
Now, you are ready to write `$vectorSearch` aggregations against this index.
The `getSearch` HTTP endpoint implemented later in this tutorial already includes such a search query.
```javascript
const items = await collection.aggregate([
{
"$vectorSearch": {
"queryVector": doc.embedding,
"index": "vector_index",
"path": "embedding",
"numCandidates": 15,
"limit": 1
}
},
{"$project": {"embedding": 0}}
]).toArray();
```
With this code, we perform a vector search using whatever embedding is stored in the `doc.embedding` variable against the indexed `embedding` field. We consider 15 candidates and limit the result set to the single best match.
## Set up the backend logic
Our main functionality will rely on a user HTTP endpoint which will orchestrate the logic of the catalog search. The input from the user will be turned into a multimodal embedding via AWS Titan and will be passed to Atlas Vector Search to find the relevant document. The document will be returned to the user along with a prompt that will engineer a response from a Cohere LLM.
> Cohere LLM `cohere.command-light-text-v14` is part of the AWS Bedrock base model suite.
### Create application search HTTPS endpoint
1. On the App Services application, navigate to the **HTTPS Endpoints** section.
2. Create a new POST endpoint by clicking **Add An Endpoint** with a path of **/getSearch**.
3. Important! Toggle the **Response With Result** to On.
4. The logic of this endpoint will get a "term" from the query string and search for that term. If no term is provided, it will return the first 15 results.
getSearch endpoint
5. Under **Function**, choose **New Function** (name: getProducts) and add the following function logic:
```javascript
// Function Name : getProducts
exports = async function({ body }, response) {
// Import required SDKs and initialize AWS BedrockRuntimeClient
const { BedrockRuntimeClient, InvokeModelCommand } = require("@aws-sdk/client-bedrock-runtime");
const client = new BedrockRuntimeClient({
region: 'us-east-1',
credentials: {
accessKeyId: context.values.get('AWS_ACCESS_KEY'),
secretAccessKey: context.values.get('AWS_SECRET_KEY')
}
});
// MongoDB and AWS SDK service setup
const serviceName = "mongodb-atlas";
const dbName = "bedrock";
const collName = "products";
const collection = context.services.get(serviceName).db(dbName).collection(collName);
// Function to run AWS model command
async function runModel(command, body) {
command.body = JSON.stringify(body);
console.log(`before running ${command.modelId} and prompt ${body.prompt}`)
const listCmd = new InvokeModelCommand(command);
console.log(`after running ${command.modelId} and prompt ${body.prompt}`)
const listResponse = await client.send(listCmd);
console.log('model body ret', JSON.stringify(JSON.parse(Buffer.from(listResponse.body))))
console.log('before return from runModel')
return JSON.parse(Buffer.from(listResponse.body));
}
// Function to generate list query for text input
function generateListQuery(text) {
const listDescPrompt = `Please build a json only output start with: {productList : [{"product" : "" , "quantity" : }]} stop output after json fully generated.
The list for ${text}. Complete {productList : `;
return {
"prompt": listDescPrompt,
"temperature": 0
};
}
// Function to process list items
async function processListItems(productList, embedCmd) {
let retDocuments = [];
for (const product of productList) {
console.log('product', JSON.stringify(product))
const embedBody = { 'inputText': product.product };
const resEmbedding = await runModel(embedCmd, embedBody);
const items = await collection.aggregate([
vectorSearchQuery(resEmbedding.embedding), {"$project" : {"embedding" : 0}}
]).toArray();
retDocuments.push(items[0]);
}
return retDocuments;
}
// Function to process a single item
async function processSingleItem(doc) {
const items = await collection.aggregate([
vectorSearchQuery(doc.embedding), {"$project" : {"embedding" : 0}}]).toArray();
return items;
}
// Function to create vector search query
function vectorSearchQuery(embedding) {
return {
"$vectorSearch": {
"queryVector": embedding,
"index": "vector_index",
"path": "embedding",
"numCandidates": 15,
"limit": 1
}
};
}
// Parsing input data
const { image, text } = JSON.parse(body.text());
try {
let embedCmd = {
"modelId": "amazon.titan-embed-image-v1",
"contentType": "application/json",
"accept": "*/*"
};
// Process text input
if (text) {
const genList = generateListQuery(text);
const listResult = await runModel({ "modelId": "cohere.command-light-text-v14", "contentType": "application/json",
"accept": "*/*" }, genList);
const list = JSON.parse(listResult.generations[0].text);
console.log('list', JSON.stringify(list));
let retDocuments = await processListItems(list.productList, embedCmd);
console.log('retDocuments', JSON.stringify(retDocuments));
let prompt, success = true;
prompt = `In one simple sentence explain how the retrieved docs: ${JSON.stringify(retDocuments)}
and mention the searched ingredients from list: ${JSON.stringify(list.productList)} `;
// Generate text based on the prompt
const genQuery = {
"prompt": prompt,
"temperature": 0
};
const textGenInput = {
"modelId": "cohere.command-light-text-v14",
"contentType": "application/json",
"accept": "*/*"
};
const assistantResponse = await runModel(textGenInput, genQuery);
console.log('assistant', JSON.stringify(assistantResponse));
retDocuments[0].assistant = assistantResponse.generations[0].text;
return retDocuments;
}
// Process image or other inputs
if (image) {
const doc = await runModel(embedCmd, { inputImage: image });
return await processSingleItem(doc);
}
} catch (err) {
console.error("Error: ", err);
throw err;
}
};
```
Click **Save Draft** and follow the **Review Draft & Deploy** process.
Make sure to note the HTTP callback URL, as we will use it in the final section when consuming the data from the frontend application.
> TIP:
>
> The URL will usually look something like: `https://us-east-1.aws.data.mongodb-api.com/app//endpoint/getSearch`
Make sure that the function created (e.g., getProducts) is on "SYSTEM" privilege for this demo.
This page can be accessed by going to the **Functions** tab and looking at the **Settings** tab of the relevant function.
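Once the endpoint is deployed (and after you've imported the sample data in the next section), you can smoke-test it outside the front end with a few lines of Node.js. The URL below is a placeholder for your own App Services endpoint, and the body shape matches what `getProducts` parses: `{ text }` or `{ image }`.

```javascript
// Quick smoke test for the getSearch endpoint (Node.js 18+ for built-in fetch).
// Replace ENDPOINT with your own App Services HTTPS endpoint URL.
const ENDPOINT =
  "https://us-east-1.aws.data.mongodb-api.com/app/<your-app-id>/endpoint/getSearch";

(async () => {
  const res = await fetch(ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text: "Chocolate Chip Cookie Recipe" })
  });
  console.log(JSON.stringify(await res.json(), null, 2));
})();
```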
### Import data into Atlas
Now, we will import the data into Atlas from our GitHub repo.
1. On the **Data Services** main tab, click your cluster name.
Click the **Collections** tab.
2. We will start by going into the "bedrock" database and importing the "products" collection.
3. Click **Insert Document** or **Add My Own Data** (if present) and switch to the document view. Paste the content of the "products.json" file from the "data" folder in the repository.
4. Click Insert and wait for the data to be imported.
### Create an endpoint to save and retrieve chats
1\. `/getChats` - will retrieve a user's chat history from the database
Endpoint
- Name: getChats
- Path: /getChats
- Method: GET
- Response with Result: Yes
``` javascript
// This function is the endpoint's request handler.
exports = async function({ query, headers, body}, response) {
// Data can be extracted from the request as follows:
const {player } = query;
// Querying a mongodb service:
const doc = await context.services.get("mongodb-atlas").db("bedrock").collection("players").findOne({"player" : player}, {messages : 1})
return doc;
};
```
2\. `/saveChats` — will save a chat to the database
Endpoint
- Name: saveChats
- Path: /saveChats
- Method: POST
- Response with Result: Yes
```javascript
// This function is the endpoint's request handler.
exports = async function({ query, headers, body}, response) {
// Data can be extracted from the request as follows:
    // Headers, e.g. {"Content-Type": ["application/json"]}
const contentTypes = headers["Content-Type"];
const {player , messages } = JSON.parse(body.text());
// Querying a mongodb service:
const doc = await context.services.get("mongodb-atlas").db("bedrock").collection("players").findOneAndUpdate({player : player}, {$set : {messages : messages}}, {returnNewDocument : true});
return doc;
};
```
Make sure that all the functions created (e.g., getChats, saveChats) are set to "SYSTEM" privilege for this demo.
System setting on a function
This page can be accessed by going to the Functions tab and looking at the Settings tab of the relevant function.
Finally, click **Save Draft** and follow the **Review Draft & Deploy** process.
## GitHub Codespaces frontend setup
It’s time to test our back end and data services. We will use the created search HTTPS endpoint to show a simple search page on our data.
You will need to get the HTTPS Endpoint URL we created as part of the App Services setup.
### Play with the front end
We will use the github repo to launch codespaces from:
1. Open the repo in GitHub.
2. Click the green **Code** button.
3. Click the **Codespaces** tab and **+** to create a new codespace.
### Configure the front end
1. Create a file called .env in the root of the project.
2. Add the following to the file:
```
VUE_APP_BASE_APP_SERVICE_URL=''
VUE_APP_SEARCH_ENDPOINT='getSearch'
VUE_APP_SAVE_CHATS_ENDPOINT='saveChats'
VUE_APP_GET_CHATS_ENDPOINT='getChats'
## Small chart to present possible products
VUE_APP_SIDE_IFRAME='https://charts.mongodb.com/charts-fsidemo-ubsdv/embed/charts?id=65a67383-010f-4c3d-81b7-7cf19ca7000b&maxDataAge=3600&theme=light&autoRefresh=true'
```
### Install the front end
```
npm install
```
Install serve.
```
npm install -g serve
```
### Build the front end
```
npm run build
```
### Run the front end
```
serve -s dist/
```
### Test the front end
Open the browser to the URL provided by serve in a popup.
## Summary
In summary, this tutorial has equipped you with the technical know-how to leverage MongoDB Atlas Vector Search and AWS Bedrock for building a cutting-edge database assistant for product catalogs. We've delved deep into creating a robust application capable of handling a variety of search inputs, from simple text queries to more complex image and recipe-based searches. As developers and product managers, the skills and techniques explored here are crucial for innovating and improving database search functionalities.
The combination of MongoDB Atlas and AWS Bedrock offers a powerful toolkit for efficiently navigating and managing complex product data. By integrating these technologies into your projects, you’re set to significantly enhance the user experience and streamline the data retrieval process, making every search query more intelligent and results more relevant. Embrace this technology fusion to push the boundaries of what’s possible in database search and management.
If you want to explore more about MongoDB and AI please refer to our main landing page.
Additionally, if you wish to communicate with our community, please visit https://community.mongodb.com.
| md | {
"tags": [
"Atlas",
"JavaScript",
"AI",
"AWS",
"Serverless"
],
"pageDescription": "Explore our comprehensive tutorial on MongoDB Atlas Vector Search and AWS Bedrock modules for creating a dynamic database assistant for product catalogs. This guide covers building an application for seamless product searching using various inputs such as single products, lists, images, and recipes. Learn to easily find ingredients for a recipe or the best organic almonds with a single search. Ideal for developers and product managers, this tutorial provides practical skills for navigating complex product databases with ease.",
"contentType": "Tutorial"
} | MongoDB Atlas Vector Search and AWS Bedrock modules RAG tutorial | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/realm/realm-dotnet-for-xamarin-best-practices-meetup | created | # Realm .NET for Xamarin (Best Practices and Roadmap) Meetup
Didn't get a chance to attend the Realm .NET for Xamarin (best practices and roadmap) Meetup? Don't worry, we recorded the session and you can now watch it at your leisure to get you caught up.
>Realm .NET for Xamarin (best practices and roadmap)
>:youtube[]{vid=llW7MWlrZUA}
In this meet-up, Nikola Irinchev, the engineering lead for Realm's .NET team, and Ferdinando Papale, .NET engineer on the Realm team, will walk us through the .NET ecosystem as it relates to mobile with the Xamarin framework. We will discuss things to consider when using Xamarin, best practices to implement and gotchas to avoid, and what's next for the .NET team at Realm.
In this meetup, Nikola & Ferdinando spend about 45 minutes on
- Xamarin Overview & Benefits
- Xamarin Key Concepts and Architecture
- Realm Integration with Xamarin
- Realm Best Practices / Tips&Tricks with Xamarin
And then we have about 20 minutes of live Q&A with our Community. For those of you who prefer to read, below we have a full transcript of the meetup too. As this is verbatim, please excuse any typos or punctuation errors!
Throughout 2021, our Realm Global User Group will be planning many more online events to help developers experience how Realm makes data stunningly easy to work with. So you don't miss out in the future, join our Realm Global Community and you can keep updated with everything we have going on with events, hackathons, office hours, and (virtual) meetups. Stay tuned to find out more in the coming weeks and months.
To learn more, ask questions, leave feedback, or simply connect with other Realm developers, visit our community forums. Come to learn. Stay to connect.
## Transcript
**Shane McAllister**: Welcome. It's good to have you all here. Sorry for the couple of minutes wait, we could see people entering. We just wanted to make sure everybody had enough time to get on board. So very welcome to what is our meetup today. We are looking forward to a great session and we're really delighted that you could join us. This is a new initiative that we have in MongoDB and Realm, and so far is we're trying to cater for all of the interested people who want to learn more about what we're building and how we're going about this.
**Shane McAllister**: Essentially, I think this is our third this year and we have another three scheduled as well too, you'll see those at the end of the presentation. And really it's all about bringing together Realm developers and builders and trying to have an avenue whereby you're going to get an opportunity, as you'll see in a moment when I do the introductions, to talk to the people who built the SDKs that you're using. So we very much look forward to that.
**Shane McAllister**: A couple of housekeeping things before I do the introductions. It is being recorded, we hope everybody here is happy with that. It's being recorded for those that can't attend, timezone might be work for them. And we will be putting it up. You will get a link to the recording probably within a day or two of the meetup finishing. It will go up on YouTube and we'll also share it in our developer hub.
**Shane McAllister**: We will have an opportunity for Q&A at the end of the presentation as well too. But for those of you not familiar with this platform that we're on at the moment, it's very straightforward like any other video platform you might be on. We have the ability to chat up there. Everybody's been put in there, where they're from, and it's great to see so many people from around the world. I myself am in Limerick, in the west coast of Ireland. And Ferdinando and Nikola, who are presenting shortly, are in Copenhagen. So we'll go through that as well too.
**Shane McAllister**: But as I said, we'll be doing Q&A, but if you want to, during the presentation, to put any questions into the chat, by all means, feel free to do so. I'll be manning that chat, I'll be looking through that. If I can answer you there and then I will. But what we've done in other meetups is we've opened out the mic and the cameras to all of our attendees at the end, for those that have asked questions in the chat. So we give you an opportunity to ask your own questions. There is no problem whatsoever if you're too shy to come on with an open mic and an open camera, I'll quite happily ask the question for you to both Ferdinando and Nikola.
**Shane McAllister**: This is a meetup. Albeit that we're all stuck on the screen, we want to try and recreate a meetup. So I'm quite happy to open out your cameras and microphones for the questions at the end. The house rules are, would be, just be respectful of other people's time, and if you can get your question asked, then you can either turn off your camera or turn off your mic and you'll leave the platform, again, but still be part of the chat.
**Shane McAllister**: So it's a kind of an interactive session towards the end. The presentation will be, hopefully, Nikola and Ferdinando, fingers crossed, 40 to 45 minutes or so, and then some Q&A. And what I'll be doing in the chat as well too, is I'll put a link during the presentation to a Google form for some Swag. We really do appreciate you attending and listening and we want you to share your thoughts with us on Realm and what you think, and in appreciation of your time we have some Swag goodies to share with you. The only thing that I would say with regard to that is that given COVID and postal and all of that, it's not going to be there very quick, you need to be a bit patient. A couple of weeks, maybe more, depending on where in the world that you are.
**Shane McAllister**: So look, really delighted with what we have scheduled here for you shortly here now. So joining me today, I'm only the host, but the guys with the real brains behind this are Nikola and Ferdinando from the .NET team in Realm. And I really hope that you enjoy what we're going through today. I'll let both of you do your own introductions. Where you are, your background, how long you've been with Realm, et cetera. So Nikola, why don't we start with yourself?
**Nikola Irinchev**: Sure. I'm Nikola. I'm hailing from sunny Denmark today, and usually for this time of the year. I've been with Realm for almost five years now, ever since before the MongoDB acquisition. Start a bit the dominant theme move to various different projects and I'm back to my favorite thing, which is the .NET one. I'm super excited to have all of you here today, and now I'm looking forward to the questions you ask us.
**Ferdinando Papale**: Hello. Do you hear me?
**Shane McAllister**: Yes.
**Ferdinando Papale**: Okay. And I'm Ferdinando. And I've joined Realm only in October so I'm pretty new. I'm in the same team as Nikola. And before working at Realm I was a Xamarin developer. Yes. Shane, you're muted.
**Shane McAllister**: Apologies. I'm talking head. Anyway, I'm very much looking forward to this. My background is iOS and Swift, so this is all relatively new as well to me. And I forgot to introduce myself properly at the beginning. I look after developer advocacy for Realm. So we have a team of developer advocates who in normal circumstances would be speaking at events and conferences. Today we're doing that but online and meetups such as this, but we also create a ton of content that we upload to our dev hub on developer.mongodb.com.
**Shane McAllister**: We're also active on our forums there. And anywhere else, social in particular, I look after the @Realm Twitter a lot of the time as well too. So please if you're enjoying this meetup please do shout outs on @Realm at Twitter, we want to gather some more followers, et cetera, as well too. But without further ado, I will turn it over to Nikola and Ferdinando, and you can share screen and take it away.
**Ferdinando Papale**: Yes. I will be the one starting the presentation. We already said who we were and now first let's take a look at the agenda. So this presentation will be made up of two parts. In the first part we'll talk about Xamarin. First some overview and benefits, and then some key concepts in architecture. And then in the second part, we're going to be more talk about Realm. How it integrates with Xamarin and then some tips, and then some final thoughts.
**Ferdinando Papale**: Then let's go straight to the Xamarin part. That will be the first part of our presentation. First of all, if. Xamarin is an open source tool to build cross platform applications using C-sharp and .NET, and at the moment is developed by Microsoft. You can develop application with Xamarin for a lot of different platforms, but the main platforms are probably iOS and Android.
**Ferdinando Papale**: You can actually also develop for MacOS, for Tizen, UWP, but probably iOS, Android are still the main targets of Xamarin. Why should you choose to develop your application using Xamarin? If we go to the next slide. Okay, yes. Probably the most important point of this is the code reuse. According to Microsoft, you can have up to 90% of the code shared between the platforms. This value actually really depends on the way that you structure of your application, how you decide to structure it, and if you decide, for example, to use Xamarin.Forms or not, but we'll discuss about it later.
**Ferdinando Papale**: Another important point is that you are going to use C-sharp and .NET. So there is one language and one ecosystem. This means that you don't need to learn how to use Swift on iOS, you don't need to learn how to use Kotlin on Android, so it's a little bit more convenient, let's say.
**Ferdinando Papale**: And then the final thing that needs to be known is the fact that in the end, the application that you develop with Xamarin feels native. I mean, a final user will not see any difference with a native app. Because whatever you can obtain natively you can also obtain with Xamarin from the UI point of view.
**Ferdinando Papale**: Now, to talk a little bit more about the architecture of Xamarin. If you go to the next slide. Yes. In general, Xamarin works differently depending on the platform that we are targeting. But for both Android and iOS, the main targets, it works with Mono. Mono is another implementation of .NET that is cross-platform.
**Ferdinando Papale**: And it's a little bit different the way that it works on Android and iOS. So on Android, the C-sharp code gets compiled to an intermediate language. And then when the application runs, this gets compiled with the just-in-time compiler. So this means that if you try to open the package developed with Xamarin, you will see that it's not the same as a completely native application.
**Ferdinando Papale**: Instead, with iOS, it's not possible to have just-in-time compilation, and we have ahead-of-time compilation. This means that the C-sharp code gets directly compiled to assembly. And this was just to give a very brief introduction to the architecture.
**Ferdinando Papale**: Now, if we want to talk more specifically about how to structure a Xamarin application, there are essentially two ways to use Xamarin. On the left we have the, let's say, traditional way, also called Xamarin Native. In this case we have one project that contains the shared app logic, and this one will be common to all the platforms. And then on top of that, we have one project for each platform that we are targeting, in this case, Android and iOS. These projects contain the platform-specific code, but from the practical point of view, this is mostly UI code.
**Ferdinando Papale**: Then we have Xamarin.Forms. Xamarin.Forms is essentially a UI framework. If you use Xamarin.Forms, we still have this project with the shared app logic, but we also have another project with the shared UI. We still have the platform-specific projects, but these contain almost nothing, and they are the entry point of the application.
**Ferdinando Papale**: What happens in this case is that Xamarin.Forms has its own UI paradigm that is different from Android and iOS. The controls that you use with Xamarin.Forms are the ones that get transformed into native controls on all the platforms that are supported. Obviously, because this needs to support multiple platforms, you don't have a one-to-one correspondence between UI controls.
**Ferdinando Papale**: Because with Xamarin.Forms, practically, you have this additional shared layer, using Xamarin.Forms is the approach that allows you to have the most shared code of the two possibilities. And now we can talk a little bit more about some key concepts in Forms. First of all, data binding and XAML.
**Ferdinando Papale**: In Xamarin.Forms there are essentially two ways that you can define your UI. First, programmatically. So you define your UI in a C-sharp file. Or you can define your application in a XAML file. And XAML is just a language that is defined on top of XML. And the important thing is that it's human readable. On the left here you have an example of such a XAML file. And on the bottom you can see how it looks on an iOS and Android device.
**Ferdinando Papale**: This application practically just contains an almost empty screen with a clock in the middle. If you look at the XAML file you will see it has a content page, which is just the Xamarin.Forms name for a screen. And then inside of that it contains a label that is centered horizontally and vertically. But that's not very important. Yes.
**Ferdinando Papale**: And then the important thing here to notice is the fact that this label has a text that is not static, but is actually defined with bindings. You can see the binding to the time property written in the XAML file. What this means here is that if the bindings are done properly, whenever the time variable changes in our code, it will also update the UI. In this simple application, it means that we have a functioning clock.
**Ferdinando Papale**: The way that this is implemented, actually, you can see it on the right, we have an example of a ViewModel. In order for the ViewModel to notify the UI of these changes, it needs to implement INotifyPropertyChanged, which is an interface that contains just one event, the PropertyChanged event that you see almost at the top.
**Ferdinando Papale**: Practically, you can see how it works with the Time property that is at the bottom. Every time we set the Time property, we need to raise PropertyChanged. And we need to pass also the name of the property that we're changing. Practically, let's say behind the curtains, what happens is that the view subscribes to this PropertyChanged event and then gets notified when certain properties change, and so the UI gets updated accordingly.
**Ferdinando Papale**: As you can see, it's not exactly straightforward to use data binding, because you will need to do this for every property that needs to be bound in the UI. And the other thing to know is that this is just one simple way to use it; data binding can be very complicated. It can be two-way, one-way in one direction, and so on.
**Ferdinando Papale**: But data binding actually is extremely important, especially in the context of MVVM. MVVM is essentially the architectural pattern that Xamarin suggests for Xamarin.Forms applications. This is actually the interpretation that Microsoft has, obviously, of MVVM, because this really depends on who you ask, everybody has their own views on this.
**Ferdinando Papale**: In MVVM, essentially, the application is divided into three main blocks, the model, the view, and the ViewModel. The model contains the app data and the business logic, the view represents what is shown on the screen, so the UI of the application, and preferably should be in XAML, because it simplifies the things quite a lot. And then finally we have the ViewModel, that essentially is the glue between both the view and the model.
**Ferdinando Papale**: The important thing to know here, as you see on the graph on the left, is that the view communicates with the ViewModel through data binding and commands, so the view knows about the ViewModel. Instead, the ViewModel actually doesn't know about the view. And the communication happens indirectly through notifications. Practically, the view subscribes to the PropertyChanged event on the ViewModel, and then gets notified when something has changed, and so the UI gets updated eventually.
**Ferdinando Papale**: This is really important. Because the ViewModel is independent from the view, this means that we can just swap the view for another one, we can change it without having to modify the ViewModel at all. And also this independence makes the code much more testable; that's why the data binding is so important.
**Ferdinando Papale**: Then there is another thing that is really important in Xamarin.Forms, and that is custom renderers. As I said before, because Xamarin.Forms essentially needs to target multiple platforms, sometimes the translation between the Forms UI and the native UI is not what you expect or maybe what you want. And in this case, the way that you can go around it is to use custom renderers. Really, with custom renderers, you have the same control that you would have natively.
**Ferdinando Papale**: What is on the screen is an example of how to create a custom renderer, practically. So on the left, we can see that first of all we need to create a custom class, in this case MyEntry. And it needs to derive from one of the Forms classes, in this case an entry, which is just a text view on the screen where the user can write some stuff.
**Ferdinando Papale**: Obviously you need also to add this custom view to your XAML page. And then you need to go into the platform-specific projects, so iOS and Android, and define the renderer. The renderer needs to derive from a certain class in Forms. And you also need to add the ExportRenderer attribute. This attribute practically says which class this renderer should be linked to.
**Ferdinando Papale**: Once you use the renderer, obviously, you have full control over how the UI should look. One thing to know is that what you have on this screen is actually a little bit of a simplified example, because in reality it's a little bit more complicated than this. And also, one needs to understand that it's true that it's possible to define as many custom renderers as needed, but the more custom renderers are created, probably the less code reuse you have, because you need to create these custom renderers in each of the platform-specific projects. So you have less and less shared code, and you should start asking yourself if Xamarin.Forms is exactly what you want. And also, they are not exactly the easiest thing to use, in my opinion.
**Ferdinando Papale**: Finally, why should you decide to use Xamarin.Forms or Xamarin Native. Essentially, there are a couple of things to consider. If the development time and the budgets are limited, Xamarin.Forms is a better option, because in this case you will need to create the UI just once and then it will run on both platforms, you don't need to do this development twice.
**Ferdinando Papale**: Still, unfortunately, if your UI or UX needs to be polished, needs to be pixel perfect, you want to have exactly a specific UI, then probably you will need to use Xamarin Native. And this is because, as I've said before, if you want to have something that looks exactly as you want, you will probably need to use a lot of custom renderers. And more custom renderers means that Xamarin.Forms starts to be less and less advantageous, let's say.
**Ferdinando Papale**: Another thing to consider is what kind of people you have in your team. If you have people in your team that only have C-sharp and .NET experience, then Xamarin.Forms can be an advantage. Even if you use Xamarin Native you will still use C-sharp and .NET, but you will also need to have some native experience: you will need to know what the lifecycle of an iOS application or an Android application is, how the UI is built in both cases, and so on. So in this case, probably Xamarin.Forms will be a better option.
**Ferdinando Papale**: And the final thing to consider is that generally Xamarin.Forms applications are bigger than Xamarin Native applications. So if this is a problem, then probably native is the way to go. And I think that this is the end of my half of the presentation and now probably Nikola should continue with the rest.
**Nikola Irinchev**: Sure. That's great. That hopefully gives people some idea of which route to take for their next project. Whichever route they take, whether they use Xamarin Native or Forms, they can use Realm. Let's talk about how it fits into all that.
**Nikola Irinchev**: The first thing to understand about Realm is that it's an open source, standalone object database. It's not an ORM or an interface for accessing MongoDB. All the data lives locally on the device and is available regardless of whether the user has internet connectivity or not. Realm has also been meticulously optimized to work on devices with heavily constrained resources.
**Nikola Irinchev**: Historically, these have been mobile devices, but recently we're seeing more and more IoT use cases. To achieve an extremely low memory footprint, Realm adopts a technique that is known as zero copy. When you fetch an object from Realm, you don't read the entire thing into memory; instead, you get some cleverly-organized metadata that tells us at which memory offsets the various properties are located.
**Nikola Irinchev**: Only when you access a property do we go to the database and read the information stored there. This means that you don't need to do any select X, Y, Z's, and it also allows you to use the exact same object in your master view, where you only need one or two properties, as in the detail view, where you need to display the information about the entire entity.
**Nikola Irinchev**: Similarly, collections are lazily loaded and data is never copied into memory. A collection of a million items is, again, a super lightweight wrapper around some metadata. And accessing an element just calculates the exact memory offset where the element is located and returns the data there. This, again, means you can get a collection of millions of items in fractions of a second, then drop it in a ListView with data binding, and as the user scrolls on the screen, new elements will be loaded on demand and old ones will be garbage collected. Meaning you never have to do pagination limits or add load more buttons.
**Nikola Irinchev**: To contribute to that seamless experience, the way you define models in Realm is nearly identical to the way you define your in-memory objects. You give it a name, you add some properties, and that's it. The only thing that you need to do to make it compatible with Realm is to inherit from RealmObject.
**Nikola Irinchev**: When you compile your project, Realm will use code weaving. It will replace the appropriate getters and setters with custom code that will read and write to the database directly. And we do support most built-in primitive types. You can use strings, various sizes of integers, floats, doubles, and so on.
**Nikola Irinchev**: You can of course define links to other objects, as well as collection of items. For example, if you have a tweet model, you might want to have a list of strings that contain all the tags for the tweets, or you have a person model, you might want to have a list of dogs that are owned by that person.
**Nikola Irinchev**: The final piece of core Realm functionality that I want to touch on is one that is directly related to what Ferdinando was talking about with Xamarin.Forms and data binding. That thing that I mentioned about properties that hook up directly to the database, apart from being super efficient in performance, it has the nice side effect that we're always working with up to date data.
**Nikola Irinchev**: So if you have a background thread that updates a person's age, the next time you access the person's age property on the main thread, you're going to get the new value. That in and of itself is cool, but it would be kind of useless if we didn't have a way to be notified when such a change has occurred. Luckily, we do, as all Realm objects implement INotifyPropertyChanged, and all Realm collections implement INotifyCollectionChanged.
**Nikola Irinchev**: These are the interfaces that are the foundation of any data binding engine, and are of course supported and respected by Xamarin.Forms, WPF, and so on. This means that you can data bind to your database models directly, and they will update the UI whenever a property changes, regardless of where the change originated from. And for people who want to have an extra level of control, or those working with Xamarin Native, we do have a callback that you can subscribe to, which gives you more detailed information than what the system interfaces expose.
**Nikola Irinchev**: To see all these concepts in action, I've prepared a super simple app that lists some people and their dogs. Let me show that to you. All right. Let's start with the model definition. I have my person class. It has name, birthday and favorite dog. And it looks like any POCO out there. The only difference again being that it inherits from RealmObject, which is a hint for the code weaver that we use to replace the getters and setters with some clever code that hooks into the native Realm API.
**Nikola Irinchev**: All right. Then let's take a look at lazy loading. I cheated a little bit, and I already populated my Realm — I inserted a million people with their dogs and their names and so on. And I added a button in my view, which is called Load, and it invokes the load million items command. What it does is it starts a stopwatch, gets all items from Realm, and alerts how many they are and how much time it took.
**Nikola Irinchev**: If I go back to my simulator, if I click load, we can see that we loaded a million elements in zero milliseconds. Again, this is cheating, we're not really loading them all, we are creating a collection that has the necessary metadata to know where the items are. But for all intents and purposes, for you as a developer, they are there. If I set a breakpoint here and load the items again, I can just drop into the evaluator and pick any element of the collection, and it's there — the properties, their dog, all of that. You can access any element as if you were accessing any in-memory structure.
**Nikola Irinchev**: All right. That's cool. Let's display these million people. In my main page, I have a ListView — think a UITableViewController, or just a collection of cells. And in my cell I have a text field which binds to the person's name, and I have a detail field which binds to FavoriteDog.Name. And the entire ListView is bound to the people collection.
**Nikola Irinchev**: In my main view model, people collection is just empty, but we can populate it with the data from Realm. I'm just passing all people there, which, as we saw, are on \[inaudible 00:29:55\]. And I'm going mute. What's going to happen now is Realm will feed this collection, and the data binding engine will start reading data from the collection to populate its UI. I can go back to my simulator. We can see that all the people are loaded in the ListView. And as I scroll the ListView, we can see that new people are being displayed.
**Nikola Irinchev**: Judging by the fact that my scroller doesn't move far, we can guess that there are indeed a million people in there. And again, of course, we don't have a million items in memory, that would be ridiculous. The way Xamarin.Forms works is, it's only going to draw what's on screen, it's only going to ask Realm for the data that is being currently displayed. And as the user scrolls, old data is being garbage collected, new data is being picked up. So this allows you to have a very smooth user experience and a very simple developer experience, because you no longer have to think about pagination and figuring out what's the minimum set of properties that you need to load to drive that UI.
**Nikola Irinchev**: Finally, to build on top of that example, I added a simple timer. I have a model called Statistics which has a single property, which is an integer counting the total seconds the user has spent in the app. What I'm going to do is, in my app, when it starts, I'm going to run my updater code in the background. And what that does is, it waits one second — very imprecise, we don't care about precision here — then opens the Realm and increments the number of total seconds.
**Nikola Irinchev**: In my main page, I will data bind my title property to Statistics.TotalSeconds — to the total seconds property. I have a nice string format there to write the elapsed time. And in my ViewModel, I'll just populate my statistics instance with the first element from the statistics collection.
**Nikola Irinchev**: I know that there's one. Okay. So when I run the app, what is going to happen is, every second, my app will increment this value on a background thread. In my main ViewModel, the statistics instance, which points to the same object in the database, is going to be notified that there's a change to the total seconds property, and is going to proxy that to the UI. And if we go to the UI, we can see that every second, the title is getting updated. And that required absolutely no synchronization or UI code on my end, apart from the data binding logic.
**Nikola Irinchev**: Clearly, that is a super silly example, I don't expect any of you to ship that into production, but it's the exact same principle you can apply when fetching updates from your server or when doing some background processing in your app — converting images or generating documents. What you need to do is just store the results in Realm, and as long as you set up your data bindings properly, the UI will update itself regardless of where in the app the user is. All right. That was my little demo, and we can go back to the more boring part of the presentation and talk about some tips when starting out with Realm and Xamarin.
**Nikola Irinchev**: The main thing that trips people up when they start using Realm, is the threading model. Now that definitely deserves a talk of its own. And I'm not going to go into too much detail here, but I'll give you the TLDR of it, and you should just trust me on that. We can probably have some different talk about threading.
**Nikola Irinchev**: First off, on the main thread, it's perfectly fine and probably a good idea to keep a reference to the Realm in your ViewModel. You can either get a new instance with the Realm.GetInstance call, or you can just use some singleton. As long as it's only accessible on the main thread, that is perfectly fine. And regardless of which approach you choose, the performance will be very similar. We do have native caching of main thread instances, so you won't be generating a lot of garbage if you take the GetInstance approach.
**Nikola Irinchev**: And on the main thread, you don't have to worry about disposing those instances, it's perfectly fine to let them be garbage collected when your ViewModel gets garbage collected. That's just fine. On a background thread though, it's quite the opposite. There you always want to wrap your GetInstance calls in using statements.
**Nikola Irinchev**: The reason for that is, background threads will cause the file size to increase when data gets updated, even if we don't insert new objects. This space is eventually reclaimed when you dispose the instance or when the app restarts. But it's nevertheless problematic for devices with constrained resources.
**Nikola Irinchev**: Similarly, it is strongly encouraged to keep background instances short-lived. If you need to do some slow data pre-processing, do it before you open the Realm file and just write the results once you open it. Or if you need to read some data from Realm, do the processing, and then write the results: first, open the Realm, extract the data, putting it in memory, then close the Realm, start the slow job, then open the Realm again and write the results. As a rule of thumb, always wrap background Realms in using statements, and never have any awaits in the using block.
**Nikola Irinchev**: All right. Let's move to a topic that will inevitably be controversial. And that is avoid repository pattern. And only going to be a bit of a shock especially for people coming from Java or back end backgrounds. But the benefit to complexity ratio of abstracting Realm usage is pretty low.
**Nikola Irinchev**: The first argument is universal, it doesn't apply just to Realm but to mobile apps in general. You should really design your app for the database that you're going to use. Each database has strengths and weaknesses. And some things are easy with SQLite, others are easy with Realm. By abstracting away the database in a way that you can just swap it out with a different implementation, it means you're not taking advantage of any of the strong sides of the current database that you're using.
**Nikola Irinchev**: And when the average active development time for a mobile app is between six and eight months, you'll likely spend more time preparing for a database switch than you'd save in case you actually have to go through with it.
**Nikola Irinchev**: Speaking of strong sides, one of Realm's strong sides is that the data is live. Collections are lazily loaded. And abstracting Realm away in a generic repository pattern is going to be confusing for your consumers. You have two options. Returning live data is easy — return live collections, live objects. But in a general purpose repository, there'll be no way to communicate to the consumer that this data is live, so they might think that they need to re-fetch it, or be confused as to why there are no pagination APIs. And if you do decide to materialize the results into memory, you're foregoing one of the main benefits of using Realm and taking a massive performance hit.
**Nikola Irinchev**: Finally, having Realm behind the repository will inevitably complicate threading. As we've seen earlier, the recommendation is to use thread-confined instances on background threads. And if you have to go get repository, dispose repository all the time, you might as well use Realm directly.
**Nikola Irinchev**: None of that is to say that abstractions are bad and you should avoid using them at all costs. We've seen plenty of good abstractions built on top of Realm that work very well in the context of the apps that they were written for. But if you already have a SQLite-based app that uses the repository pattern and you think you can just swap out SQLite with Realm, you're probably going to have a bad time and not take full advantage of what Realm has to offer.
**Nikola Irinchev**: Finally, something that many people miss about Realm is that you totally can have more than one database at play in the same app. This can unlock many interesting use cases, and we've seen people get very creative with it. One benefit of using multiple Realms is that you have a clear separation of information in your app.
**Nikola Irinchev**: For example, in a music app, you might have a Realm that holds the app settings, a different one that holds the lyrics metadata, and a third one that holds the user playlists. We've seen similar setups in modular apps, where different teams work on different components of the app, and want to avoid having to always synchronize and align changes and migrations.
**Nikola Irinchev**: Speaking of migrations, keeping data in different Realms can eliminate the need to do some migrations altogether. For example, if you have a Realm instance that holds mostly cached data, and your server side models change significantly, it's probably cheaper to just use the new models and not deal with the cost of a migration. If that instance was also holding important user data, you wouldn't be able to do that, making it much more complicated to ship the new version.
**Nikola Irinchev**: And finally, it can allow you to offer improved security without data duplication. In a multi-user application, like our earlier music app, you may wish to have the lyrics metadata Realm be unencrypted and shared between all users, while their personal playlists or user information can be encrypted with their user-specific key and accessible only to them.
**Nikola Irinchev**: Obviously, you don't have to use multiple Realms. Most of the apps we've seen only use one. But it's something many folks just don't realize is an option, so I wanted to put it out there. And with that, I'm out of tips. I'm going to pass the virtual mic back to Ferdinando to give us a glimpse into the future of Xamarin.
**Ferdinando Papale**: Yes. I'm going to talk just a little bit about what is the future of Xamarin.Forms, and the future of Xamarin.Forms is called MAUI. That stands for Multi-platform App UI, that is essentially the newest evolution of Xamarin.Forms that will be included in .NET 6. .NET 6 is coming out at the end of the year, I think in November, if everything goes well.
**Ferdinando Papale**: Apart from containing all the new and shiny features, the interesting thing that they did with .NET 6 is that they are trying to unify a little bit the ecosystem, because .NET has always been a little bit all over the place with .NET Standard, .NET Core, .NET Framework, .NET this, .NET that. And now they're trying to put everything under the same name. So Xamarin, they will not be Xamarin iOS and Xamarin Android anymore, but just .NET iOS and .NET Android. Also Mono will be part of .NET and so on.
**Ferdinando Papale**: Another important thing to know is that with MAUI it will still be possible to develop applications for iOS and Android, but there is also a bigger focus on MacOS and Windows applications. So it will be much more complete.
**Ferdinando Papale**: Then they're also going to work a lot on the design, to improve the customization that can be done so that one needs to use far fewer custom renderers. But there is also the possibility of creating UI controls that, instead of feeling native on each platform, look almost the same on each platform, for bigger UI consistency, let's say.
**Ferdinando Papale**: And the final thing is the single project experience, let's say, that they are going to push. At the moment with Xamarin.Forms, if you want to have an application that targets five platforms, you need to have at least five projects plus the common one. What they want to do is eliminate these platform-specific projects and have only the shared one. This means that in this case, you will have all the platform-specific icons, and so on, in this single project. And this is something that they are really pushing on. And this was just a brief look into the future of Xamarin.Forms.
**Nikola Irinchev**: All right. Yeah, that's awesome. I for one I'm really looking forward to some healthy electronic competition which doesn't need to buy Realm for breakfasts. So hopefully it's our dog that seats in MAUI, we'll deliver that. I guess the future of Realm is Realm. We don't have Polynesian delegates in the pipeline, but we do have some pretty exciting plans for the rest.
**Nikola Irinchev**: Soon, in the spring, we'll be shipping some new datatypes we've been actively working on for the past couple of months. These are dictionaries, sets, and GUIDs. We're also adding a type that can hold any \[inaudible 00:44:37\].
**Nikola Irinchev**: At Realm we do like schemas, so definitely don't expect Realm to become MongoDB anytime soon. But there are legitimate use cases for apps that sometimes need to have heterogeneous data. For example, a person class may hold a reference to a cat or a dog or a fish in their pet property, or an address that is either just a string or an address structure. So we kind of want to give developers the flexibility to let them be in control of their own destiny.
**Nikola Irinchev**: Moving forward, in the summer, we're turning our attention to mobile gaming and Unity. This has been the most highly requested feature on GitHub, so we hope to see what the gaming community will do with Realm. And as Ferdinando mentioned, we are expecting a brand new .NET release in the fall. We fully intend to offer first-class MAUI support as soon as it lands.
**Nikola Irinchev**: And I didn't have any plans for the winter, but we're probably going to be opening Christmas presents making cocoa. With the current situation, it's very hard to make long term plans, so we'll take these goals. But we are pretty excited with what we have in the pipeline so far. And with that, I will pass the virtual mic back to Shane and see if we have any questions.
**Shane McAllister**: Excellent. That was brilliant. Thank you very much, Nikola and Ferdinando. I learned a lot, and lots to look forward to with MAUI and Unity as well too. And there has been some questions in the chat. And we thought Sergio was going to win it by asking all the questions. And we did get some other brave volunteers. So we're going to do our best to try and get through these.
**Shane McAllister**: Sergio, I know you said you wanted me to ask on your behalf, that's no problem, I'll go through some of those. And James and Parth and Nick, if you're happy to ask your own questions, just let me know, and I can enable your mic and your video to do that. But we'll jump back to Sergio's, and I hope \[inaudible 00:46:44\] for you now. I might not get to all of them, we might get to all of them, so we'll try and fly through them. I'm conscious of everybody's time. But we have 10, 15 minutes here for the questions.
**Shane McAllister**: So Nikola, Ferdinando, Sergio starts off with: I can generate a Realm-encrypted DB at the server and send this file to different platforms, Windows, iOS, Android, MacOS. The use case: I have a large database and would not like to use sync at app deploy, only using sync to update the database. Can he do that?
**Nikola Irinchev**: That is actually something that I'm just writing the specification for. It's not available right now but it's a very valid use case, we definitely want to support it. But it's a few months out in the future, but definitely something that we have in the works. One caveat there: the way encryption in Realm works is that it depends on the page size of the platform it's running on.
**Nikola Irinchev**: So it's possible that... For macOS and iOS it's the same, but I believe that there are differences between Windows and Android. So if you encrypt your database, it's not guaranteed that it's going to be possible to open it on all platforms. What you want to do is ship the database unencrypted in your app, then encrypt it with the page size of the specific platform that the app is running on.
**Shane McAllister**: Okay, that makes sense. It means Sergio's explained his use case was that they didn't want some user data to be stored at the server, but the user wanted to sync it between their devices, I suppose. And that was the reason that he was looking for this. And we'll move on.
**Shane McAllister**: Another follow up from Sergio there is, does the central database have to be hosted in the public cloud or can he choose another public cloud or on premise as well too?
**Nikola Irinchev**: Currently we don't have a platform on premise for sync. That is definitely something we're looking into but we don't have any timeline for when that might be available. In terms of where the central database is hosted, it's hosted in Atlas. That means that, because it's on Azure, AWS and Google Cloud, it's conforming to all the rules that Atlas has for where the database is stored.
**Nikola Irinchev**: I believe that Atlas has support for GovCloud, the government version of AWS. But it's really something that we can definitely follow up on if he gives us a more specific place where he wants to host it. But on premise is definitely not an option at the moment.
**Shane McAllister**: And indeed, Sergio, if you want more in-depth feedback, our forum is the place to go. Post the questions. I know our engineering team and our developer advocates are active on the forums there too. And you touched on this slightly by way of showing that if you have a Realm that you don't necessarily care about and you go and update the schema, you can dump that. Sergio asked if the database schema changes, how does the update process work in that regard?
**Nikola Irinchev**: That depends on whether the database... If a lot of the questions Sergio has are sync related, we didn't touch too much on sync because I didn't want to blow the presentation up. There's a slight difference to how the local database and how sync handle schema updates. The local database, when you do schema update, you write the migration like you would do with any other database. In the migration you have access to the old data, the new data, and you can, for example, populate new properties from all properties, split them or manipulate.
**Nikola Irinchev**: For example, if you're changing a string column to an integer, you parse the string values or write zeros there. With sync, there are more restrictions about what schema changes are allowed. You can only make additive schema changes, which means that you can only add properties and classes; you cannot change the type of a property.
**Nikola Irinchev**: This is precisely to preserve backwards compatibility and allow apps that are already out in the wild in the hands of users not to break in case the schema changes, because so you cannot ship back your code and handle the different schema there.
**Shane McAllister**: Super. Great. And I'll jump to Sergio's last question because I kind of know the answer to this, about full text search. Where are you with that?
**Nikola Irinchev**: Again, we are in the specification part. One thing to note is that the Realm database is fully open source. The core database and our SDKs are open source. And if he goes to the Realm Core repository, which is the core database, he can definitely see the pull request that has full text search. That's very much in a POC phase. We're nowhere near ready to ship that at production quality, but it's definitely something we're actively working on and I do hope to have interesting updates in the coming months.
**Shane McAllister**: Super. Thank you, Nikola. And James, you have three questions there. Would you want to ask them yourself or let me ask on your behalf? If you want to just type into the chat there, James, because with three you can follow up on them. I'll happily open the video and the mic. So he's happy to try. Fair play, brave individual. Let me just find you here now, James. You will come in as a host, so you will now have mic and video controls down the bottom of your screen. Please turn them on. We will see you hopefully, we'll hear you, and then you can ask your own questions.
**James**: I don't have a camera so you won't see me, you'll only hear me.
**Shane McAllister**: It's always dangerous doing this stuff live.
**James**: Can you hear me?
**Shane McAllister**: Is that working for you? Yeah, he's getting there.
**James**: Can you hear me?
**Shane McAllister**: We'll see.
**James**: No?
**Shane McAllister**: James, I see your mic is on, James. No. Hold on one second. We'll just set you again. Apologies everyone else for the... Turn that back on and turn this on. Should be okay. Look, James, we'll give you a moment to see if you appear on screen. And in the meantime, Parth had a couple of questions. How does Realm play with backward compatibility?
**Nikola Irinchev**: That's a difficult question. There are many facets of backwards compatibility and \[inaudible 00:54:22\]. Let's see if the other questions give any hints.
**Shane McAllister**: A follow up from Parth, and I hope I've got the name correct, is there any use case where I should not use Realm and use the native ones? In every use case you should always use Realm, that's the answer to that.
**Nikola Irinchev**: Now, I know that there are cases where SQLite performs better than Realm. The main difference is SQLite gives you static data. You get something from the database and it never changes. That may be desirable in certain cases. That is certainly desirable if you want to pass that data to a lot of threads. Because the data is static, you don't have to worry about values updating in the background and, suddenly, things changing under your feet.
**Nikola Irinchev**: That being said, we believe that Realm should fit all use cases. And we are working hard to make it fit all use cases, but there are certainly going to be cases where, for example, you want to use the iOS synchronization with the Apple ID. That is an absolutely valid case. Realm has its own synchronization, but it works differently from what Apple offers. It's not as automatic.
**Shane McAllister**: Sure. No, that makes sense. And you answered his third question during the presentation, which was about having multiple Realms in a single app. I think that was definitely covered. Nick has told me to go and ask his questions too to save time. I presume the minute or two we tried to get James on board it wasn't the best use of our time.
**Shane McAllister**: Nick, you've requested number two here but I can't see question number one, so go ahead and... I have to read this myself. Is RealmObject still the direction? A few years ago there was talk of using generators with an interface which would make inheritance easier, particularly for NSObject requirements for iOS.
**Nikola Irinchev**: Yes. Generators are definitely something that we are very interested in. This is very much up in the air, I'm not giving any promises. But generators shipped with .NET 5 in November, at least, the stable version of generators. We haven't gotten the time to really play with them properly, but are definitely interested. And especially for Unity, that is an option that we want to offer, because certain things there also have special inheritance requirements. So yeah, generators are in the hazy part of the roadmap, but definitely an area of interest. We would want to offer both options in the future.
**Shane McAllister**: That makes sense. And Nick had a follow up question then was, performance recommendations for partial string search of a property, is it indexing the property?
**Nikola Irinchev**: Yeah. Right now, indexing the property will not make a performance difference when searching for partial matches. Once the full text search effort is closer to completion, that will yield performance benefits for partial searches even for non-full-text-search properties. Right now indexing won't make a difference, in the future it will.
**Shane McAllister**: Okay, perfect. So we have two from James here. Will Realm be getting cascading deletes?
**Nikola Irinchev**: Another thing that we didn't touch on in this talk is the concept of embedded objects. If you're familiar with MongoDB and their embedded objects there, it's a very similar concept. You have a top-level object like a person, and they may have objects that are embedded in that object, say, a list of addresses associated with that person.
**Nikola Irinchev**: Embedded objects implement cascading deletes in the sense that if you delete the person, then all their objects are going to be deleted. That is not supported currently for top-level objects. It is something that we are continuously evaluating how to support in the best possible way. The main challenge there, of course, in a distributed system where sync is involved, cascading deletes are very dangerous. You never know who might be linking to a particular object that has been offline, for example, and you haven't seen their changes. So we are evaluating cascading deletes for standalone objects, but embedded objects will fit like 90% of the use cases people could have for cascading deletes.
**Shane McAllister**: Super. Perfect. Thank you, Nikola. And I think there's only one more. Again, from James. Will Realm be providing a database viewer without having to remove it from the device, was the question.
**Nikola Irinchev**: That is an interesting question. Yeah, that's an interesting question and I don't know the answer to that, unfortunately.
**Shane McAllister**: That's all right. We don't need to know all the answers, that's what the meetups are for, right? You get to go back to the engineering team now and say, "Hey, I got asked a really interesting question in a meetup, what are we going to do with this?"
**Shane McAllister**: James had another one there that he just snuck in. He's quick at typing. Will Realm objects work in unit tests or do they only work when the Realm is running, for example, integration test.
**Nikola Irinchev**: Realm objects behave exactly like in-memory objects when they're not associated with a Realm. So you can create a standalone person, not add it to Realm, and it will behave exactly like a person model with plain in-memory properties. So that's probably not going to be enough for a unit test, especially if you rely on the property change notification mechanism. Because an object that is not associated with Realm is not going to get notified if another instance of the same object changes, because they're not linked in any way whatsoever.
**Nikola Irinchev**: But Realm does have the option to run in in-memory mode. So you don't have to create a file on disk, you can run it in memory. And that is what we've seen people typically use for unit tests. It's a stretch to call it a unit test, it's really an integration test, but it fits 90% of the expectations from a unit test. So that's something that James could try, give it a go.
**Nikola Irinchev**: But we're definitely interested in seeing what obstacles people are hitting when writing unit tests, so we'll be very happy to see if we fit the bill currently, or if there's a way to fit the bill by changing some of the API.
**Shane McAllister**: Super. And another final one there from \[Nishit 01:02:26\]. UI designed by schema XML or something else?
**Nikola Irinchev**: I'm not sure. It can mean two things the way I understand it. One is, if he's asking about the design of the schema of Realm, then it's not done by XML or anything, it's designed by just defining your models.
**Shane McAllister**: I think it's more to do with the UI design in XML, in your app development. That's the way I \[crosstalk 01:03:02\] question.
**Nikola Irinchev**: I don't know. For Xamarin.Forms, and we like to XAML, then yeah. XAML is a great way to design your UI, and it's XML based. But yeah. And Nishi, if you want to drop your question in the forum, I'd be happy to follow up on that.
**Shane McAllister**: Yeah.
**Nikola Irinchev**: Just a little bit more context there.
**Shane McAllister**: Well, look, we're going over the hour there. I think this has been superb, I've certainly learned a lot. And thanks, everybody, for attending. Everybody seems to have filled out the Swag form, so that seems to have gone down well. As I said at the beginning, the shipping takes a little while so please be patient with us, it's certainly going to take maybe two, three weeks to hit some of you, depending on where you are in the world.
**Shane McAllister**: We do really appreciate this. And so the couple of things that I would ask you to do for those that attended today, is to follow @Realm on Twitter. As Nikola and Ferdinando have said, we're active on our forums, please join our forums. So if you go to developer.mongodb.com you'll see our forums, but you'll also see our developer hub, and links to our meetup platform, live.mongodb.com.
**Shane McAllister**: Perfect timing, thank you. There are the URLs. So please do that. But the other thing too, is that this, as I said, is the third this year, and we've got three more coming up. Now look, that they're all in different fields, but the dates are here. So up next, on the 18th of March, we have Jason and Realm SwiftUI, Property wrappers, and MVI architecture.
**Shane McAllister**: And then we're back on the 24th of March with Realm Kotlin multi platform for modern mobile apps. And then into April, but we probably might slot another one in before then. We have Krane with Realm JS for React Native applications as well too. So if you join the global Realm user group on live.mongodb.com, any future events that we create, you will automatically get emailed about those and you simply RSVP, and you end up exactly how you did today. So we do appreciate it.
**Shane McAllister**: For me, I appreciate Ferdinando and Nikola, all the work. I was just here as a talking head at the beginning, at the end, those two did all the heavy lifting. So I do appreciate that, thank you very much. We did record this, so if there's anything you want to go back over, there was a lot of information to take in, it will be available. You will get via the platform, the YouTube link for where it lives, and we'll also be probably posting that out on Twitter as well too. So that's all from me, unless Nikola, Ferdinando you've anything else further to add. We're good?
**Nikola Irinchev**: Yeah.
**Shane McAllister**: Thank you very much, everyone, for attending. Thank you for your time and have a good rest of your week. Take care.
**Ferdinando Papale**: See you. | md | {
"tags": [
"Realm",
"C#",
".NET"
],
"pageDescription": "Missed Realm .NET for Xamarin (best practices and roadmap) meetup event? Don't worry, you can catch up here.",
"contentType": "Article"
} | Realm .NET for Xamarin (Best Practices and Roadmap) Meetup | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/taking-rag-to-production-documentation-ai-chatbot | created | # Taking RAG to Production with the MongoDB Documentation AI Chatbot
At MongoDB, we have a tagline: "Love your developers." One way that we show love to our developers is by providing them with excellent technical documentation for our products. Given the rise of generative AI technologies like ChatGPT, we wanted to use generative AI to help developers learn about our products using natural language. This led us to create an AI chatbot that lets users talk directly to our documentation. With the documentation AI chatbot, users can ask questions and then get answers and related content more efficiently and intuitively than previously possible.
You can try out the chatbot at mongodb.com/docs.
This post provides a technical overview of how we built the documentation AI chatbot. It covers:
- The chatbot’s retrieval augmented generation (RAG) architecture.
- The challenges in building a RAG chatbot for the MongoDB documentation.
- How we built the chatbot to overcome these challenges.
- How we used MongoDB Atlas in the application.
- Next steps for building your own production RAG application using MongoDB Atlas.
## The chatbot's RAG architecture
We built our chatbot using the retrieval augmented generation (RAG) architecture. RAG augments the knowledge of large language models (LLMs) by retrieving relevant information for users' queries and using that information in the LLM-generated response. We used MongoDB's public documentation as the information source for our chatbot's generated answers.
To retrieve relevant information based on user queries, we used MongoDB Atlas Vector Search. We used the Azure OpenAI ChatGPT API to generate answers in response to user questions based on the information returned from Atlas Vector Search. We used the Azure OpenAI embeddings API to convert MongoDB documentation and user queries into vector embeddings, which help us find the most relevant content for queries using Atlas Vector Search.
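As an illustration of the embedding step, here is a minimal TypeScript sketch using the `@azure/openai` client. The environment variable names and the `text-embedding-ada-002` deployment name are assumptions for this example rather than our exact configuration.
```typescript
import { OpenAIClient, AzureKeyCredential } from "@azure/openai";

// Assumed environment variables and deployment name -- adjust for your Azure OpenAI setup.
const openAiClient = new OpenAIClient(
  process.env.AZURE_OPENAI_ENDPOINT as string,
  new AzureKeyCredential(process.env.AZURE_OPENAI_API_KEY as string)
);

// Convert a piece of text (a documentation chunk at ingestion time, or a user
// query at chat time) into a vector embedding for Atlas Vector Search.
async function embedText(text: string): Promise<number[]> {
  const { data } = await openAiClient.getEmbeddings("text-embedding-ada-002", [text]);
  return data[0].embedding;
}
```
Running documentation chunks and user queries through the same embedding function keeps them in the same vector space, which is what makes the similarity search meaningful.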
Here's a high-level diagram of the chatbot's RAG architecture:
![High-level diagram of the chatbot's RAG architecture][1]
## Building a "naive RAG" MVP
Over the past few months, a lot of tools and reference architectures have come out for building RAG applications. We decided it would make the most sense to start simple, and then iterate with our design once we had a functional minimal viable product (MVP).
Our first iteration was what Jerry Liu, creator of RAG framework LlamaIndex, calls "naive RAG". This is the simplest form of RAG. Our naive RAG implementation had the following flow:
- **Data ingestion**: Ingesting source data into MongoDB Atlas, breaking documents into smaller chunks, and storing each chunk with its vector embedding. Index the vector embeddings using MongoDB Atlas Vector Search.
- **Chat**: Generating an answer by creating an embedding for the user's question, finding matching chunks with MongoDB Atlas Vector Search, and then summarizing an answer using these chunks.
We got a reasonably functional naive RAG prototype up and running with a small team in under two months. To assess the quality of generated responses and links, we had MongoDB employees volunteer to test out the chatbot in a red teaming exercise.
To learn more about the approach we took to red teaming, refer to the documentation from Microsoft.
## Challenges in building a RAG application for MongoDB documentation
The red teaming exercise revealed that the naive RAG chatbot provided satisfactory answers roughly 60% of the time.
For the 40% of answers that were unsatisfactory, we noticed a few common themes:
- The chatbot was not aware of previous messages in the conversation.
For example, the conversation might go like:
```txt
User: how to create a cluster atlas
Chatbot: To create a MongoDB Atlas cluster using the CLI, you can use the `mongocli atlas quickstart` command...
User: how to with UI
Chatbot: I'm sorry, I don't know what you mean...
```
- The chatbot sometimes gave niche or overly specific solutions when a more general answer would have been useful. MongoDB has many products with overlapping functionality (database drivers in multiple languages, MongoDB on Atlas and self-hosted, etc.) and without a clear priority, it could seemingly choose one at random.
For example, the conversation might go like:
```txt
User: how to connect to mongodb
Chatbot: To connect to MongoDB you can use the Java Driver...
```
- The chatbot’s further reading links were not consistently relevant.
For example the conversation might go like:
```txt
User: write data to mongodb node.js
Chatbot: /* Reasonable generated answer */
Further reading:
- https://www.mongodb.com/docs/drivers/node/current/usage-examples/insertOne/ (👍)
- https://www.mongodb.com/developer/languages/javascript/node-connect-mongodb/ (👍)
- https://www.mongodb.com/developer/products/realm/realm-meetup-javascript-react-native/ (🤷)
```
To get the chatbot to a place where we felt comfortable putting it out into the world, we needed to address these limitations.
## Refactoring the chatbot to be production ready
This section covers how we built the documentation AI chatbot to address the previously mentioned limitations of naive RAG to build a not-so-naive chatbot that better responds to user questions.
Using the approach described in this section, we got the chatbot to over 80% satisfactory responses in a subsequent red teaming exercise.
### Data ingestion
We set up a CLI for data ingestion, pulling content from MongoDB's documentation and the Developer Center. A nightly cron job ensures the chatbot's information remains current.
Our ingestion pipeline involves two primary stages:
#### 1. Pull raw content
We created a `pages` CLI command that pulls raw content from data sources into Markdown for the chatbot to use. This stage handles varied content formats, including abstract syntax trees, HTML, and Markdown. We stored this raw data in a `pages` collection in MongoDB.
Example `pages` command:
```sh
ingest pages --source docs-atlas
```
#### 2. Chunk and embed content
An `embed` CLI command takes the data from the `pages` collection and transforms it into a form that the chatbot can use in addition to generating vector embeddings for the content. We stored the transformed content in the `embedded_content` collection, indexed using MongoDB Atlas Vector Search.
Example `embed` command:
```sh
ingest embed --source docs-atlas \
--since 2023-11-07 # only update documentation changed since this time
```
To transform our `pages` documents into `embedded_content` documents, we used the following strategy:
- Break each page into one or more chunks using the LangChain RecursiveCharacterTextSplitter. We used the RecursiveCharacterTextSplitter to split the text into logical chunks, such as by keeping page sections (as denoted by headers) and code examples together.
- Allow a max chunk size of 650 tokens. This led to an average chunk size of 450 tokens, which aligns with emerging best practices.
- Remove all chunks that are less than 15 tokens in length. These would sometimes show up in vector search results because they'd closely match the user query even though they provided little value for informing the answer generated by the ChatGPT API.
- Add metadata to the beginning of each chunk before creating the embedding. This gives the chunk greater semantic meaning to create the embedding with. See the following section for more information about how adding metadata greatly improved the quality of our vector search results.
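To make the chunking step concrete, here is a simplified TypeScript sketch using LangChain's Markdown-aware splitter. It approximates our pipeline: the chunk size here is measured in characters rather than tokens, and the size and minimum-length values are illustrative, not the exact numbers above.
```typescript
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";

// Split Markdown page content into chunks, keeping sections (denoted by
// headers) and code blocks together where possible.
const splitter = RecursiveCharacterTextSplitter.fromLanguage("markdown", {
  chunkSize: 2000, // characters, roughly approximating a ~650-token budget
  chunkOverlap: 0,
});

async function chunkPage(pageBody: string): Promise<string[]> {
  const chunks = await splitter.splitText(pageBody);
  // Drop tiny fragments that add noise to vector search results.
  return chunks.filter((chunk) => chunk.length >= 60);
}
```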
##### Add chunk metadata
The most important improvement that we made to the chunking and embedding was to **prepend chunks with metadata**. For example, say you have this chunk of text about using MongoDB Atlas Vector Search:
```txt
### Procedure
#### Go to the Search Tester.
- Click the cluster name to view the cluster details.
- Click the Search tab.
- Click the Query button to the right of the index to query.
#### View and edit the query syntax.
Click Edit $search Query to view a default query syntax sample in JSON (Javascript Object Notation) format.
```
This chunk itself has relevant information about performing a semantic search on Atlas data, but it lacks context data that makes it more likely to be found in the search results.
Before creating the vector embedding for the content, we add metadata to the top of the chunk to change it to:
```txt
---
tags:
- atlas
- docs
productName: MongoDB Atlas
version: null
pageTitle: How to Perform Semantic Search Against Data in Your Atlas Cluster
hasCodeBlock: false
---
### Procedure
#### Go to the Search Tester.
- Click the cluster name to view the cluster details.
- Click the Search tab.
- Click the Query button to the right of the index to query.
#### View and edit the query syntax.
Click Edit $search Query to view a default query syntax sample in JSON (Javascript Object Notation) format.
```
Adding this metadata to the chunk greatly improved the quality of our search results, especially when combined with adding metadata to the user's query on the server before using it in vector search, as discussed in the “Chat Server” section.
#### Example document from `embedded_content` collection
Here’s an example document from the `embedded_content` collection. The `embedding` field is indexed with MongoDB Atlas Vector Search.
```js
{
  _id: new ObjectId("65448eb04ef194092777bcf6"),
chunkIndex: 4,
sourceName: "docs-atlas",
url: "https://mongodb.com/docs/atlas/atlas-vector-search/vector-search-tutorial/",
text: '---\ntags:\n - atlas\n - docs\nproductName: MongoDB Atlas\nversion: null\npageTitle: How to Perform Semantic Search Against Data in Your Atlas Cluster\nhasCodeBlock: false\n---\n\n### Procedure\n\n\n\n\n\n#### Go to the Search Tester.\n\n- Click the cluster name to view the cluster details.\n\n- Click the Search tab.\n\n- Click the Query button to the right of the index to query.\n\n#### View and edit the query syntax.\n\nClick Edit $search Query to view a default query syntax sample in JSON (Javascript Object Notation) format.',
tokenCount: 151,
metadata: {
tags: "atlas", "docs"],
productName: "MongoDB Atlas",
version: null,
pageTitle: "How to Perform Semantic Search Against Data in Your Atlas Cluster",
hasCodeBlock: false,
},
embedding: [0.002525234, 0.038020607, 0.021626275 /* ... */],
updated: new Date()
};
```
#### Data ingestion flow diagram
![Ingest data flow diagram][2]
### Chat server
We built an Express.js server to coordinate RAG between the user, MongoDB documentation, and ChatGPT API. We used MongoDB Atlas Vector Search to perform a vector search on the ingested content in the `embedded_content` collection. We persist conversation information, including user and chatbot messages, to a `conversations` collection in the same MongoDB database.
The Express.js server is a fairly straightforward RESTful API with three routes:
- `POST /conversations`: Create a new conversation.
- `POST /conversations/:conversationId/messages`: Add a user message to a conversation and get back a RAG response to the user message. This route has the optional parameter `stream` to stream back a response or send it as a JSON object.
- `POST /conversations/:conversationId/messages/:messageId/rating`: Rate a message.
Most of the complexity of the server was in the `POST /conversations/:conversationId/messages` route, as this handles the whole RAG flow.
We were able to make dramatic improvements over our initial naive RAG implementation by adding what we call a **query preprocessor**.
#### The query preprocessor
A query preprocessor mutates the original user query to something that is more conversationally relevant and gets better vector search results.
For example, say the user inputs the following query to the chatbot:
```txt
$filter
```
On its own, this query has little inherent semantic meaning and doesn't present a clear question for the ChatGPT API to answer.
However, using a query preprocessor, we transform this query into:
```txt
---
programmingLanguages:
- shell
mongoDbProducts:
- MongoDB Server
- Aggregation Framework
---
What is the syntax for filtering data in MongoDB?
```
The application server then sends this transformed query to MongoDB Atlas Vector Search. It yields *much* better search results than the original query. The search query has more semantic meaning itself and also aligns with the metadata that we prepend during content ingestion to create a higher degree of semantic similarity for vector search.
Adding the `programmingLanguages` and `mongoDbProducts` information to the query focuses the vector search to create a response grounded in a specific subset of the total surface area of the MongoDB product suite. For example, here we **would not** want the chatbot to return results for using the PHP driver to perform `$filter` aggregations, but vector search would be more likely to return that if we didn't specify that we're looking for examples that use the shell.
Also, telling the ChatGPT API to answer the question "What is the syntax for filtering data in MongoDB?" provides a clearer answer than telling it to answer the original "$filter".
To create a preprocessor that transforms the query like this, we used the library TypeChat. TypeChat takes a string input and transforms it into a JSON object using the ChatGPT API. TypeChat uses TypeScript types to describe the shape of the output data.
The TypeScript type that we use in our application is as follows:
```ts
/**
You are an AI-powered API that helps developers find answers to their MongoDB
questions. You are a MongoDB expert. Process the user query in the context of
the conversation into the following data type.
*/
export interface MongoDbUserQueryPreprocessorResponse {
/**
One or more programming languages present in the content ordered by
relevancy. If no programming language is present and the user is asking for
a code example, include "shell".
@example "shell", "javascript", "typescript", "python", "java", "csharp",
"cpp", "ruby", "kotlin", "c", "dart", "php", "rust", "scala", "swift"
...other popular programming languages ]
*/
programmingLanguages: string[];
/**
One or more MongoDB products present in the content. Which MongoDB products
is the user interested in? Order by relevancy. Include "Driver" if the user
is asking about a programming language with a MongoDB driver.
@example ["MongoDB Atlas", "Atlas Charts", "Atlas Search", "Aggregation
Framework", "MongoDB Server", "Compass", "MongoDB Connector for BI", "Realm
SDK", "Driver", "Atlas App Services", ...other MongoDB products]
*/
mongoDbProducts: string[];
/**
Using your knowledge of MongoDB and the conversational context, rephrase the
latest user query to make it more meaningful. Rephrase the query into a
question if it's not already one. The query generated here is passed to
semantic search. If you do not know how to rephrase the query, leave this
field undefined.
*/
query?: string;
/**
Set to true if and only if the query is hostile, offensive, or disparages
MongoDB or its products.
*/
rejectQuery: boolean;
}
```
In our app, TypeChat uses the `MongoDbUserQueryPreprocessorResponse` schema and description to create an object structured according to this schema.
Then, using a simple JavaScript function, we transform the `MongoDbUserQueryPreprocessorResponse` object into a query, which we embed and send to MongoDB Atlas Vector Search.
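That transformation might look something like the following sketch. This is not the actual implementation; it simply mirrors the front-matter format shown earlier and assumes the `MongoDbUserQueryPreprocessorResponse` interface from above is in scope:
```ts
// Hypothetical sketch: turn the preprocessor output into the query text
// that gets embedded and sent to MongoDB Atlas Vector Search.
function toVectorSearchQuery(
  response: MongoDbUserQueryPreprocessorResponse
): string {
  const { programmingLanguages, mongoDbProducts, query } = response;
  return [
    "---",
    "programmingLanguages:",
    ...programmingLanguages.map((lang) => `  - ${lang}`),
    "mongoDbProducts:",
    ...mongoDbProducts.map((product) => `  - ${product}`),
    "---",
    query ?? "",
  ].join("\n");
}
```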
We also have the `rejectQuery` field to flag inappropriate queries. When `rejectQuery` is `true`, the server returns a static response to the user, asking them to try a different query.
#### Chat server flow diagram
![Chat data flow diagram][3]
### React component UI
Our front end is a React component built with the LeafyGreen Design System. The component manages the interaction with the chat server's RESTful API.
Currently, the component is only on the MongoDB docs homepage, but we built it in a way that it could be extended to be used on other MongoDB properties.
You can actually download the UI from npm with the `mongodb-chatbot-ui` package.
Here you can see what the chatbot looks like in action:

![The documentation chatbot UI in action][4]

### MongoDB Atlas Vector Search

We used MongoDB Atlas Vector Search to index and query the content embeddings. We applied the following vector search index definition to the `embedding` field of the `embedded_content` collection:
```json
{
"type": "vectorSearch,
"fields": {
"path": "embedding",
"dimensions": 1536,
"similarity": "cosine",
"type": "vector"
}]
}
```
To run queries using the MongoDB Atlas Vector Search index, it's a simple aggregation operation with the `$vectorSearch` operator using the Node.js driver:
```ts
import { Collection, Document, Filter } from "mongodb";

export async function getVectorSearchResults(
  collection: Collection,
  vectorEmbedding: number[],
  filterQuery: Filter<Document>
) {
  return collection
    .aggregate([
{
$vectorSearch: {
index: "default",
vector: vectorEmbedding,
path: "embedding",
filter: filterQuery,
limit: 3,
numCandidates: 30
},
},
{
$addFields: {
score: {
$meta: "vectorSearchScore",
},
},
},
{ $match: { score: { $gte: 0.8 } } },
])
.toArray();
}
```
Using MongoDB to store the `conversations` data simplified the development experience, as we did not have to think about using a data store for the embeddings that is separate from the rest of the application data.
Using MongoDB Atlas for vector search and as our application data store streamlined our application development process so that we were able to focus on the core RAG application logic, and not have to think very much about managing additional infrastructure or learning new domain-specific query languages.
## What we learned building a production RAG application
The MongoDB documentation AI chatbot has now been live for over a month and works pretty well (try it out!). It's still under active development, and we're going to roll it out to other locations in the MongoDB product suite over the coming months.
Here are a couple of our key learnings from taking the chatbot to production:
- Naive RAG is not enough. However, starting with a naive RAG prototype is a great way for you to figure out how you need to extend RAG to meet the needs of your use case.
- Red teaming is incredibly useful for identifying issues. Red team early in the RAG application development process, and red team often.
- Add metadata to the content before creating embeddings to improve search quality.
- Preprocess user queries with an LLM (like the ChatGPT API and TypeChat) before sending them to vector search and having the LLM respond to the user. The preprocessor should:
- Make the query more conversationally and semantically relevant.
- Include metadata to use in vector search.
- Catch any scenarios, like inappropriate queries, that you want to handle outside the normal RAG flow.
- MongoDB Atlas is a great database for building production RAG apps.
## Build your own production-ready RAG application with MongoDB
Want to build your own RAG application? We've made our source code publicly available as a reference architecture. Check it out on GitHub.
We're also working on releasing an open-source framework to simplify the creation of RAG applications using MongoDB. Stay tuned for more updates on this RAG framework.
Questions? Comments? Join us in the MongoDB Developer Community forum.
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltbd38c0363f44ac68/6552802f9984b8dc525a96e1/281539442-64de6f3a-9119-4b28-993a-9f8c67832e88.png
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt2016e04b84663d9f/6552806b4d28595c45afa7e9/281065694-88b0de91-31ed-4a18-b060-3384ac514b6c.png
[3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt65a54cdc0d34806a/65528091c787a440a22aaa1f/281065692-052b15eb-cdbd-4cf8-a2a5-b0583a78b765.png
[4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt58e9afb62d43763f/655280b5ebd99719aa13be92/281156988-2c5adb94-e2f0-4d4b-98cb-ce585baa7ba1.gif | md | {
"tags": [
"Atlas",
"React",
"Node.js"
],
"pageDescription": "Explore how MongoDB enhances developer support with its innovative AI chatbot, leveraging Retrieval Augmented Generation (RAG) technology. This article delves into the technical journey of creating an AI-driven documentation tool, discussing the RAG architecture, challenges, and solutions in implementing MongoDB Atlas for a more intuitive and efficient developer experience. Discover the future of RAG applications and MongoDB's pivotal role in this cutting-edge field.",
"contentType": "Article"
} | Taking RAG to Production with the MongoDB Documentation AI Chatbot | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/kubernetes-operator-application-deployment | created | # Application Deployment in Kubernetes with the MongoDB Atlas Operator
Kubernetes is now an industry-wide standard when it comes to all things containers, but when it comes to deploying a database, it can be a bit tricky! However, tasks like adding persistence, ensuring redundancy, and database maintenance can be easily handled with MongoDB Atlas. Fortunately, the MongoDB Atlas Operator gives you the full benefits of using MongoDB Atlas, while still managing everything from within your Kubernetes cluster. In this tutorial, we’ll deploy a MERN stack application in Kubernetes, install the Atlas operator, and connect our back end to Atlas using a Kubernetes secret.
## Pre-requisites
* `kubectl`
* `minikube`
* `jq`
You can find the complete source code for this application on Github. It’s a mini travel planner application using MongoDB, Express, React, and Node (MERN). While this tutorial should work for any Kubernetes cluster, we’ll be using Minikube for simplicity and consistency.
## Getting started
When it comes to deploying a database on Kubernetes, there’s no simple solution. Apart from persistence and redundancy challenges, you may need to move data to specific geolocated servers to ensure that you comply with GDPR policies. Thus, you’ll need a reliable, scalable, and resilient database once you launch your application into production.
MongoDB Atlas is a full developer data platform that includes the database you love, which takes care of many of the database complexities you’re used to. But, there is a gap between MongoDB Atlas and your Kubernetes cluster. Let’s take a look at the MongoDB Atlas Operator by deploying the example MERN application with a back end and front end.
This application uses a three-tier application architecture, which will have the following layout within our Kubernetes cluster:
To briefly overview this layout, we’ve got a back end with a deployment that will ensure we have two pods running at any given time, and the same applies for our front end. Traffic is redirected and configured by our ingress, meaning `/api` requests route to our back end and everything else will go to the front end. The back end of our application is responsible for the connection to the database, where we’re using MongoDB Atlas Operator to link to an Atlas instance.
## Deploying the application on Kubernetes
To simplify the installation process of the application, we can use a single `kubectl` command to deploy our demo application on Kubernetes. The single file we’ll use includes all of the deployments and services for the back end and front end of our application, and uses containers created with the Dockerfiles in the folder.
First, start by cloning the repository that contains the starting source code.
```
git clone https://github.com/mongodb-developer/mern-k8s.git
cd mern-k8s
```
Secondly, as part of this tutorial, you’ll need to run `minikube tunnel` to access our services at `localhost`.
```
minikube tunnel
```
Now, let’s go ahead and deploy everything in our Kubernetes cluster by applying the following `application.yaml` file.
```
kubectl apply -f k8s/application.yaml
```
You can take a look at what you now have running in your cluster by using the `kubectl get` command.
```
kubectl get all
```
You should see multiple pods, services, and deployments for the back end and front end, as well as replicasets. At the moment, they are most likely in a ContainerCreating status. This is because Kubernetes needs to pull the images to its local registry. As soon as the images are ready, the pods will start.
To see the application in action, simply head to `localhost` in your web browser, and the application should be live!
However, you’ll notice there’s no way to add entries to our application, and this is because we haven’t provided a connection string yet for the back end to connect to a MongoDB instance. For example, if we happen to check the logs for one of the recently created backend pods, we can see that there’s a placeholder for a connection string.
```
kubectl logs pod/mern-k8s-back-d566cc88f-hhghl
Connecting to database using $ATLAS_CONNECTION_STRING
Server started on port 3000
MongoParseError: Invalid scheme, expected connection string to start with "mongodb://" or "mongodb+srv://"
```
We’ve run into a slight issue, as this demo application is using a placeholder (`$ATLAS_CONNECTION_STRING`) for the MongoDB connection string, which needs to be replaced by a valid connection string from our Atlas cluster. This issue can be taken care of with the MongoDB Atlas Operator, which allows you to manage everything from within Kubernetes and gives you the full advantages of using MongoDB Atlas, including generating a connection string as a Kubernetes secret.
## Using the MongoDB Atlas Operator for Kubernetes
As there’s currently a gap between your Kubernetes cluster and MongoDB Atlas, let’s use the Atlas Operator to remedy this issue. Through the operator, we’ll be able to manage our Atlas projects and clusters from Kubernetes. Specifically, getting your connection string to fix the error we received previously can be done now through Kubernetes secrets, meaning we won’t need to retrieve it from the Atlas UI or CLI.
### Why use the Operator?
The Atlas Operator bridges the gap between Atlas, the MongoDB data platform, and your Kubernetes cluster. By using the operator, you can use `kubectl` and your familiar tooling to manage and set up your Atlas deployments. Particularly, it allows for most of the Atlas functionality and tooling to be performed without having to leave your Kubernetes cluster. Installing the Atlas operator creates the Custom Resource Definitions that will connect to the MongoDB Atlas servers.
### Installing the Atlas Operator
The installation process for the Atlas Operator is as simple as running a `kubectl` command. All of the source code for the operator can be found on the Github repository.
```
kubectl apply -f https://raw.githubusercontent.com/mongodb/mongodb-atlas-kubernetes/main/deploy/all-in-one.yaml
```
This will create new custom resources in your cluster that you can use to create or manage your existing Atlas projects and clusters.
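If you'd like to verify that the operator's custom resource definitions were installed, you can list them (the exact output depends on the operator version):
```
kubectl get crds | grep atlas.mongodb.com
```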
### Creating a MongoDB Atlas cluster
If you haven't already, head to the Atlas Registration page to create your free account. This account will let you create a database on a shared server, and you won't even need a credit card to use it.
### Set up access
In order for the operator to be able to manage your cluster, you will need to provide it with an API key with the appropriate permissions. Firstly, let’s retrieve the organization ID.
In the upper left part of the Atlas UI, you will see your organization name in a dropdown. Right next to the dropdown is a gear icon. Clicking on this icon will open up a page called _Organization Settings_. From this page, look for a box labeled _Organization ID_.
Save that organization ID somewhere for future use. You can also save it in an environment variable.
```
export ORG_ID=60c102....bd
```
>Note: If using Windows, use:
```
set ORG_ID=60c102....bd
```
Next, let’s create an API key. From the same screen, look for the _Access Manager_ option in the left navigation menu. This will bring you to the _Organization Access_ screen. In this screen, follow the instructions to create a new API key.
The key will need the **Organization Project Creator** role in order to create new projects and clusters. If you want to manage existing clusters, you will need to provide it with the **Organization Owner** role. Save the API private and public keys. You can also add them to the environment.
```
export ATLAS_PUBLIC_KEY=iwpd...i
export ATLAS_PRIVATE_KEY=e13debfb-4f35-4...cb
```
>Note: If using Windows, use:
```
set ATLAS_PUBLIC_KEY=iwpd...i
set ATLAS_PRIVATE_KEY=e13debfb-4f35-4...cb
```
### Create the Kubernetes secrets
Now that you have created the API key, you can specify those values to the MongoDB Atlas Operator. By creating this secret in our Kubernetes cluster, this will give the operator the necessary permissions to create and manage projects and clusters for our specific Atlas account.
You can create the secret with `kubectl`, and to keep it simple, let’s name our secret `mongodb-atlas-operator-api-key`. For the operator to be able to find this secret, it needs to be within the namespace `mongodb-atlas-system`.
```
kubectl create secret generic mongodb-atlas-operator-api-key \
--from-literal="orgId=$ORG_ID" \
--from-literal="publicApiKey=$ATLAS_PUBLIC_KEY" \
--from-literal="privateApiKey=$ATLAS_PRIVATE_KEY" \
-n mongodb-atlas-system
```
Next, we’ll need to label this secret, which helps the Atlas operator in finding the credentials.
```
kubectl label secret mongodb-atlas-operator-api-key atlas.mongodb.com/type=credentials -n mongodb-atlas-system
```
### Create a user password
We’ll need a password for our database user in order to access our databases, create new databases, etc. However, you won't want to hard code this password into your yaml files. It’s safer to save it as a Kubernetes secret. Just like the API key, this secret will need to be labeled too.
```
kubectl create secret generic atlaspassword --from-literal="password=mernk8s"
kubectl label secret atlaspassword atlas.mongodb.com/type=credentials
```
## Create and manage an Atlas deployment
Congrats! You are now ready to manage your Atlas projects and deployments from Kubernetes. This can be done with the three new CRDs that were added to your cluster. Those CRDs are `AtlasProject` to manage projects, `AtlasDeployment` to manage deployments, and `AtlasDatabaseUser` to manage database users within MongoDB Atlas.
* Projects: Allows you to isolate different database environments (for instance, development/qa/prod environments) from each other, as well as users/teams.
* Deployments: Instance of MongoDB running on a cloud provider.
* Users: Database users that have access to MongoDB database deployments.
The process of creating a project, user, and deployment is demonstrated below, but feel free to skip down to simply apply these files by using the `/atlas` folder.
### Create a project
Start by creating a new project in which the new cluster will be deployed. In a new file called `/operator/project.yaml`, add the following:
```
apiVersion: atlas.mongodb.com/v1
kind: AtlasProject
metadata:
name: mern-k8s-project
spec:
name: "MERN K8s"
projectIpAccessList:
- ipAddress: "0.0.0.0/0"
comment: "Allowing access to database from everywhere (only for Demo!)"
```
This will create a new project called "MERN K8s" in Atlas. Now, this project will be open to anyone on the web. It’s best practice to only open it to known IP addresses as mentioned in the comment.
### Create a new database user
Now, in order for your application to connect to this database, you will need a database user. To create this user, open a new file called `/operator/user.yaml`, and add the following:
```
apiVersion: atlas.mongodb.com/v1
kind: AtlasDatabaseUser
metadata:
name: atlas-user
spec:
roles:
- roleName: "readWriteAnyDatabase"
databaseName: "admin"
projectRef:
name: mern-k8s-project
username: mernk8s
passwordSecretRef:
name: atlaspassword
```
You can see how the user’s password comes from the `atlaspassword` secret we created earlier, and how `projectRef` points to the `mern-k8s-project` project we just defined.
### Create a deployment
Finally, as you have a project setup and user to connect to the database, you can create a new deployment inside this project. In a new file called `/operator/deployment.yaml`, add the following yaml.
```
apiVersion: atlas.mongodb.com/v1
kind: AtlasDeployment
metadata:
name: mern-k8s-cluster
spec:
projectRef:
name: mern-k8s-project
deploymentSpec:
name: "Cluster0"
providerSettings:
instanceSizeName: M0
providerName: TENANT
regionName: US_EAST_1
backingProviderName: AWS
```
This will create a new M0 (free) deployment on AWS, in the US_EAST_1 region. Here, we’re referencing the `mern-k8s-project` in our Kubernetes namespace, and creating a cluster named `Cluster0`. You can use a similar syntax to deploy in any region on AWS, GCP, or Azure. To create a serverless instance, see the serverless instance example.
### Apply the new files
You now have everything ready to create this new project and cluster. You can apply those new files to your cluster using:
```
kubectl apply -f ./operator
```
This will take a couple of minutes. You can see the status of the cluster and project creation with `kubectl`.
```
kubectl get atlasprojects
kubectl get atlasdeployments
```
In the meantime, you can go to the Atlas UI. The project should already be created, and you should see that a cluster is in the process of being created.
### Get your connection string
Getting your connection string to that newly created database can now be done through Kubernetes. Once your new database has been created, you can use the following command, which uses `jq` to view the connection strings without opening the Atlas UI by decoding the Base64-encoded values in the secret.
```
kubectl get secret mern-k8s-cluster0-mernk8s -o json | jq -r '.data | with_entries(.value |= @base64d)'
{
…
"connectionStringStandard": "",
"connectionStringStandardSrv": "mongodb+srv://mernk8s:mernk8s@cluster0.fb4qw.mongodb.net",
"password": "mernk8s",
"username": "mernk8s"
}
```
## Configure the application back end using the Atlas operator
Now that your project and cluster are created, you can access the various properties from your Atlas instance. You can now access the connection string, and even configure your backend service to use that connection string. We’ll go ahead and connect our back end to our database without actually specifying the connection string, instead using the Kubernetes secret we just created.
### Update the backend deployment
Now that you can find your connection string from within Kubernetes, you can use that as part of your deployment to specify the connection string to your back end.
In your `/k8s/application.yaml` file, change the `env` section of the containers template to the following:
```
env:
- name: PORT
value: "3000"
- name: "CONN_STR"
valueFrom:
secretKeyRef:
name: mern-k8s-cluster0-mernk8s
key: connectionStringStandardSrv
```
This will use the same connection string you've just seen in your terminal.
Since we’ve changed our deployment, you can apply those changes to your cluster using `kubectl`:
```
kubectl apply -f k8s/application.yaml
```
Now, if you take a look at your current pods:
```
kubectl get pods
```
You should see that your backend pods have been restarted. You should now be able to test the application with the back end connected to our newly created Atlas cluster. Now, just head to `localhost` to view the updated application once the deployment has restarted. You’ll see the application fully running, using this newly created cluster.
In addition, as you add items or perhaps clear the entries of the travel planner, you’ll notice the entries added and removed from the “Collections” tab of the `Cluster0` database within the Atlas UI. Let’s take a look at our database using MongoDB Compass, with username `mernk8s` and password `mernk8s` as we set previously.
### Delete project
Let’s finish off by using `kubectl` to delete the Atlas cluster and project and clean up our workspace. We can delete everything from the current namespace by using `kubectl delete`:
```
kubectl delete atlasdeployment mern-k8s-cluster
kubectl delete atlasproject mern-k8s-project
```
## Summary
You now know how to leverage the MongoDB Atlas Operator to create and manage clusters from Kubernetes. We’ve only demonstrated a small bit of the functionality the operator provides, but feel free to head to the documentation to learn more.
If you are using MongoDB Enterprise instead of Atlas, there is also an Operator available, which works in very similar fashion.
To go through the full lab by Joel Lord, which includes this guide and much more, check out the self-guided Atlas Operator Workshop. | md | {
"tags": [
"Atlas",
"JavaScript",
"Kubernetes",
"Docker"
],
"pageDescription": "Get started with application deployment into a Kubernetes cluster using the MongoDB Atlas Operator.",
"contentType": "Tutorial"
} | Application Deployment in Kubernetes with the MongoDB Atlas Operator | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/demystifying-stored-procedures-mongodb | created | # Demystifying Stored Procedures in MongoDB
If you have ever used a SQL database, you might have heard about stored procedures. Stored procedures represent pre-written SQL code designed for reuse. By storing frequently used SQL queries as procedures, you can execute them repeatedly. Additionally, these procedures can be parameterized, allowing them to operate on specified parameter values. Oftentimes, developers find themselves wondering:
- Does MongoDB support stored procedures?
- Where do you write the logic for stored procedures in MongoDB?
- How can I run a query every midnight, like a CRON job?
In today’s article, we are going to answer these questions and demystify stored procedures in MongoDB.
## Does MongoDB support stored procedures?
Essentially, a stored procedure consists of a set of SQL statements capable of accepting parameters, executing tasks, and optionally returning values. In the world of MongoDB, we can achieve this using an aggregation pipeline.
An aggregation pipeline, in a nutshell, is a series of stages where the output from a particular stage is the input for the next stage, and the last stage’s output is the final result.
Now, every stage performs some sort of processing to the input provided to it, like filtering, grouping, shaping, calculating, etc. You can even perform vector search and full-text search using MongoDB’s unified developer data platform, Atlas.
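As a quick illustration (the collection and field names are made up for the example), a two-stage pipeline that filters documents and then groups them might look like this:
```
db.orders.aggregate([
  // Stage 1: keep only completed orders
  { $match: { status: "completed" } },
  // Stage 2: total spend per customer, computed from the output of stage 1
  { $group: { _id: "$customer_id", totalSpent: { $sum: "$total_price" } } }
])
```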
Let's see how MongoDB’s aggregation pipeline, Atlas triggers, and change streams together can act as a super efficient, powerful, and flexible alternative to stored procedures.
## What is MongoDB Atlas?
MongoDB Atlas is a multi-cloud developer data platform focused on making it stunningly easy to work with data. It offers the optimal environment for running MongoDB, the leading non-relational database solution.
MongoDB's document model facilitates rapid innovation by directly aligning with the objects in your code. This seamless integration makes data manipulation more intuitive and efficient. With MongoDB, you have the flexibility to store data of diverse structures and adapt your schema effortlessly as your application evolves with new functionalities.
The Atlas database is available in 100+ regions across AWS, Google Cloud, and Azure. You can even take advantage of multi-cloud and multi-region deployments, allowing you to target the providers and regions that best serve your users. It has best-in-class automation and proven practices that guarantee availability, scalability, and compliance with the most demanding data security and privacy standards.
## What is an Atlas Trigger?
Database triggers enable the execution of server-side logic whenever a document undergoes addition, modification, or deletion within a connected Atlas cluster.
Unlike conventional SQL data triggers confined to the database server, Atlas Triggers operate on a serverless compute layer capable of scaling autonomously from the database server.
It seamlessly invokes Atlas Functions and can also facilitate event forwarding to external handlers via Amazon EventBridge.
## How can Atlas Triggers be invoked?
An Atlas Trigger might fire on:
- A specific operation type in a given collection, like insert, update, and delete.
- An authentication event, such as User Creation or Deletion.
- A scheduled time, like a CRON job.
## Types of Atlas Triggers
There are three types of triggers in Atlas:
- Database triggers are used in scenarios where you want to respond when a document is inserted, changed, or deleted.
- Authentication triggers can be used where you want to respond when a database user is created, logged in, or deleted.
- Scheduled triggers act like a CRON job and run on a predefined schedule.
Refer to Configure Atlas Triggers for advanced options.
## Atlas Triggers in action
Let's compare how stored procedures can be implemented in SQL and MongoDB using triggers, functions, and aggregation pipelines.
### The SQL way
Here's an example of a stored procedure in MySQL that calculates the total revenue for the day every time a new order is inserted into an orders table:
```
DELIMITER $$
CREATE PROCEDURE UpdateTotalRevenueForToday()
BEGIN
DECLARE today DATE;
DECLARE total_revenue DECIMAL(10, 2);
-- Get today's date
SET today = CURDATE();
-- Calculate total revenue for today
SELECT SUM(total_price) INTO total_revenue
FROM orders
WHERE DATE(order_date) = today;
-- Update total revenue for today in a separate table or perform any other necessary action
-- Here, I'm assuming you have a separate table named 'daily_revenue' to store daily revenue
-- If not, you can perform any other desired action with the calculated total revenue
-- Update or insert the total revenue for today into the 'daily_revenue' table
INSERT INTO daily_revenue (date, revenue)
VALUES (today, total_revenue)
ON DUPLICATE KEY UPDATE revenue = total_revenue;
END$$
DELIMITER ;
```
In this stored procedure:
- We declare two variables: today to store today's date and total_revenue to store the calculated total revenue for today.
- We use a SELECT statement to calculate the total revenue for today from the orders table where the order_date matches today's date.
- We then update the daily_revenue table with today's date and the calculated total revenue. If there's already an entry for today's date, it updates the revenue. Otherwise, it inserts a new row for today's date.
Now, we have to create a trigger to call this stored procedure every time a new order is inserted into the orders table. Here's an example of how to create such a trigger:
```
CREATE TRIGGER AfterInsertOrder
AFTER INSERT ON orders
FOR EACH ROW
BEGIN
CALL UpdateTotalRevenueForToday();
END;
```
This trigger will call the UpdateTotalRevenueForToday() stored procedure every time a new row is inserted into the orders table.
### The MongoDB way
If you don’t have an existing MongoDB Database deployed on Atlas, start for free and get 500MBs of storage free forever.
Now, all we have to do is create an Atlas Trigger and implement an Atlas Function in it.
Let’s start by creating an Atlas database trigger.
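The trigger's logic lives in an Atlas Function, written in JavaScript. Below is a minimal sketch of what such a function could look like, mirroring the daily-revenue SQL example above. The linked data source name (`mongodb-atlas`), the database name, and the collection names are assumptions for the sketch:

```
// Hypothetical database trigger function: recalculates today's total revenue
// whenever an order document is inserted. The linked data source name
// ("mongodb-atlas"), database, and collection names are assumptions.
exports = async function (changeEvent) {
  const db = context.services.get("mongodb-atlas").db("store");
  const orders = db.collection("orders");
  const dailyRevenue = db.collection("daily_revenue");

  // Start of the current day (UTC)
  const today = new Date();
  today.setUTCHours(0, 0, 0, 0);

  // Aggregate today's revenue from the orders collection
  const [result] = await orders
    .aggregate([
      { $match: { order_date: { $gte: today } } },
      { $group: { _id: null, revenue: { $sum: "$total_price" } } },
    ])
    .toArray();

  // Upsert the total into the daily_revenue collection
  await dailyRevenue.updateOne(
    { date: today },
    { $set: { revenue: result ? result.revenue : 0 } },
    { upsert: true }
  );
};
```

Configured as a database trigger on the `orders` collection, a function like this runs automatically on every insert, with no separate scheduler or client-side code required.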
MongoDB’s aggregation pipelines and change streams, paired with Atlas Triggers, are powerful alternatives to traditional stored procedures. MongoDB Atlas, the developer data platform, further enhances development flexibility with features like Atlas Functions and Triggers, enabling seamless integration of server-side logic within the database environment.
The migration from stored procedures to MongoDB is not just a technological shift; it represents a paradigm shift towards embracing a future-ready digital landscape. As organizations transition, they gain the ability to leverage MongoDB's innovative solutions, maintaining agility, enhancing performance, and adhering to contemporary development practices.
So, what are you waiting for? Sign up for Atlas today and experience the modern alternative to stored procedures in MongoDB.
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltcb5b2b2db6b3a2b6/65dce8447394e52da349971b/image1.png
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt5cbb9842024e79f1/65dce844ae62f722b74bdfe0/image2.png
[3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt529e154503e12f56/65dce844aaeb364e19a817e3/image3.png
[4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blta2a7b461deb6879d/65dce844aaeb36b5d5a817df/image4.png
[5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt5801d9543ac94f25/65dce8446c65d723e087ae99/image5.png
[6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt5ceb23e853d05d09/65dce845330e0069f27f5980/image6.png
[7]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt626536829480d1be/65dce845375999f7bc70a71b/image7.png
[8]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt1b1366584ee0766a/65dce8463b4c4f91f07ace17/image8.png | md | {
"tags": [
"Atlas",
"JavaScript",
"Node.js"
],
"pageDescription": "Let's see how MongoDB’s aggregation pipeline, Atlas triggers, and change streams together can act as a super efficient, powerful, and flexible alternative to stored procedures.",
"contentType": "Tutorial"
} | Demystifying Stored Procedures in MongoDB | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/python/efficiently-managing-querying-visual-data-mongodb-atlas-vector-search-fiftyone | created | # Efficiently Managing and Querying Visual Data With MongoDB Atlas Vector Search and FiftyOne
The integration between FiftyOne and MongoDB Atlas enables the processing and analysis of visual data with unparalleled efficiency!
In this post, we will show you how to use FiftyOne and MongoDB Atlas Vector Search to streamline your data-centric workflows and interact with your visual data like never before.
## What is FiftyOne?
FiftyOne is an open-source toolkit for the curation and visualization of unstructured data, built on top of MongoDB. It leverages the non-relational nature of MongoDB to provide an intuitive interface for working with datasets consisting of images, videos, point clouds, PDFs, and more.
You can install FiftyOne from PyPi:
```
pip install fiftyone
```
The core data structure in FiftyOne is the Dataset, which consists of samples — collections of labels, metadata, and other attributes associated with a media file. You can access, query, and run computations on this data either programmatically, with the FiftyOne Python software development kit, or visually via the FiftyOne App.
As an illustrative example, we’ll be working with the Quickstart dataset, which we can load from the FiftyOne Dataset Zoo:
```python
import fiftyone as fo
import fiftyone.zoo as foz
## load dataset from zoo
dataset = foz.load_zoo_dataset("quickstart")
## launch the app
session = fo.launch_app(dataset)
```
💡It is also very easy to load in your data.
Once you have a `fiftyone.Dataset` instance, you can create a view into your dataset (`DatasetView`) by applying view stages. These view stages allow you to perform common operations like filtering, matching, sorting, and selecting by using arbitrary attributes on your samples.
To programmatically isolate all high-confidence predictions of an `airplane`, for instance, we could run:
```python
from fiftyone import ViewField as F
view = dataset.filter_labels(
"predictions",
(F("label") == "airplane") & (F("confidence") > 0.8)
)
```
Note that this achieves the same result as the UI-based filtering in the last GIF.
This querying functionality is incredibly powerful. For a full list of supported view stages, check out this View Stages cheat sheet. What’s more, these operations readily scale to billions of samples. How? Simply put, they are built on MongoDB aggregation pipelines!
When you print out the `DatasetView`, you can see a summary of the applied aggregation under “View stages”:
```python
# view the dataset and summary
print(view)
```
```
Dataset: quickstart
Media type: image
Num samples: 14
Sample fields:
id: fiftyone.core.fields.ObjectIdField
filepath: fiftyone.core.fields.StringField
tags: fiftyone.core.fields.ListField(fiftyone.core.fields.StringField)
metadata: fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.metadata.ImageMetadata)
ground_truth: fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.labels.Detections)
uniqueness: fiftyone.core.fields.FloatField
predictions: fiftyone.core.fields.EmbeddedDocumentField(fiftyone.core.labels.Detections)
View stages:
1. FilterLabels(field='predictions', filter={'$and': [{...}, {...}]}, only_matches=True, trajectories=False)
```
We can explicitly obtain the MongoDB aggregation pipeline for the view we created directly with the `_pipeline()` method:
```python
## Inspect the MongoDB agg pipeline
print(view._pipeline())
```
```
[{'$addFields': {'predictions.detections': {'$filter': {'input': '$predictions.detections',
'cond': {'$and': [{'$eq': ['$$this.label', 'airplane']},
{'$gt': ['$$this.confidence', 0.8]}]}}}}},
{'$match': {'$expr': {'$gt': [{'$size': {'$ifNull': ['$predictions.detections',
[]]}},
0]}}}]
```
You can also inspect the underlying MongoDB document for a sample with the to_mongo() method.
You can even create a DatasetView by applying a MongoDB aggregation pipeline directly to your dataset using the Mongo view stage and the add_stage() method:
```python
# Sort by the number of objects in the `ground_truth` field
stage = fo.Mongo([
{
"$addFields": {
"_sort_field": {
"$size": {"$ifNull": ["$ground_truth.detections", []]}
}
}
},
{"$sort": {"_sort_field": -1}},
{"$project": {"_sort_field": False}},
])
view = dataset.add_stage(stage)
```
## Vector Search With FiftyOne and MongoDB Atlas
![Searching images with text in the FiftyOne App using multimodal vector embeddings and a MongoDB Atlas Vector Search backend.][3]
Vector search is a technique for indexing unstructured data like text and images by representing them with high-dimensional numerical vectors called *embeddings*, generated from a machine learning model. This makes the unstructured data *searchable*, as inputs can be compared and assigned similarity scores based on the alignment between their embedding vectors. The indexing and searching of these vectors are efficiently performed by purpose-built vector databases like MongoDB Atlas Vector Search.
Vector search is an essential ingredient in retrieval-augmented generation (RAG) pipelines for LLMs. Additionally, it enables a plethora of visual and multimodal applications in data understanding, like finding similar images, searching for objects within your images, and even semantically searching your visual data using natural language.
Now, with the integration between FiftyOne and MongoDB Atlas, it is easier than ever to apply vector search to your visual data! When you use FiftyOne and MongoDB Atlas, your traditional queries and vector search queries are connected by the same underlying data infrastructure. This streamlines development, leaving you with fewer services to manage and less time spent on tedious ETL tasks. Just as importantly, when you mix and match traditional queries with vector search queries, MongoDB can optimize efficiency over the entire aggregation pipeline.
### Connecting FiftyOne and MongoDB Atlas
To get started, first configure a MongoDB Atlas cluster:
```
export FIFTYONE_DATABASE_NAME=fiftyone
export FIFTYONE_DATABASE_URI='mongodb+srv://$USERNAME:$PASSWORD@fiftyone.XXXXXX.mongodb.net/?retryWrites=true&w=majority'
```
Then, set MongoDB Atlas as your default vector search back end:
```
export FIFTYONE_BRAIN_DEFAULT_SIMILARITY_BACKEND=mongodb
```
### Generating the similarity index
You can then create a similarity index on your dataset (or dataset view) by using the FiftyOne Brain’s `compute_similarity()` method. To do so, you can provide any of the following:
1. An array of embeddings for your samples
2. The name of a field on your samples containing embeddings
3. The name of a model from the FiftyOne Model Zoo (CLIP, OpenCLIP, DINOv2, etc.), to use to generate embeddings
4. A `fiftyone.Model` instance to use to generate embeddings
5. A Hugging Face `transformers` model to use to generate embeddings
For more information on these options, check out the documentation for compute_similarity().
```python
import fiftyone.brain as fob
fob.compute_similarity(
dataset,
model="clip-vit-base32-torch", ### Use a CLIP model
brain_key="your_key",
embeddings='clip_embeddings',
)
```
When you generate the similarity index, you can also pass in configuration parameters for the MongoDB Atlas Vector Search index: the `index_name` and what `metric` to use to measure similarity between vectors.
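For instance, a sketch of passing those options when creating the index (the index name and metric values below are illustrative):
```python
import fiftyone.brain as fob

# Illustrative: configure the Atlas Vector Search index name and distance metric.
fob.compute_similarity(
    dataset,
    model="clip-vit-base32-torch",
    brain_key="your_key",
    index_name="fiftyone_clip_index",  # assumed index name
    metric="cosine",
)
```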
### Sorting by similarity
Once you have run `compute_similarity()` to generate the index, you can sort by similarity using the MongoDB Atlas Vector Search engine with the `sort_by_similarity()` view stage. In Python, you can specify the sample (whose image) you want to find the most similar images to by passing in the ID of the sample:
```python
## get ID of third sample
query = dataset.skip(2).first().id
## get 25 most similar images
view = dataset.sort_by_similarity(query, k=25, brain_key="your_key")
session = fo.launch_app(view)
```
If you only have one similarity index on your dataset, you don’t need to specify the `brain_key`.
We can achieve the same result with UI alone by selecting an image and then pressing the button with the image icon in the menu bar:
![Sorting by image similarity in the FiftyOne App][5]
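Because the similarity index above was generated with a CLIP model, which can embed text as well as images, you can also query your dataset with natural language. A quick sketch (the prompt is arbitrary):
```python
# Semantic search with a natural language prompt; this works because the
# index was built with a CLIP model that can embed text prompts.
view = dataset.sort_by_similarity(
    "airplanes on the runway",  # arbitrary example prompt
    k=25,
    brain_key="your_key",
)
session.view = view
```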
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltb7504ea028d24cc7/65df8d81eef4e3804a1e6598/1.gif
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt1a282069dd09ffbf/65df8d976c65d7a87487e309/2.gif
[3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt64eb99496c21ea9f/65df8db7c59852e860f6bb3a/3.gif
[4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt5d0148a55738e9bf/65df8dd3eef4e382751e659f/4.gif
[5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt27b44a369441ecd8/65df8de5ffa94a72a33d40fb/5.gif | md | {
"tags": [
"Python",
"AI"
],
"pageDescription": "",
"contentType": "Tutorial"
} | Efficiently Managing and Querying Visual Data With MongoDB Atlas Vector Search and FiftyOne | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/realm/atlas-device-sdks-with-dotnet-maui | created | # Online/Offline Data-Capable Cross-Platform Apps with MongoDB Atlas, Atlas Device SDKs and .NET MAUI
In a world of always-on, always-connected devices, it is more important than ever that apps function in a way that gives a user a good experience. But as well as the users, developers matter too. We want to be able to feel productive and focus on delivery and innovation, not solving common problems.
In this article, we will look at how you can mix .NET MAUI with MongoDB’s Atlas App Services, including the Atlas Device SDKs mobile database, for online/offline-capable apps without the pain of coding for network handling and errors.
## What are Atlas Device SDKs?
Atlas Device SDKs, formerly Realm, are an alternative to SQLite that takes advantage of MongoDB’s document data model. They provide a mobile-first database that has been designed for modern data-driven applications. Although the focus of this article is the mobile side of Atlas Device SDKs, they also support the building of web, desktop, and IoT apps.
Atlas Device SDKs have some great features that save a lot of time as a developer. It uses an object-oriented data model so you can work directly with the native objects without needing any Object Relational Mappers (ORMs) or Data Access Objects (DAO). This also means it is simple to start working with and scales well.
Plus, Atlas Device SDKs are part of the Atlas App Services suite of products that you get access to via the SDK. This means that Realm also has automatic access to a built-in, device-to-cloud sync feature. It uses a local database stored on the device to allow for always-on functionality. MongoDB also has Atlas, a document database as a service in the cloud, offering many benefits such as resilience, security, and scaling. The great thing about device sync with App Services is it links to a cloud-hosted MongoDB Atlas cluster, automatically taking care of syncing between them, including in the event of changes in network connectivity. By taking advantage of Atlas, you can share data between multiple devices, users, and the back ends using the same database cluster.
## Can you use Atlas Device SDKs with .NET MAUI?
In short, yes! There is a .NET SDK available that supports .NET, MAUI (including Desktop), Universal Windows Platform (UWP), and even Unity.
In fact, Maddy Montaquila (Senior PM for MAUI at Microsoft) and I got talking about fun project ideas and came up with HouseMovingAssistant, an app built using .NET MAUI and Atlas Device SDKs, for tracking tasks related to moving house.
It takes advantage of all the great features of Atlas Device SDKs and App Services, including device sync, data partitioning based on the logged-in user, and authentication to handle the logging in and out.
It even uses another MongoDB feature, Charts, which allows for great visualizations of data in your Atlas cluster, without having to use any complex graphing libraries!
## Code ##
The actual code for working with Atlas Device SDKs is very simple and straightforward. This article isn't a full tutorial, but we will use code snippets to show how simple it is. If you want to see the full code for the application, you can find it on GitHub.
> Note that despite the product update name, the Realm name is still used in the library name and code for now so you will see references to Realm throughout the next sections.
### Initialization
```csharp
RealmApp = Realms.Sync.App.Create(AppConfig.RealmAppId);
```
This code creates your Realm Sync App and lives inside of App.Xaml.cs.
```csharp
PartitionSyncConfiguration config = new PartitionSyncConfiguration($"{App.RealmApp.CurrentUser.Id}", App.RealmApp.CurrentUser); return Realm.GetInstance(config);
```
The code above is part of an initialization method and uses the RealmApp from earlier to create the connection to your app inside of App Services. This gives you access to features such as authentication (and more), as well as your Atlas data.
### Log in/create an account ###
Working with authentication is equally as simple. Creating an account is as easy as picking an authentication type and passing the required credentials.
The most simple way is email and password auth using details entered in a form in your mobile app.
```csharp
await App.RealmApp.EmailPasswordAuth.RegisterUserAsync(EmailText, PasswordText);
```
Logging in, too, is one call.
```csharp
var user = await App.RealmApp.LogInAsync(Credentials.EmailPassword(EmailText, PasswordText));
```
Of course, you can add conditional handling around this, such as checking if there is already a user object available and combining that with navigation built into MAUI, such as Shell, to simply skip logging in if the user is already logged in:
```csharp
if (user != null)
{
await AppShell.Current.GoToAsync("///Main");
}
```
### Model
As mentioned earlier in the article, Atlas Device SDKs can work with simple C# objects with properties, and use those as fields in your document, handling mapping between object and document.
One example of this is the MovingTask object, which represents a moving task. Below is a snippet of part of the MovingTask.cs model object.
```csharp
[PrimaryKey]
[MapTo("_id")]
public ObjectId Id { get; set; } = ObjectId.GenerateNewId();
[MapTo("owner")]
public string Owner { get; set; }
[MapTo("name")]
[Required]
public string Name { get; set; }
[MapTo("_partition")]
[Required]
public string Partition { get; set; }
[MapTo("status")]
[Required]
public string Status { get; set; }
[MapTo("createdAt")]
public DateTimeOffset CreatedAt { get; set; }
```
It uses standard properties, with some additional attributes from the Realm SDK, which mark fields as required and also say what fields they map to in the document. This is great for handling different upper and lower case naming conventions, differing data types, or even if you wanted to use a totally different name in your documents versus your code, for any reason.
You will notice that the last property uses the DateTimeOffset data type, which is part of C#. This isn’t available as a data type in a MongoDB document, but the SDK is able to handle converting this to and from a supported type without requiring any manual code, which is super powerful.
## Do Atlas Device SDKs support MVVM?
Absolutely. It fully supports INotifyPropertyChanged events, meaning you don’t have to worry about whether the data is up to date. You can trust that it is. This support for events means that you don’t need to have an extra layer between your viewmodel and your database if you don’t want to.
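For example, because Realm objects raise standard .NET change notifications, you can listen to them directly. A minimal illustration:
```csharp
// Realm objects implement INotifyPropertyChanged, so changes made anywhere,
// including via Device Sync, raise standard .NET property change events.
task.PropertyChanged += (sender, e) =>
{
    Console.WriteLine($"Property {e.PropertyName} changed on task {task.Name}");
};
```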
As of Realm 10.18.0 (as it was known at the time), there is even support for Source Generators, making it even easier to work with Atlas Device SDKs and MVVM applications.
HouseMovingAssistant fully takes advantage of Source Generators. In fact, the MovingTask model that we saw earlier implements IRealmObject, which is what brings in source generation to your models.
The list of moving tasks visible on the page uses a standard IEnumerable type, fully supported by CollectionView in MAUI.
```csharp
[ObservableProperty]
IEnumerable<MovingTask> movingTasks;
```
Populating that list of tasks is then easy thanks to LINQ support.
```csharp
MovingTasks = realm.All<MovingTask>().OrderBy(task => task.CreatedAt);
```
## What else should I know?
There are a couple of extra things to know about working with Atlas Device SDKs from your .NET MAUI applications.
### Services
Although as discussed above, you can easily and safely talk directly to the database (via the SDK) from your viewmodel, it is good practice to have an additional service class. This could be in a different/shared project that is used by other applications that want to talk to Atlas, or within your application for an added abstraction.
In HouseMovingAssistant, there is a RealmDatabaseService.cs class which provides a method for fetching the Realm instance. This is because you only want one instance of your Realm at a time, so it is better to have this as a public method in the service.
```csharp
public static Realm GetRealm()
{
PartitionSyncConfiguration config = new PartitionSyncConfiguration($"{App.RealmApp.CurrentUser.Id}", App.RealmApp.CurrentUser);
return Realm.GetInstance(config);
}
```
### Transactions
Because of the way Atlas Device SDKs work under the hood, any operation that modifies data, be it create, update, or delete, is done inside what is called a write transaction. The use of transactions means that actions are grouped together as one and if one of those fails, the whole thing fails.
Carrying out a transaction inside the Realm .NET SDK is super easy. We use it in HouseMovingAssistant for many features, including creating a new task, updating an existing task, or deleting one.
```csharp
var task =
new MovingTask
{
Name = MovingTaskEntryText,
Partition = App.RealmApp.CurrentUser.Id,
Status = MovingTask.TaskStatus.Open.ToString(),
Owner = App.RealmApp.CurrentUser.Profile.Email,
CreatedAt = DateTimeOffset.UtcNow
};
realm.Write(() =>
{
realm.Add(task);
});
```
The code above creates a task using the model we saw earlier and then inside a write transaction, adds that object to the Realm database, which will in turn update the Atlas cluster it is connected to. This is a great example of how you don’t need an ORM, as we create an object from our model class and can directly add it, without needing to do anything extra.
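Updating or deleting existing objects works the same way: wrap the change in a write transaction. A short sketch (the `"Completed"` status string below is just an example value; only `Open` is shown in the model earlier):
```csharp
realm.Write(() =>
{
    // Update a managed object in place; Sync propagates the change to Atlas
    task.Status = "Completed"; // example status value, assumed for this sketch
});

realm.Write(() =>
{
    // Remove the object from the Realm (and, via Sync, from Atlas)
    realm.Remove(task);
});
```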
## Summary
In this article, we have gone on a whistle stop tour of .NET MAUI with Atlas Device SDKs (formerly Realm), and how you can quickly get up and running with a data capable application, with online/offline support and no need for an ORM.
There is so much more you can do with Atlas Device SDKs, MongoDB Atlas, and the App Services platform. A great article to read next is on advanced data modelling with Realm and .NET by the lead engineer for the Atlas Device SDKs .NET team, Nikola Irinchev.
You can get started today by signing up to an Atlas account and discovering the world of Realm, Atlas and Atlas App Services! | md | {
"tags": [
"Realm",
"C#",
".NET",
"Mobile"
],
"pageDescription": "A tutorial showing how to get started with Atlas Device SDKs, MongoDB Atlas and .NET MAUI",
"contentType": "Tutorial"
} | Online/Offline Data-Capable Cross-Platform Apps with MongoDB Atlas, Atlas Device SDKs and .NET MAUI | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/realm/realm-swiftui-property-wrappers-mvi-meetup | created | # Realm SwiftUI Property wrappers and MVI architecture Meetup
Didn't get a chance to attend the Realm SwiftUI Property wrappers and
MVI architecture Meetup? Don't worry, we recorded the session and you
can now watch it at your leisure to get you caught up.
>Realm SwiftUI Property wrappers and MVI architecture
[:youtube]{vid=j72YIxJw4Es}
In this second installment of our SwiftUI meetup series, Jason Flax, the lead for Realm's iOS team, returns to dive into more advanced app architectures using SwiftUI and Realm. We will dive into what property wrappers SwiftUI provides and how they integrate with Realm, navigation and how to pass state between views, and where to keep your business logic in a MVI architecture.
> **Build better mobile apps with Atlas Device Sync**: Atlas Device Sync is a fully-managed mobile backend-as-a-service. Leverage out-of-the-box infrastructure, data synchronization capabilities, built-in network handling, and much more to quickly launch enterprise-grade mobile apps. Get started now by building: Deploy Sample for Free!
Note - If you missed our first SwiftUI & Realm talk, you can review it here before the talk and get all your questions answered.
In this meetup, Jason spends about 35 minutes on
- StateObject, ObservableObject, EnvironmentObject
- Navigating between Views with state
- Business Logic and Model-View-Intent Best Practices
And then we have a full 25 minutes of live Q&A with our Community. For those of you who prefer to read, below we have a full transcript of the meetup too. As this is verbatim, please excuse any typos or punctuation errors!
Throughout 2021, our Realm Global User Group will be planning many more online events to help developers experience how Realm makes data stunningly easy to work with. So you don't miss out in the future, join our Realm Global Community and you can keep updated with everything we have going on with events, hackathons, office hours, and (virtual) meetups. Stay tuned to find out more in the coming weeks and months.
To learn more, ask questions, leave feedback, or simply connect with other Realm developers, visit our community forums. Come to learn. Stay to connect.
## Transcript
**Jason Flax**: Great. So, as I said, I'm Jason Flax. I'm the lead engineer of the Realm Cocoa team. Potentially soon to be named the Realm Swift team. But I will not go into that. It's raining outside, but it smells nice. So, let's begin the presentation. So, here's today's agenda. First let's go over. What is an architecture? It's a very loaded word. It means a number of things, for developers of any level, it's an important term to have down pat. What are the common architectures? There's going to be a lot of abbreviations that you hear today. How does SwiftUI change the playing field? SwiftUI's two-way data-binding makes the previous architecture somewhat moot in certain cases. And I'm here to talk about that. And comparing the architectures and pretty much injecting into this, from my professional opinion what the most logical architecture to use with SwiftUI is. If there is time, I have some bonus slides on networking and testing using the various architectures.
**Jason Flax**: But if there is not, I will defer to the Q&A where you all get to ask a bunch of questions that Shane had enumerated before. So, let us begin. What is an architecture? x86, PowerPC, ARM. No, it's not, this is not, we're not talking about hardware architecture here. Architecture is short for an architectural pattern. In my opinion, hardware is probably too strong of a word or architecture is too strong of a word. It's just a term to better contextualize how data is displayed and consumed it really helps you organize your code. In certain cases, it enhances testability. In certain cases, it actually makes you have to test more code. Basically the patterns provide guidelines and a unified or United vocabulary to better organize the software application.
**Jason Flax**: If you just threw all of your code onto a view, that would be a giant mess of spaghetti code. And if you had a team of 20 people all working on that, it would be fairly miserable and at a minimum highly disorganized. The images here, just MVC, MVVM, Viper, MVI. These are the main ones I'm going to talk about today. There are a number of architectures I won't really be touching on. I think the notable missing one from this talk will be CLEAN architecture, which I know is becoming somewhat big but I can address that later when we talk or in the Q&A.
**Jason Flax**: Let's go over some of those common architectures. So, from the horse's mouth, the horse here being Apple the structure of UIKit apps is based on the Model-View-Controller design pattern, wherein objects are divided by their purpose. Model objects manage the app's data and business logic. View objects provide the visual representation of your data. Controller objects acts as a bridge between your model and view objects, moving data between them at appropriate times.
**Jason Flax**: Going over this, the user uses the controller by interacting with the view, the view talks to the controller, generally controllers are going to be one-to-one with the view, the controller then manipulates the data model, which in this case generally speaking would actually just be your data structure/the data access layer, which could be core data or Realm. The model then updates the view through the controller is displayed on the view the user sees it, interacts with it goes in a big circle. This is a change from the original MVC model. I know there are a few people in this, attending right now that could definitely go into a lot more history than I can. But the original intent was basically all the data and logic was in the model.
**Jason Flax**: The controller was just for capturing user input and passing it to the model. And the communication was strictly user to controller, to model, to view. With no data flowing the other way this was like the OG unidirectional data flow. But over time as the MVC model evolved controllers got heavier and heavier and heavier. And so what you ended up with is MVC evolving into these other frameworks, such as MVVM. MVVM, Viper, CLEAN. They didn't come about out of nowhere. People started having issues with MVC, their apps didn't scale well, their code didn't scale well. And so what came about from that was new architectures or architectural design patterns.
**Jason Flax**: Let's go over Model-View-ViewModel. It's a bit of a mouthful. So, in MVVM the business logic is abstracting to an object called a ViewModel. The ViewModel is in charge of providing data to the view and updating the view when data changes, traditionally this is a good way to separate business logic from the view controller and offer a much cleaner way to test your code. So, generally here, the ViewModel is going to be one-to-one with the model as opposed to the view and what the ViewModel ends up being is this layer of business logic and presentation logic. And that's an important distinction from what the controller previously did as the controller was more associated with the view and less so the model. So, what you end up with, and this will be the pattern as I go through each of the architectures, you're going to end up with these smaller and smaller pieces of bite-sized code. So, in this case, maybe you have more models than views. So, MVVM makes more sense.
**Jason Flax**: So ViewModel is responsible for persistence, networking and business logic. ViewModel is going to be your data access layer, which is awkward when it comes to something like Realm, since Realm is the data access layer. I will dig more into that. But with SwiftUI you end up with a few extra bits that don't really make much sense anymore. This is just a quick diagram showing the data flow with MVVM. The ViewModel binds the view, the user inputs commands or intent or actions or whatever we want to say. The commands go through to the ViewModel, which effectively filters and calculates what needs to be updated on the model, it then reads back from the model updates the view does that in sort of a circular pattern.
**Jason Flax**: Let's go over Viper. So, Viper makes the pieces even smaller. It is a design pattern used to separate logic for each specific module of your app. So, the view is your SwiftUI view. A View owns a Presenter and a Router. The Interactor, that is where your business logic for your module lives, the interactor talks to your entity and other services such as networking. I'll get back to this in a second. The presenter owns the interactor and is in charge of delivering updates to the view when there is new data to display or an event is triggered. So, in this case, the breakdown, if we're associating concepts here, the presenter is associated with the view. It's closer to your view controller and the interactor is more associated with the model. So, it's closer to your ViewModel. So, now we're like mixing all these concepts together, but breaking things and separating things, into smaller parts.
**Jason Flax**: In this case, the data flow is going to be a bit different. Your View is going to interact with the Presenter, which interacts with the Interactor, which interacts with the Entity. So you end up with this sort of onion that you're slowly peeling back. The Entity is your data model. I'm not entirely sure why they didn't call it model my guess is that Viper sounds a lot better than \[inaudibile 00:06:46\] doesn't really work. The router handles the creation of a View for a particular destination. This is a weird one. I'll touch on it a couple of times in the talk.
**Jason Flax**: Routers made more sense when the view flow was executed by storyboards and segues and nibs and all that kind of thing. Now it's SwiftUI because it's all programmatic, routers don't really make as much sense. That said maybe in a complex enough application, I could be convinced that a router might elucidate the flow of use, but at the moment I haven't seen anything yet. But that is what the router is meant to do anyway, this is a brief diagram on how Viper works, sorry. So, again the View owns and sends actions to the Presenter, the Presenter owns and asks for updates and sends updates to the Interactor and the Interactor actually manipulates the data, it edits the entity, it contains the data access layer, it'll save things and load things and will update things.
**Jason Flax**: And then the Interactor will notify the Presenter, which then notifies them View. As you can see this ... For anybody that's actually used SwiftUI, this is immediately not going to make much sense considering the way that you actually buying data to Views. MVI, this is kind of our end destination, this is where we want to end up at the end of our journey. MVI is fairly simple, to be honest, it's closer to an ideal, it's closer to a concept than an architecture to be honest. You have user, user has intent that intent changes the model, that model changes the view user sees that and they can keep acting on it. This has not really been possible previously UIKit was fairly complex, apps grow in complexity. Having such a simple thing would not have been enough in previous frameworks, especially uni-directional ones where a circular pattern like this doesn't really make sense.
**Jason Flax**: But now with SwiftUI there's so much abstracted away for us, especially with Realm, especially with live objects, especially with property wrappers that update the view automatically under the hood, that update your Realm objects automatically under the hood. We can finally achieve this, which is what I'm going to be getting at in this talk. So, let's go over some of those common concepts. Throwing around terms like View and Presenter and Model is really easy to do if you're familiar, but just in case anybody isn't. The View is what the user sees and interacts with; all architectural patterns have a view. It is the monitor. It is your phone. It is your whatever. It is the thing that the user touches that the user plays with.
**Jason Flax**: The User, the person behind the screen, the user actions can be defined as intent or interactions, actions trigger view updates which can trigger business logic. And I do feel the need to explicitly say that one thing that is also missing from this presentation not all actions, not all intent will be user-driven there are things that can be triggered by timers things that can be triggered by network requests things that don't necessarily line up perfectly with the model. That said, I felt comfortable leaving out of the top because it actually doesn't affect your code that much, at least if you're using MVI.
**Jason Flax**: The model. So, this term gets a bit complicated basically it's the central components of the pattern. It's the application status structure, independent of the user interface it manages the data, logic and rules of the application. The reason that this gets a bit wonky when describing it is that oftentimes people just speak about the model as if it was the data structures themselves as if it was my struct who with fields, bars, whatever. That is the model. It's also an object. It's also potentially an instance of an object. So, hopefully over this talk, I can better elaborate on what the model actually is.
**Jason Flax**: The Presenter. So, this is the presenter of the ViewModel, whatever you want to call it. It calculates and projects what data needs to actually be displayed on the view. As more often referred to as the ViewModel, the presenter. Again, frameworks with two-way data-binding obviate the need for this. It is awkward to go to a presenter when you don't necessarily need to. So, let's get the meat of this, how does SwiftUI actually change the playing field? For starters
there is no controller. It does not really make sense to have a controller. It was very much a UIKit concept ironically or coincidentally by eliminating the controller this graphic actually ends up looking a lot like MVI. The user manipulates the model through intent, the model updates, the view, the user sees the view goes around in a big circle. I will touch on that a bit more later. But MVC really doesn't make as much sense anymore if you consider it as the new school MVC.
**Jason Flax**: MVVM, added a few nice screen arrows here. The new model doesn't really make sense anymore. Again, this is the presentation layer. When you can bind directly to the data, all you're doing by creating a ViewModel with SwiftUI in a two-way data-binding framework is shifting responsibility in a way that creates more code to test and doesn't do much else especially these days where you can just throw that business logic on the model. If you were doing traditional MVVM with SwiftUI, the green arrows would be going from, View to ViewModel and you could create the same relationship. It's just extra code it's boiler plate. Viper lot of confusion here. Again not really sure that the router makes a lot of sense. I can be convinced otherwise. Presentation view, the presenter doesn't really again makes sense it's basically your ViewModel. Interactor also doesn't make sense. It is again, because the view is directly interacting with the model itself or the entity. This piece is again, kind of like, eh, what are you doing here?
**Jason Flax**: There's also an element here that as you keep building these blocks with Viper and it's both a strength and weakness of Viper. So, the cool thing about it you end up with all these cool little pieces to test with, but if the 10,000 line controller is spaghetti code. 10,000 lines of Viper code is ravioli code. Like these little pieces end up being overwhelming in themselves and a lot of them do nothing but control a lot at the same time. We'll get more into that when I show the actual code itself. And here's our golden MVI nothing changes. This is the beauty of it. This is the simplicity of it. User interacts, changes the model changes the view ad infinitum.
**Jason Flax**: Now, let's actually compare these with code. So, the app that we will be looking at today, Apple came out with a Scrumdinger app to show offs with UI. It is basically an app that lets you create scrums. If you're not familiar with the concept of scrum it's a meeting, it's a stand-up where people briefly chat and update and so on and so forth. And I could go into more detail, but that's not what this talk is about. So, we took their app and we added Realm to it and we also then
basically wrote it three different times, one in Viper, one in MVVM and one in MVI. This will allow us to show you what works and what doesn't work. Obviously it's going to be slightly biased towards MVI. And of course I do feel the need to disclaim that this is a simple application. So, there's definitely going to be questions of like, "How does this scale? How does MVI actually scale?" I can address those if they are asked, if not, it should become pretty clear why I'm pushing MVI as the go-to thing for SwiftUI plus Realm plus the Realm Property Wrappers.
**Jason Flax**: Let's compare the models. So, in Viper, this is your entity. So, it contains the scrums so the DailyScrum is going to be our like core data structure here. It's going to contain an ID. It's going to contain a title, it's going to contain attendees and things like that. But the main thing I'm trying to show with this slide is that the entity loads from the Realm, it loads the DailyScrum, which for those that have used Realm, you already know this is a bit awkward because everything with Realm is live. Those DailyScrum, that objects Realm.objects(DailyScrum.self).map. So, if you were to just save out those objects, those results are always updating, those results read and write, directly from the persistent store. So, loading from the database is already an awkward step.
**Jason Flax**: Otherwise, you can push new scrums. You can update scrums. Again, what this is doing is creating a data access layer for a database that is already the data access layer. Either way this is the idea of Viper. You are creating these abstractions to better organize how things are updated, pushed, loaded, et cetera. MVVM the model is still a more classically the data structure itself. I will show the actual ViewModel in another slide. This should look a bit more similar to probably what you'd be used to. Actually there's probably a mistake the color shouldn't be there because that should be in the ViewModel. But for the most part, these are your properties.
**Jason Flax**: The big difference here between MVI will be the fact that again you're creating this access layer around the Realm where you're going to actually pass in the ViewModel to update the scrum itself and then right to the Realm. It's even possible that depending on your interpretation of MVVM, which again is something I should have actually said earlier in the talk, a lot of these architectures end up being up for interpretation. There is a level of subjectivity and when you're on a team of 20 people trying to all architect around the same concepts, you might end up with some wishy-washy ViewModels and models and things like that.
**Jason Flax**: MVI, this is your model that is the Realm database, and I'm not actually being facetious here. If you consider Realm to be your data access layer, to be your persistent storage, to be the thing that actually syncs data, it is. It holds all your data, It maintains all of the state, which is kind of what the model is supposed to do to. It maintain state, it maintains the entire, flow and state of your application. That is what Realm can be if you use it as it's intended to be used. Let's go over what the View actually look like depending on your architecture. Spoilers, they look very similar. It's what's happening under the hood that actually really changes the game. So, in this case you have these ScrumsView.
**Jason Flax**: The object that you have on the View, you'll notice this does not, even though this app uses Realm is not using the Realm property wrappers because you are presenting the Presenter kind of makes sense, I suppose. You're going to show the scrums from the presenter and you're going to pass around that presenter around this view, to be able to interact with the actual underlying model, which is the DailyScrum class. You'll also notice at the bottom, which is I suppose a feature of Viper, each view has a presenter and potentially each model has an interactor.
**Jason Flax**: For the EditView. So, I have the scrum app, I want to edit the scrum I have, I want to change the title or the color of it or something like that. I want to add attendees. For Viper, you have to pass in a new presenter, you have to pass in a new interactor and these are going to be these bite-sized pieces that again interact with the View or model, depending on which thing you're talking about. How am I doing on time? Cool. So, this is the actual like EditView now that I'm talking about. So, the EditView has that presenter, that presenter is going to basically give the view all of the data. So, lengthInminutes title, color, attendees, things like that. They're all coming off the presenter. You can see over here where I'm circling, you would save off the presenter as well. So, when you're done editing this view, you save on the presenter, that presenter is actually going to then speak to the interactor and that interactor is going to interact with their database and actually say about that data.
**Jason Flax**: Again, the reason that this is a bit awkward when using Realm and SwiftUI at least is that because you have live objects with Realm, having intermediary layers is unnecessary abstraction. So, this is MVVM and now we actually have the video of the View as well on the right side. So, instead of a Presenter, you have a ViewModel right now you're seeing all the terms come together. You're going to read the ViewModels off of the ViewModel for each view. So, for the detailed view, you're going to pass in the detail ViewModel for the EditView, you're going to pass in the EditViewModel and the set ViewModel is going to take a scrum and it's going to read and write the data into that scrum.
**Jason Flax**: This is for MVI now. So, MVI is going to look a little different. The view code is slightly larger but there are no abstractions beyond this. So, in this case, you have your Realm property wrapper, you have ObservedResults. This is going to be all of the DailyScrum in your Realm database. It is going to live update. You are not going to know that it's updating, but it will notify the view that it's updating. So, the DailyScrum was added, say, you have Realm sync, a DailyScrum is added from somebody else's phone, you just update. There is no other code you have to write for that. Below that you have a StateRealmObject, which is new scrum data. So, in this case, this is a special case for forms, which is a very common use case, that scrum data is going to be passed into the EditView and it's going to be operated on directly.
**Jason Flax**: So the main added code here, or the main difference in code is this bit right here, where we actually add the scrum data to the observed results. So, somebody, following MVVM or Viper religiously might say, that's terrible. "Why are you doing business logic in a view like that? And why would that happen?" This is a direct result of user action. A user hits the done button or the add button. This needs to happen afterwards technically, if you really wanted to, you could extract this out to the model itself. You could put this on an instance of new scrum data and have it write itself to the Realm that is totally valid. I've seen people do that with MVI, SwiftUI and Realm. In this case, it's simple enough where those layers of abstraction don't actually add anything beneficial.
**Jason Flax**: And for testing, this would be, you'd want to test this from a UI test anyway. And the reason for that is that we test that the scrum data is added to the Realm, we being Realm. I suppose there's a level of trust you have to have with Realm here that we actually are testing our code, we promise that we are. But that's the idea is that all of this appending data, all of this adding to the database, the data access layer, that's tested by us. You don't have to worry about that. So, yeah. Why is this view larger than the MVVM view? Because the interactive logic has been shifted to the ViewModel in MVVM, and there's no extra logic here, it's all there. But for MVVM again, it's all been pushed back, the responsibility has been shifted slightly. MVI, this is actually what the EditView would look like. There's no abstractions here. You have your StateRealmObject, which is the DailyScrum that's been passed in from the previous view. And you bind the title directly to that.
**Jason Flax**: So if you look at the right side video here, as I changed the Cocoa team to Swift team Scrum so mouthful that is updating to the Realm that is persisting and if you were using Realm sync, that would be syncing as well. But there is no other logic here that is just handled. Hopefully at this point you would be asking yourself why add the extra logic. I can't give you a good reason, which is the whole point of this talk. So, let's go over the ViewModel and Persistence or dig in a bit deeper. So, this is our actual Realm Object. This is the basic object that we have. It's the POJO, the POSO whatever you want to call it. It is your plain old Swift object.
**Jason Flax**: In this case, it is also a Realm Object has an ID. That ID would largely be for syncing if you had say, if you weren't using Realm Sync and you just had a REST API, it would be your way of identifying the Scrums. It has a title, which is the name of it of course, a list of attendees, which in this case for this simple use case, it's just strings it's names it's whatever that would be length of the scrum in minutes and the color components, which depending on which thing you're using is actually pretty cool. And this is something that I probably won't have time to fully dig into, but you can use Realm to manage view state. You can use Realm to manage the app state if say you're scrolling in a view and you're at a certain point and then you present another view over that maybe it's a model or something, and the phone dies, that sucks.
**Jason Flax**: You can open up the app when the phone turns back on, if they've charged it of course, and you can bring them back to that exact state, if it's persistent in the Realm. In the case of color components, the cool thing there is that you can have a computer variable, which I'll show after that will display directly to the view as a color. And with that binding, the view can also then change that color, the model can break it down into its components and then store that in the Realm. Let's actually skip the presenter in that case, because we were actually on the EditView, which I think is the more interesting view to talk about. So, this is the edit presenter for Viper.
**Jason Flax**: This is your ViewModel, this is your presenter. And as you can see here, it owns the Interactor and it's going to modify each of these fields as they're modified. It's going to fetch them. It's going to modify them. It's going to send updates to the view because it can't take advantage of Realms update. It can't take advantage of Realms observe or the Property wrappers or anything like that because you are creating this, separation of layers. In here with colors it's going to grab everything. And when you actually add new attendees, it's going to have to do that as well. So, as you can see, it just breaks everything down.
**Jason Flax**: And this is the Interactor that's actually now going to talk to the model. This is where your business logic is. This is where you could validate to make sure that say the title's not empty, that the attendees are not empty, that the length of time is not negative or something like that. And this is also where you'd save it. This would be the router, which again I didn't really know where to put this. It doesn't fit in with any other architecture but this is how you would present views with Viper
**Jason Flax**: And for anybody that's used SwiftUI you might be able to see that this is a bit odd. So, this would be your top level ViewModel for MVVM. In this case, you actually can somewhat take advantage of Realm. If you wanted to. Again, it depends on how by the book you are approaching the architecture as, so you have all your scrums there, you have what is, and isn't presented. You have all your ViewModels there as well, which are probably going to be derived from the result of scrums. And it's going to manage the Realm. It's going to own the Realm. It's going to own a network service potentially. You're going to add scrums through here. You're going to fetch scrums through here. It controls everything. It is the layer that blocks the data layer. It is the data access layer.
**Jason Flax**: And this is going to be your ViewModel for the actual DailyScrum. This is the presentation layer. This is where you're seeing. So you get the scrum title that you change the scrum title, and you get the scrum within minutes you change the scrum length in minutes. You validate it from here, you can add it from here. You can modify it from here. It also depends on the view. But to avoid repeating myself and this would be the EditView with the ViewModel. So, instead of having the Realm object here, as you saw with MBI, you'd have the ViewModel. The two-way data-binding is actually going to change the model. And then at the end you can update. So, things don't need to be live necessarily. And again the weird thing here is that with Realms live objects, why would you want to use the ViewModel when you have two-way data-binding?
**Jason Flax**: And just to ... My laptop fan just got very loud. This is the path to persistence with MVVM as well. So, user intent, user interaction they modify the view in whatever way they do it. it goes through the presenter, which is the DailyScrum ViewModel. This is specifically coming from the EditView. It goes to the presenter. It changes the DailyScrum model, which then interacts and persists to the Realm. Given anybody that's used Realm again to repeat myself, this is a strange way to use Realm considering what Realm is supposed to be as your persistent storage for live objects.
**Jason Flax**: MVI, what is your presentation layer? There's no VM here. There's no extra letters in here. So, what do we actually do? In MVI, the presentation layer is an instance of your data model. It is an instance of these simple structures. So in this case, this is the actual DailyScrum model. You can see on here, the color thing that I was talking about before. This color variable is going to bind directly to the view and when the view updates, it will update the model. It will persist to the Realm. It will sync to MongoDB Realm. It will then get the color back showed in the view, et cetera. And for business logic, that's going to be on the instance. This could be an extension. It could be in an extension. It could be in a different file. There's ways to organize that obviate the need for these previously needed abstractions in UIKit.
**Jason Flax**: So, this is an actual implementation of it, which I showed earlier. You have your StateRealmObject, which is the auto updating magic property wrapper in SwiftUI. You have your DailyScrum model and instance has been passed in here. So, when we actually write the title down, type the title, I suppose. Because it's on the phone, it is going to update sync persist, et cetera. MVI is a much shorter path to persistence because we are binding directly to the view. User makes an action, action modifies the view, view modifies the actual state. You modifies the Realm, modifies the DailyScrum syncs et cetera.
**Jason Flax**: Why MVI is NOT scary, NOT is in all capital letters because I'm super serious, guys. So, MVI is lightweight. It's nearly a concept as opposed to a by the book architecture. There are standards and practices. You should definitely follow: your business logic should be on the actual instance of the data model. The two-way data-bindings should be happening on the view itself. There's some wiggle room, but not really, but the implication is that the View is entirely data-driven. It has zero state of its own, bar a few dangling exceptions, like things being presented, like views being presented or scroll position or things like that.
**Jason Flax**: And all UI change has come from changes in the model which again, leveraging Realm, the model auto-updates and auto-notifies you anyway. So, that is all done for you. SwiftUI though imperfect does come very close to this ideal. View state can even be stored and persisted within the guidelines of the architecture to perfectly restore user state in the application, which ties back to the case I gave of somebody's phone dying and wanting to reopen right to the exact page that they were in the app. So, when considering the differences in SwiftUI's two way databinding versus UIKit's unidirectional data flow, we can rethink certain core concepts of at least MVVM and to an extent Viper.
**Jason Flax**: And this is where rethinking the acronyms or abbreviations comes into play a bit. It's a light spin, but I think for those who actually, let's say, you're on your team of 20 iOS developers and you go to the lead engineer and you're like, "I really think we should start doing MVI." And they're like, "Well, we've been doing MVVM for the past five years. So, you can take a walk." In this case, just rephrase MVVM. Rephrase Viper. In this case, your model becomes the Realm database. It is where you persist state. It is the data access layer. The View is still the View. That one doesn't change. The ViewModel, again, just becomes an instance of your Realm object.
**Jason Flax**: You just don't need the old school ViewModel anymore. The business logic goes on that. The transformation goes on that. It is honestly a light shift in responsibility, but it prevents having to test so much extra boilerplate code. If the goal of MVVM was to make things easier to test in previous iterations of iOS development, it no longer applies here because now you're actually just adding extra code. Viper concepts can be similarly rethought. Again, your View is your View, your presenter and interactor are the ViewModel. Your entity is the model and your router is an enigma to me. So, I'll leave that one to the Viper devs out there to figure out. It looks like we have enough time for the extra slides that I have here before the Q&A.
**Jason Flax**: So, just a bit of networking code, this is really basic. It's not very good code either. So, in this case, we're just going to fetch the scrums from a third party service. So, we're not using Realm sync in this case. We have some third party service or our own service or whatever that we call in. And if we want those to show on the View, we're going to have to notify the View. We're going to want the cache those maybe. So, we're going to add those to the Realm. If they actually do have IDs, we want to make sure that our update policy does not overwrite the same scrums or anything like that for updating. And this is Viper, by the way. For updating similarly, we're going to pass the scrum to the interactor. That scrum is going to get sent up to the server. We're going to make sure that that scrum is then added to the Realm, depends on what we're updating.
**Jason Flax**: If we've updated it properly and using Realm as Realm is intended to be used, you should not have to re-add it to the Realm. But if you are following Viper by the book, you need to go through all the steps of reloading your model, saving this appropriately and updating the View, which again is a lot of extra work. Not to mention here as well, that this does not account for anything like conflicts or things that would happen in a real-world scenario, but I will get to that in a later slide. So, for MVVM in this case, the networking is likely going to be on the ViewModel and go through again, some kind of service. It's very similar to Viper, except that it's on the ViewModel we're going to fetch.
**Jason Flax**: We're going to add to the Realm of cache, cache and layer. And because we're not using the Realm property wrappers on the View, we're using the ViewModel, we have to update the View manually, which is the objectWillChange.send. So, for MVI, it's similar, but again, slightly different because the Realms are on the View this time, the main difference here is that we don't have to update anything. That results from before the observed results. That's going to automatically update the View. And for the update case, you shouldn't really have to do anything, which is the big difference between the other two architectures because you're using live objects, everything should just be live.
**Jason Flax**: And because in MVI, the business logic is going to be on the data models themselves or the instances of the Realm objects themselves. These methods are going to be on that, you update using yourself which is here. And the cool thing, if you're using MongoDB Realm Sync and you're thinking about networking, you don't have to do anything. Again, not being facetious, that's handled for you. If you're using your persistence layer and thinking about sync, when you actually open up the Realm, those scrums are going to be pulled down for you, and they're going to hydrate the Realm.
**Jason Flax**: If somebody on their phone updates one of the existing scrums, that's going to be automatically there for you. It is going to appear on your View live without you having to edit any extra code, any extra networking or whatever. Similar, removal. And of course, Realm sync also handles things like conflicts, where if I'm updating the scrum at the same time as somebody else, the algorithm on the backend will figure out how to handle that conflict for you. And your Realm, your persistence layer, your instances of your data models as well, which is another cool feature from because remember that they're live, they will be up-to-date.
**Jason Flax**: They will sync to the Views, which will then have the most up-to-date information without you adding any extra code. I would love to go into this more. So, for my next talk, I think the thing I want to do, and of course I'd like to hear from everybody if they'd be interested, but the thing I want to do is to show a more robust, mature, fully fledged application using MVI MongoDB Realm sync, SwiftUI Realm and the property wrappers, which we can talk about more in the Q&A, but that's my goal. I don't know when the talk will be, but hopefully sooner than later. And then finally, the last bit of slides here. Actually, testing your models. So, for MVVM you actually have to test the ViewModels. You're going to test that things are writing to the database and reading from database appropriately.
**Jason Flax**: You're testing that the business logic validates correctly. You're testing that it calculates View data correctly. You're testing out all of these calculations that you don't necessarily have to test out with other architectures. Viper, it's going to be the same thing. You're just literally swapping out the ViewModel for the interactor and presenter. But for MVI, colors are a little messed up there. You're really just going to be testing the business logic on your models. You're going to create instances of those Realm objects and make sure that the business logic checks out. For all of these, I would also highly recommend that you write UI tests. Those are very important parts of testing UI applications. So, please write those as well. And that's it. Thank you, everyone. That is all for the presentation. And I would love to throw this back to Ian and Shane, so that we can start our Q&A.
**Shane McAllister**: Excellent. Thank you. Thank you, Jason. That was great. I learned a lot in that as well, too. So, do appreciate that. I was watching the comments on the side and thank you for the likes of Jacob and Sebastian and Ian and Richard and Simon who've raised some questions. There's a couple that might come in there. But above all, they've been answered by Lee and also Alexander Stigsen. Who, for those of you who don't know, and he'll kill me for saying, is the founder of Realm and he's on the chat. So, if you question, drop it in there. He's going to kill me now. I'm dead. So, I think for anybody, as I said at the beginning, we can open and turn on your camera and microphone if you want to ask a question directly.
**Shane McAllister**: There's no problem if you don't want to come on camera well you can throw it into the chat and I'll present it to essentially Jason and Ian and we'll discuss it. So, I think while we're seeing, if anybody comes in there, and for this Scrumdinger example, Jason, are we going to put our Realm version up on a repo somewhere that people can play around with?
**Jason Flax**: Yes, we will. It is not going to be available today, unfortunately. But we are going to do that in the next, hopefully few days. So, I will I guess we'll figure out a way to send out a link to when that time comes.
**Shane McAllister**: Okay. So, we've a question from Jacob. "And what thoughts do you have on using MVI for more mixed scenarios, for example, an app or some Views operate on the database while others use something like a RIA service?"
**Jason Flax**: Where is that question, by the way, \[crosstalk 00:38:31\].
**Shane McAllister**: On the chat title, and there's just a period in the chat there it'll give you some heads up. Most of the others were answered by Alexander and Lee, which is great. Really appreciate that. But so looking at the bottom of the chat there, Jason, if you want to see them come through.
**Jason Flax**: I see, Jacob. Yeah. So, I hope I was able to touch on that a bit at the end. For Views that need to talk to some network service, I would recommend that that logic again, no different than MVVM or Viper, that logic, which I would consider business logic, even though it's talking to RIA service, it just goes back on the instance of the object itself. In certain cases, I think let's say you're fetching all of the daily scrums from the server, I would make that a static method on the instance of the data object, which is mainly for organizational purposes, to be honest. But I don't think that it needs to be specially considered beyond that. I'm sure in extremely complex cases, more questions could be asked, but I would probably need to see a more complex case to be able to-
**Ian Ward**: I think one of the themes while you were presenting with the different architecture patterns, is that a lot of the argument here is that we are eliminating boilerplate code. We're eliminating a lot of the code that a developer would normally need to write in order to implement MVVM or there was a talk of MVC as Massive View Controller. And some of the questions around MVI were, "Do we have the risk of also maybe inflating the model as well here?" Some of that boilerplate code now go into the model. How would you talk to that a little bit of putting extra code into the model now to handle some of this?
**Jason Flax**: As in like how to avoid this massive inflation of your model \[crosstalk 00:40:33\]?
**Ian Ward**: Yeah. Exactly. Are we just moving the problem around or does some of this eliminate some of that boilerplate?
**Jason Flax**: To be honest, each one of these \[crosstalk 00:40:45\].
**Ian Ward**: That's fair. I guess that's why it's a contentious issue. You have your opinions and at some point it's, where do you want to put the code?
**Jason Flax**: Right. Which is why, there is no best solution and there is no best answer to your question either. The reason that I'm positing MVI here is not necessarily about code organization, which is always going to be a problem and it's going to be unique to somebody's application. If you have a crazy amount of business logic on one of your Realm objects, you probably need to break up that Realm object. That would be my first thought. It might not be true for each case. I've seen applications where people have 40 different properties on their Realm object and a ton of logic associated with it. I personally would prefer to see that broken down a bit more.
**Jason Flax**: You can then play devil's advocate and say, "Well, okay," then you end up with the Ravioli Code that you were talking about from before. So it's all, it's this balancing act. The main reason I'm positing MVI as the go-to architecture is less about code organization and more about avoiding unnecessarily boilerplate and having to frankly test more than.
**Ian Ward**: Right.That's a fair answer. And a couple of questions that are coming in here. There's one question at the beginning asking about the block pattern, which watch out sounds like we have a flutter developer in here. But the block pattern is very much about event streams and passing events back and forth, which although we have the property wrappers, we've done a lot of the work under the hood. And then there was another question on combined. So, maybe you could talk a little bit about our combined support and some of the work that we've done with property wrappers to integrate with that.
**Jason Flax**: Sure. So, we've basically added extensions to each of our observable types which in the case of Realm is going to be objects, lists, backlinks, results which is basically a View of the table that the object is stored on, which can be queried as well. And then by effect through objects, you can also observe individual properties. We have added support to combine. So, you can do that through the flow of combine, to get those nice chains of observations, to be able to map data how you want it to sync it at the end and all that kind of thing. Under the hood of our property wrappers are hooking that observation logic into SwiftUI.
**Jason Flax**: Those property wrappers themselves have information on that, so that when a change happens, it notifies the View. To be honest, some of that is not through combined, but it's just through standard observation. But I think the end mechanism where we actually tell the View, this thing needs to update that is through, I guess, one of the added combined features, which is the publisher for object changes. We notified the View, "Hey, this thing is updated." So, yeah, there's full combine support for Realm, is the short answer as well.
**Ian Ward**: Perfect.
**Shane McAllister**: Cool. There was a question hiding away in the Q&A section as well too. "Does at state Realm object sends Realm sync requests for each key stroke?"
**Jason Flax**: It would. But surprisingly enough, that is actually not as heavy of an action as you might think. We've had a lot of debate about this as well, because that is one of the first questions asked when people see the data being bound to a text field. It's really not that heavy. If you are worried about it or maybe this is just some application that you want to work in the Tundras of Antarctica, and maybe you don't want to have to worry about things like network connection or something, I would consider using a draft object, something that is not being persistent to the Realm. And then at the end, when you are ready to persist that you can persistent it. Classically, that would have been the ViewModel, but now you can just use an instance of a non-persistent Realm object, a non \[crosstalk 00:44:51\].
**Ian Ward**: Yeah. That was another question as well. I believe Simon, you had a question regarding draft objects and having ... And so when you say draft objects, you're saying a copy of the Realm object in memory, is that correct? Or maybe you can go into that a little bit.
**Jason Flax**: It could be a copy. That would be the way to handle an existing object that you want to modify, if you don't want to save it on every keystroke for form Views in this case, let's say it's a form. Where it doesn't exist in the Realm, you can just do an unmanaged type and to answer Simon's second query there. Yeah, it could also be managed by a local Realm that is also perfectly valid, and that is another approach. And if I recall Simon, were you working on the workout app with that?
**Ian Ward**: I believe he was.
**Jason Flax**: I don't know. Yeah. Yeah. I played around with that. That is a good app example for having a lot of forms where maybe you don't want to persist on every keystroke. Maybe you even want something like specifically, and I believe this might've even been the advice that I gave you on the forums. Yes, store a draft object in a local Realm. It could be the exact same object. It could be a different model that is just called, let's say, you want to save your workout and it has sets and reps and whatever. You might have a workout object stored in the sync Realm, and then you might have a workout draft object stored in a local Realm, and you can handle it that way as well.
**Shane McAllister**: Great. Does anybody want to come on screen with us, turn on the camera, turn on the mic, join us? If you do, just ping in the chat, I'll jump in. I'll turn that right on for you. Richard had a question further up and it was more advice, more so than a question per se, "Jason, nice to show some examples of how you would blend MVI with wrapped View controllers." He's saying that rewrites are iterative and involve hybrid systems was the other point he made.
**Jason Flax**: Right. Yeah. That would be a great concept for another talks because yeah, you're totally right. It's really easy for me to come in with a cricket bat or whatever, and just knock everything down and say, "Use MVI." But in reality of course, yeah, you want to incrementally migrate to something you never want to do ever. Well, not never, but most of the time you don't want to do a total rewrite.
**Ian Ward**: Yeah, a total rewrite would be a sticky wicket, I think. So, for cricket. So, we have another question here on Realm's auto-sync. And the question from Sebastian is, "Can we force trigger from an API sync?" And actually I can answer this one. So, yes, you can. There is a suspend and resume method for Realm sync. So, if you really want to be prescriptive about when Realm syncs and doesn't sync, you can control that in your code.
**Jason Flax**: Perfect.
**Shane McAllister**: And asks, "Is there any learning path available to get started with Realm?" Well, we've got a few. Obviously our docs is a good place to start, and if you go look in there, but the other thing too is come on who, and this is the plug, developer.mongodb.com. And from there, you can get to our developer hub for articles. You can get into our forums to ask the questions of the engineers who are on here and indeed our wider community as well too. But we're also very active where our developers are. So, in GitHub and Stack Overflow, et cetera, as well too, there's comments and questions whizzing around there. Jason, is there anywhere else to go and grab information on getting started with Realm?
**Shane McAllister**: Yeah. Obviously this is the place to go as well too. I know we're kind of, we went in at a high level and a lot of this here and maybe it's not obviously the beginner stuff, but we intend to run these as often as we can. Certainly once or twice a month going forward, resources permitting and time permitting for everybody too. So, as Ian said, I think at the beginning, tell us what you want to hear in meetups like this as well too because we want to engage with our community, understand where you're at and help you resolve your problems with Realm as much as possible.
**Ian Ward**: Absolutely
**Shane McAllister**: Ian has another one in here, Ian. Thank you, Ian. "And how to move a local Realm into sync? Just copy the items manually from one to the other or is there a switch you can throw to make the local one a synced one?"
**Ian Ward**: Yeah.
**Jason Flax**: \[crosstalk 00:49:49\]. So, we do get this feature request. It is something that is on my list, like my list of product backlog. Definitely something I want to add and we just need to put a product description together, another thing on my backlog. But yes, right now what you would do is to open the local Realm, iterate through all the objects, copy them over into a synced Realm. The issue here is that a synced Realm has to match the history of the MongoDB Realm sync server on the side. So, the histories have to match and the local Realm doesn't have that history. So, it breaks the semantics of conflict resolution. In the future, we would like to give a convenience API to do this very simply for the user. And so hopefully we can solve that use case for you.
**Shane McAllister**: Good. Well, Ian has responded to say, "That makes sense." And indeed it does, as always. Something else for your task list then. So, yeah, definitely.
**Ian Ward**: Absolutely.
**Shane McAllister**: I'm trying to scroll back through here to see, did we miss anybody. If we did miss anybody, do to let me know. I noticed a comment further up from \[Anov 00:51:01\], which was great to see, which is, "These sessions turn out to be the best use of my time." And that's what we're looking for, that validity from our community, that this is worth the time. Jason puts a ton of effort into getting this prepared as does Ian and pulling it all together. Those examples don't write themselves. And indeed the wider team, the Coca team with Jason as well had put effort into putting this together. So, it's great to see that these are very beneficial for our community. So, unless, is there anything else, any other questions? I suppose throwing it back out to you, Jason, what's next? What's on the roadmap? What's keeping you busy at the moment? Ian, what are we planning later on? You're not going to say you can't tell, right?
**Ian Ward**: Yeah. For iOS specifically, I think maybe Jason, we were talking about group results. I know we had a scope the other day to do that. We're also talking about path filtering. These are developer improvements for some of the APIs that we have that are very iOS-specific. So, I don't know, Jason, if you want to talk to a couple of those things that would be great.
**Jason Flax**: Sure. Yeah. And I'll talk about some of stuff that hopefully we'll get to next quarter as well. So, group results is something we actually have to figure out and ironically actually ties more to UIKit and basically how to better display Realm data on table Views. But we are still figuring out what that looks like. Key path filtering is nice. It just gives you granual observation for the properties that you do actually want to observe and listen to on objects. Some of the other things that we've begun prototyping, and I don't think it's ... I can't promise any dates. Also, by the way, Realm is open source. So, all of this stuff that we're talking about, go on our branches, you can see our poll requests. So, some of the stuff that we're prototyping right now Async rights, which is a pretty common use case we're writing data to Realm asynchronously.
**Jason Flax**: We're toying with that. We're toying with another property wrapper called auto-open, which will hopefully simplify some of the logic around MongoDB Realm locking in and async opening the Realm. Basically the goal of that project is so that let's say your downloading a synced Realm with a ton of data in it as opposed to having to manually open the Realm, notify the View that it's done, et cetera, you'll again, just use a property wrapper that when the Realm is done downloading, it will notify the View that that's occurred. We're also talking about updating our query syntax. That one I'm particularly excited about. Again, no dates promised. But it will basically be as opposed to having to use NS predicate to query on your Realm objects, you would be able to instead use a type safe key path based query syntax, closer to what Swift natively uses.
**Ian Ward**: Absolutely. We've got some new new types coming down the pike as well. We have a dictionary type for more unstructured key values, as well as a set we're looking to introduce very shortly and a mixed type as well, which I believe we have a different name for that. Don't we, Jason?
**Jason Flax**: Yes, it will follow-
**Ian Ward**: Any Realm value. There you go.
**Jason Flax**: ... what that does yeah. Any Realm value -
**Ian Ward**: Yeah, so we had a lot of feature requests for full tech search. And so if you have, let's say an inventory application that has a name and then the description, two fields on an object and that's a string field. We have just approved our product description for full text search. So, you'll hopefully be able to tokenize or we are working towards tokenizing that string fields. And so then you can search that string field, search the actual words in that string field to get a match at index level speeds. So, hopefully that will help individuals, especially when they're offline to search string fields.
**Jason Flax**: That's Richard's dictionary would be huge. Yeah. We're excited about that one. We're probably going to call it Map. So, yeah, that's an exciting one.
**Shane McAllister**: Excellent. Ian's squeezing in a question there, a feature request actually. Leads open multiple sync Realms targeting multiple partition keys. Okay.
**Ian Ward**: Yeah. So, we are actively working towards that. I don't know how many people are familiar with Legacy Realm. I recognize a couple faces here, but we did have something called query based sync. And we are looking to have a reimagination of that in query-based sync 2.0, or we're also calling it flexible sync, which will have a very analogous usage where you'd be able to send queries to the server side, have those queries run and return the result set down to the client. And this will remove the partition key requirement. And so yes, we are definitely working on that and it's definitely needed for our sync users for sure.
**Shane McAllister**: Excellent. That got a yay and a what, cool emoji from Ian. Thank you, Ian, appreciate it. Excellent. I think that probably, look, we're just after the hour, or two hours for those of you that joined at the earlier start time that we decided we were going to do this at. For wrap-up from me, from an advocacy point of view, we love to reach out to the community. So, I'm going to plug again, developer.mongodb.com. Please come on board there to our forums and our developer hub, where we write about Realm content all the time. We want to grow this community. So, live.mongodb.com will lead you to the Realm global community, where if you sign up, if you haven't already, you'll get instant notification of any of these future meetups that we're doing.
**Shane McAllister**: So, they're not all Swift. We're covering all of our other SDKs as well too. And then we have general meetups. So, please sign up there, share the word. And also on Twitter, the @realm Twitter handle. If you enjoyed this, please share that on Twitter with everybody. We love to see that feedback come through and we want to be part of that community. We want to engage on Twitter as well too. So, our developer hub, our forums, and Twitter. And then obviously, as Jason mentioned, Realm is open source; you can contribute on our repos if you like. We love to see the participation of the wider community as well, too. Ian, anything to add?
**Ian Ward**: No, it's just it's really great to see so many people joining and giving great questions. And so thank you so much for coming, and we love to see your feedback. So, please try out our new property wrappers, give us feedback. We want to hear from the community, and thank you so much, Jason and team, for putting this together. It's been a pleasure.
**Shane McAllister**: Indeed. Excellent. Thank you, everyone. Take care.
**Jason Flax**: Thank you everyone for joining.
**Ian Ward**: Thank you. Have a great week. Bye.
| md | {
"tags": [
"Realm",
"Swift"
],
"pageDescription": "Missed Realm SwiftUI Property wrappers and MVI architecture meetup event? Don't worry, you can catch up here.",
"contentType": "Article"
} | Realm SwiftUI Property wrappers and MVI architecture Meetup | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/java/java-azure-spring-apps | created | # Getting Started With Azure Spring Apps and MongoDB Atlas: A Step-by-Step Guide
## Introduction
Embrace the fusion of cloud computing and modern application development as we delve into the integration of Azure
Spring Apps
and MongoDB. In this tutorial, we'll guide you through the process of creating
and deploying a Spring Boot
application in the Azure Cloud, leveraging the strengths of Azure's platform, Spring Boot's simplicity, and MongoDB's
capabilities.
Whether you're a developer venturing into the cloud landscape or looking to refine your cloud-native skills, this
step-by-step guide provides a concise roadmap. By the end of this journey, you'll have a fully functional Spring Boot
application seamlessly running on Azure Spring
Apps, with MongoDB handling your data storage needs and a REST API ready
for interaction. Let's explore the synergy of these technologies and propel your cloud-native endeavors forward.
## Prerequisites
- Java 17
- Maven 3.8.7
- Git (or you can download the zip folder and unzip it locally)
- MongoDB Atlas cluster (the M0 free tier is enough for this tutorial). If you don't have
one, you can create one for free.
- Access to your Azure account with enough permissions to start a new Spring App.
- Install the Azure CLI to be
able to deploy your Azure Spring App.
I'm using Debian, so I just had to run a single command line to install the Azure CLI. Read the documentation for your
operating system.
```shell
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
```
Once it's installed, you should be able to run this command.
```shell
az --version
```
It should return something like this.
```
azure-cli 2.56.0
core 2.56.0
telemetry 1.1.0
Extensions:
spring 1.19.2
Dependencies:
msal 1.24.0b2
azure-mgmt-resource 23.1.0b2
Python location '/opt/az/bin/python3'
Extensions directory '/home/polux/.azure/cliextensions'
Python (Linux) 3.11.5 (main, Jan 8 2024, 09:08:48) [GCC 12.2.0]
Legal docs and information: aka.ms/AzureCliLegal
Your CLI is up-to-date.
```
> Note: It's normal if you don't have the Spring extension yet. We'll install it in a minute.
You can log into your Azure account using the following command.
```shell
az login
```
It should open a web browser in which you can authenticate to Azure. Then, the command should print something like this.
```json
[
{
"cloudName": "AzureCloud",
"homeTenantId": "",
"id": "",
"isDefault": true,
"managedByTenants": [],
"name": "MDB-DevRel",
"state": "Enabled",
"tenantId": "",
"user": {
"name": "maxime.beugnet@mongodb.com",
"type": "user"
}
}
]
```
Once you are logged into your Azure account, you can type the following command to install the Spring extension.
```shell
az extension add -n spring
```
## Create a new Azure Spring App
To begin with, on the home page of Azure, click on `Create a resource`.
![Create a resource][1]
Then, select Azure Spring Apps in the marketplace.
![Azure Spring Apps][2]
Create a new Azure Spring App.
![Create a new Azure Spring App][3]
Now, you can select your subscription and your resource group. Create a new one if necessary. You can also create a
service name and select the region.
![Basics to create an Azure Spring App][4]
For the other options, you can use your best judgment depending on your situation but here is what I did for this
tutorial, which isn't meant for production use...
- Basics:
- Hosting: "Basic" (not for production use, but it's fine for me)
- Zone Redundant: Disable
- Deploy sample project: No
- Diagnostic settings:
- Enable by default.
- Application Insights:
- Disable (You probably want to keep this in production)
- Networking:
- Deploy in your own virtual network: No
- Tags:
- I didn't add any
Here is my `Review and create` summary:
![Review and create][5]
Once you are happy, click on `Create` and wait a minute for your deployment to be ready to use.
## Prepare our Spring application
In this tutorial, we are deploying
this Java, Spring Boot, and MongoDB template, available on GitHub. If you want to learn more about this template, you can read
my article, but in a few words:
It's a simple CRUD Spring application that manages
a `persons` collection, stored in MongoDB with a REST API.
- Clone or download a zip of this repository.
```shell
git clone git@github.com:mongodb-developer/java-spring-boot-mongodb-starter.git
```
- Package this project in a fat JAR.
```shell
cd java-spring-boot-mongodb-starter
mvn clean package
```
If everything went as planned, you should now have a JAR file available in your `target` folder
named `java-spring-boot-mongodb-starter-1.0.0.jar`.
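Optionally, you can smoke-test the JAR locally before deploying it. This assumes the template reads its connection string from the `spring.data.mongodb.uri` property, which Spring Boot's relaxed binding lets you override with the `SPRING_DATA_MONGODB_URI` environment variable; adjust the variable name if your configuration differs.
```shell
# Replace the URI with your own Atlas connection string.
SPRING_DATA_MONGODB_URI="mongodb+srv://user:password@free.ab12c.mongodb.net/?retryWrites=true&w=majority" \
java -jar target/java-spring-boot-mongodb-starter-1.0.0.jar
```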
## Create our microservice
In Azure, you can now click on `Go to resource` to access your new Azure Spring App and create an app (our microservice) inside it. Then, retrieve your MongoDB Atlas cluster connection string for
the Java driver. It should look like this:
```
mongodb+srv://user:password@free.ab12c.mongodb.net/?retryWrites=true&w=majority
```
- Create a new environment variable in your app configuration (for example, `spring.data.mongodb.uri`) and set its value to your Atlas connection string.

> Note: If you don't have a database user for your cluster yet, it's time
> to create one and use the login and password in your connection string.
## Atlas network access
MongoDB Atlas clusters only accept TCP connections from known IP addresses.
As our Spring application will try to connect to our MongoDB cluster, we need to add the IP address of our microservice
in the Atlas Network Access list.
- Retrieve the outbound IP addresses in the `Networking` tab of our Azure Spring App.
- Add these IP addresses to the Network Access list in MongoDB Atlas so the microservice can reach your cluster.

Next, deploy the fat JAR to your app (from the Azure portal or with the Azure CLI, as sketched below) and assign a public endpoint to it.
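If you prefer the command line over the portal, the Spring extension we installed earlier can push the JAR for you. This is only a sketch: the resource group, service, and app names below are placeholders, so replace them with your own.
```shell
# Placeholder names: my-resource-group, my-spring-apps-service, and my-microservice.
az spring app deploy \
  --resource-group my-resource-group \
  --service my-spring-apps-service \
  --name my-microservice \
  --artifact-path target/java-spring-boot-mongodb-starter-1.0.0.jar
```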
Once the deployment is done and the endpoint is assigned, you can access the Swagger UI here:
```
https://<your-endpoint>/swagger-ui/index.html
```
From there, you can try out the REST API, check the documents it creates in your Atlas cluster, and start exploring all the features
MongoDB Atlas has to offer.
Got questions or itching to share your success? Head over to
the MongoDB Community Forum – we're all ears and ready to help!
Cheers to your successful deployment, and here's to the exciting ventures ahead! Happy coding! 🚀
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt85b83d544dd0ca8a/65b1e83a5cdaec024a3b7504/1_Azure_create_resource.png
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltbb51a0462dcbee8f/65b1e83a60a275d0957fb596/2_Azure_marketplace.png
[3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltf3dd72fc38c1ebb6/65b1e83a24ea49f803de48b9/3_Azure_create_spring_app.png
[4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltba32459974d4333e/65b1e83a5f12edbad7e207d2/4_Azure_create_spring_app_basics.png
[5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltc13b9d359d2bea5d/65b1e83ad2067b1eef8c361a/5_Azure_review_create.png
[6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt6c07487ae8a39b3a/65b1e83ae5c1f348ced943b7/6_Azure_go_to_resource.png
[7]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt37a00fe46791bb41/65b1e83a292a0e1bf887c012/7_Azure_create_app.png
[8]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt402b5ddcf552ae28/65b1e83ad2067bb08b8c361e/8_Azure_create_app_details.png
[9]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltf0d1d7e1a2d3aa85/65b1e83a7d4ae76ad397f177/9_Azure_access_new_microservice.png
[10]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltb238d1ab52de83f7/65b1e83a292a0e6c2c87c016/10_Azure_env_variable_mdb_uri.png
[11]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt67bc373d0b45280e/65b1e83a5cdaec4f253b7508/11_Azure_networking_outbound.png
[12]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt18a1a75423ce4fb4/65b1e83bc025eeec67b86d13/12_Azure_networking_Atlas.png
[13]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltdeb250a7e04c891b/65b1e83a41400c0b1b4571e0/13_Azure_deploy_app_tab.png
[14]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt390946eab49df898/65b1e83a7d4ae73fbb97f17b/14_Azure_deploy_app.png
[15]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt7a33fb264f9fdc6f/65b1e83ac025ee08acb86d0f/15_Azure_app_deployed.png
[16]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt2d899245c14fb006/65b1e83b92740682adeb573b/16_Azure_assign_endpoint.png
[17]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltc531bc65df82614b/65b1e83a450fa426730157f0/17_Azure_endpoint.png
[18]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt6bf26487ac872753/65b1e83a41400c1b014571e4/18_Azure_Atlas_doc.png
[19]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt39ced6072046b686/65b1e83ad2067b0f2b8c3622/19_Azure_Swagger.png
| md | {
"tags": [
"Java",
"Atlas",
"Azure",
"Spring"
],
"pageDescription": "Learn how to deploy your first Azure Spring Apps connected to MongoDB Atlas.",
"contentType": "Tutorial"
} | Getting Started With Azure Spring Apps and MongoDB Atlas: A Step-by-Step Guide | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/change-streams-with-kafka | created | # Migrating PostgreSQL to MongoDB Using Confluent Kafka
In today's data-driven world, businesses are continuously seeking innovative ways to harness the full potential of their data. One critical aspect of this journey involves data migration – the process of transferring data from one database system to another, often driven by evolving business needs, scalability requirements, or the desire to tap into new technologies.
In this era of digital transformation, where agility and scalability are paramount, organizations are increasingly turning to NoSQL databases like MongoDB for their ability to handle unstructured or semi-structured data at scale. On the other hand, relational databases like PostgreSQL have long been trusted for their robustness and support for structured data.
As businesses strive to strike the right balance between the structured and unstructured worlds of data, the question arises: How can you seamlessly migrate from a relational database like PostgreSQL to the flexible documented-oriented model of MongoDB while ensuring data integrity, minimal downtime, and efficient synchronization?
The answer lies in an approach that combines the power of Confluent Kafka, a distributed streaming platform, with the agility of MongoDB. In this article, we'll explore the art and science of migrating from PostgreSQL to MongoDB Atlas, leveraging Confluent Kafka as our data streaming bridge. We'll delve into the step-by-step tutorial that can make this transformation journey a success, unlocking new possibilities for your data-driven initiatives.
## Kafka: a brief introduction
### What is Apache Kafka?
Apache Kafka is an open-source distributed streaming platform developed by the Apache Software Foundation that is designed to handle real-time data streams.
To understand Kafka, imagine a busy postal system in a bustling city. In this city, there are countless businesses and individuals sending packages and letters to one another, and it's essential that these messages are delivered promptly and reliably.
Apache Kafka is like the central hub of this postal system, but it's not an ordinary hub; it's a super-efficient, high-speed hub with a memory that never forgets. When someone sends a message (data) to Kafka, it doesn't get delivered immediately. Instead, it's temporarily stored within Kafka's memory. Messages within Kafka are not just one-time deliveries. They can be read and processed by multiple parties. Imagine if every package or letter sent through the postal system had a copy available for anyone who wanted it. This is the core concept of Kafka: It's a distributed, highly scalable, and fault-tolerant message streaming platform.
From maintaining real-time inventory information for e-commerce to supporting real-time patient monitoring, Kafka has varied business use cases in different industries and can be used for log aggregation and analysis, event sourcing, real-time analytics, data integration, etc.
## Kafka Topics
In the same analogy of the postal system, the system collects and arranges its letters and packages into different sections and organizes them into compartments for each type of item. Kafka does the same. The messages it receives from the producer of data are arranged and organized into Kafka topics. Kafka topics are like different mailboxes where messages with a similar theme are placed, and various programs can send and receive these messages to exchange information. This helps keep data well-organized and ensures that the right people or systems can access the information they need from the relevant topic.
## Kafka connectors
Kafka connectors are like special mailboxes that format and prepare letters (data) in a way that Kafka can understand, making it easier for data to flow between different systems. Say the sender (system) wants to send a letter (data) to the receiver (another system) using our postal system (Kafka). Instead of just dropping the letter in the regular mailbox, the sender places it in a special connector mailbox outside their house. This connector mailbox knows how to format the letter properly. So connectors basically act as a bridge that allows data to flow between Kafka and various other data systems.
## Confluent Kafka
Confluent is a company that builds tools and services for Apache Kafka to make it more robust and feature-rich. It is like working with a more advanced post office that not only receives and delivers letters but also offers additional services like certified mail, tracking, and package handling. The migration in this article is done using Confluent Kafka through its browser user interface.
## Setting up a Confluent Kafka account
To begin with, you can set up an account on Confluent Kafka by registering on the Confluent Cloud website. You can sign up with your email account or using GitHub.
Once you log in, this is how the home page looks:
This free account comes with free credits worth $400 which you can use to utilize the resources in the Confluent Cloud. If your database size is small, your migration could also be completed within this free credit limit. If you go to the billing section, you can see the details regarding the credits.
To create a new cluster, topics, and connectors for your migration, click on the Environments tab from the side menu and create a new environment and cluster.
You can select the type of cluster. Select the type “basic” which is the free tier with basic configuration. If you want to have a higher configuration for the cluster, you can select the “standard”, “enterprise,” or “dedicated” cluster types which have higher storage, partition, and uptime SLA respectively with hourly rates.
Next, you can select the region/zone where your cluster has to be deployed, along with the cloud provider you want for your cluster (AWS, GCP, or Azure). The prerequisite for your data migration to work through Kafka connectors is that the Kafka cluster where you create your connectors should be in the same region as the MongoDB Atlas cluster to which you will migrate your PostgreSQL data.
Then, you can provide your payment information and launch your cluster.
Once your cluster is launched, this is how the cluster menu looks with options to have a cluster overview and create topics and connectors, among other features.
With this, we are ready with the basic Kafka setup to migrate your data from PostgreSQL to MongoDB Atlas.
## Setting up PostgreSQL test data
For this example walkthrough, if you do not have an existing PostgreSQL database that you would like to migrate to a MongoDB Atlas instance using Confluent Kafka, you can create a sample database in PostgreSQL by following the below steps and then continue with this tutorial.
1. Download PostgreSQL Database Server from the official website and start your instance locally.
2. Download the pgadmin tool and connect to your local instance.
3. Create a database ```mytestdb``` and table ```users``` and put some sample data into the users table.
```sql
-- Create the database mytestdb
CREATE DATABASE mytestdb;
-- Connect to the mytestdb database
\c mytestdb;
-- Create the users table
CREATE TABLE users (
id SERIAL PRIMARY KEY,
firstname VARCHAR(50),
lastname VARCHAR(50),
age INT
);
-- Insert sample data into the 'users' table
INSERT INTO users (firstname, lastname, age)
VALUES
('John', 'Doe', 25),
('Jane', 'Smith', 30),
('Bob', 'Johnson', 22);
```
Keep in mind that the host where your PostgreSQL is running — in this case, your local machine — should have Confluent Kafka whitelisted in a firewall. Otherwise, the source connector will not be able to reach the PostgreSQL instance.
## Steps for data migration using Confluent Kafka
To migrate the data from PostgreSQL to MongoDB Atlas, we have to configure a source connector to connect to PostgreSQL that will stream the data into the Confluent Cloud topic. Then, we will configure a sink connector for MongoDB Atlas to read the data from the created topic and write to the respective database in the MongoDB Atlas cluster.
### Configuring the PostgreSQL source connector
To configure the PostgreSQL source connector, follow the below steps:
1. Click on the Connectors tab in your newly created cluster in Confluent. It will list popular plugins available in the Confluent Cloud. You can search for the “postgres source” connector plugin and use that to create your custom connector to connect to your PostgreSQL database.
2. Next, you will be prompted for the topic prefix. Provide the name of the topic into which you want to stream your PostgreSQL data. If you leave it empty, the topic will be created with the table name for you.
3. You can then specify the access levels for the new connector you are creating. You can keep it global and also download the API credentials that you can use in your applications, if needed to connect to your cluster. For this migration activity, you will not need it — but you will need to create it to move to the next step.
4. Next, you will be prompted for connection details of PostgreSQL. You can provide the connection parameters, schema context, transaction isolation levels, poll intervals, etc. for the connection.
5. Select the output record type as JSON. MongoDB natively uses the JSON format. You will also have to provide the name of the table that you are trying to migrate.
6. In the next screen, you will be redirected to an overview page with all the configurations you provided in JSON format (similar to the sketch after this list), along with the cost of running this source connector per hour.
7. Once you create your source connector, you can see its status in the
Connectors tab and if it is running or has failed. The source
connector will start syncing the data to the Confluent Cloud topic
immediately after starting up. You can check the number of messages
processed by the connector by clicking on the new connector. If the
connector has failed to start, you can check connector logs and
rectify any issues by reconfiguring the connector settings.
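For reference, a self-managed (Debezium-style) PostgreSQL source connector configuration looks roughly like the sketch below. Treat it purely as an illustration; the Confluent Cloud wizard generates the exact property names and values for its fully managed connector, and property names also vary between connector versions, so check every field against your own setup.
```json
{
  "name": "postgres-source",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "tasks.max": "1",
    "database.hostname": "your-postgres-host",
    "database.port": "5432",
    "database.user": "postgres",
    "database.password": "your-password",
    "database.dbname": "mytestdb",
    "table.include.list": "public.users",
    "topic.prefix": "pg"
  }
}
```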
### Validating data in the new topic
Once your Postgres source connector is running, you can switch to the Topics tab to list all the topics in your cluster, and you will be able to view the new topic created by the source connector.
If you click on the newly created topic and navigate to the “Messages” tab, you will be able to view the processed messages. If you are not able to see any recent messages, you can check them by selecting the “Jump to time” option, selecting the default partition 0, and providing a recent past time from the date picker. Here, my topic name is “users.”
Below, you can see the messages processed into my “users” topic from the users table in PostgreSQL.
### Configuring the MongoDB Atlas sink connector
Now that we have the data that you wanted to migrate (one table, in our example) in our Confluent Cloud topic, we can create a sink connector to stream that data into your MongoDB Atlas cluster. Follow the below steps to configure the data inflow:
1. Go to the Connectors tab and search for “MongoDB Atlas Sink” to find the MongoDB Atlas connector plugin that you will use to create your custom sink connector.
2. You will then be asked to select the topic for which you are creating this sink connector. Select the respective topic and click on “Continue.”
3. You can provide the access levels for the sink connector and also download the API credentials if needed, as in the case of the source connector.
4. In the next section, you will have to provide the connection details for your MongoDB Atlas cluster — including the hostname, username/password, database name, and collection name — into which you want to push the data. The connection string for Atlas will be in the format ```mongodb+srv://<username>:<password>@<cluster-host>```, so you can get the details from this format. Remember that the Atlas cluster should be in the same region and hosted on the same cloud provider for the Kafka connector to be able to communicate with it. You have to add your Confluent cluster static IP address into the firewall’s allowlist of MongoDB Atlas to allow the connections to your Atlas cluster from Confluent Cloud. For non-prod environments, you can also add 0.0.0.0/0 to allow access from anywhere, but it is not recommended for a production environment as it is a security concern allowing any IP access.
5. You can select the Kafka input message type as JSON as in the case of the source connector and move to the final review page to view the configuration and cost for your new sink connector.
6. Once the connector has started, you can query the collection mentioned in your sink connector configuration, and you will be able to see the data from your PostgreSQL table in the new collection of your MongoDB Atlas cluster (a quick check is shown below).
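A quick way to confirm the data landed is to query the target collection from `mongosh` (or the Atlas Data Explorer). The database and collection names below are whatever you configured in the sink connector.
```js
// After connecting mongosh with your Atlas connection string:
const targetDb = db.getSiblingDB("your-database");
targetDb.getCollection("your-collection").countDocuments();
targetDb.getCollection("your-collection").find().limit(5);
```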
### Validating PostgreSQL to Atlas data migration
This data is synced in real time from PostgreSQL to MongoDB Atlas using the source and sink connectors, so if you try adding a new record or updating/deleting existing records in PostgreSQL, you can see the change reflected in real time in your MongoDB Atlas collection as well.
If your data set is huge, the connectors will catch up and process all the data in due time according to the data size. After completion of the data transfer, you can validate your MongoDB Atlas DB and stop the data flow by stopping the source and sink connectors directly from the Confluent Cloud Interface.
Using Kafka, not only can you sync the data using its event-driven architecture, but you can also transform the data in transfer in real-time while migrating it from PostgreSQL to MongoDB. For example, if you would like to rename a field or concat two fields into one for the new collection in Atlas, you can do that while configuring your MongoDB Atlas sink connector.
Let’s say PostgreSQL had the fields “firstname” and “lastname” for your “users” table, and in MongoDB Atlas post-migration, you only want the “name” field which would be a concatenation of the two fields. This can be done using the “transform” attribute in the sink connector configuration. This provides a list of transformations to apply to your data before writing it to the database. Below is an example configuration.
```json
{
"name": "mongodb-atlas-sink",
"config": {
"connector.class": "com.mongodb.kafka.connect.MongoSinkConnector",
"tasks.max": "1",
"topics": "your-topic-name",
"connection.uri": "mongodb+srv://:@cluster.mongodb.net/test",
"database": "your-database",
"collection": "your-collection",
"key.converter": "org.apache.kafka.connect.storage.StringConverter",
"value.converter": "org.apache.kafka.connect.json.JsonConverter",
"value.converter.schemas.enable": "false",
"transforms": "addFields,unwrap",
"transforms.addFields.type": "org.apache.kafka.connect.transforms.InsertField$Value",
"transforms.addFields.static.field": "name",
"transforms.addFields.static.value": "${r:firstname}-${r:lastname}",
"transforms.unwrap.type": "io.debezium.transforms.UnwrapFromEnvelope",
"transforms.unwrap.drop.tombstones": "false",
"transforms.unwrap.delete.handling.mode": "none"
}
}
```
## Relational Migrator: an intro
As we are discussing data migration from relational to MongoDB, it’s worth mentioning the MongoDB Relational Migrator. This is a tool designed natively by MongoDB to simplify the process of moving data from relational databases into MongoDB. Relational Migrator analyzes your relational schema and gives recommendations for mapping to a new MongoDB schema.
Its features — including schema analysis, data extraction, indexing, and validation — make it a valuable asset for organizations seeking to harness the benefits of MongoDB's NoSQL platform while preserving their existing relational data assets. Whether for application modernization, data warehousing, microservices, or big data analytics, this tool is a valuable asset for those looking to make the shift from relational to NoSQL databases. It helps to migrate from major relational database technologies including Oracle, SQL Server, MySQL, and PostgreSQL.
Get more information, download, and start using Relational Migrator.
## Conclusion
In the ever-evolving landscape of data management, MongoDB has emerged as a leading NoSQL database, known for its flexibility, scalability, and document-oriented structure. However, many organizations still rely on traditional relational databases to store their critical data. The challenge often lies in migrating data between these disparate systems efficiently and accurately.
Confluent Kafka provides great leverage in this context, with its event-driven architecture and native support for major database engines, including MongoDB Atlas. The source and sink connectors move data in and out through topics and act as a platform for a transparent, hassle-free migration from a relational database to a MongoDB Atlas cluster.
| md | {
"tags": [
"Atlas",
"Java",
"Kafka"
],
"pageDescription": "",
"contentType": "Tutorial"
} | Migrating PostgreSQL to MongoDB Using Confluent Kafka | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/csharp/vector-search-with-csharp-driver | created | # Adding MongoDB Atlas Vector Search to a .NET Blazor C# Application
When was the last time you could remember the rough details of something but couldn’t remember the name of it? That happens to quite a few people, so being able to search semantically instead of with exact text searches is really important.
This is where MongoDB Atlas Vector Search comes in useful. It allows you to perform semantic searches against vector embeddings in your documents stored inside MongoDB Atlas. Because the embeddings are stored inside Atlas, you can create the embeddings against any type of data, both structured and unstructured.
In this tutorial, you will learn how to add vector search with MongoDB Atlas Vector Search, using the MongoDB C# driver, to a .NET Blazor application. The Blazor application uses the sample_mflix database, available in the sample dataset anyone can load into their Atlas cluster. You will add support for searching semantically against the plot field, to find any movies that might fit the plot entered into the search box.
## Prerequisites
In order to follow along with this tutorial, you will need a few things in place before you start:
1. .NET 8 SDK installed on your computer
2. An IDE or text editor that can support C# and Blazor for the most seamless development experience, such as Visual Studio, Visual Studio Code with the C# DevKit Extension installed, or JetBrains Rider
3. An Atlas M0 cluster, our free forever tier, perfect for development
4. Your cluster connection string
5. A local copy of the Hugging Face Dataset Upload tool
6. A fork and clone of the See Sharp Movies GitHub repo that we will be adding search to
7. An OpenAI account and a free API key generated — you will use the OpenAI API to create a vector embedding for our search term
> Once you have forked and then cloned the repo and have it locally, you will need to add your connection string into ```appsettings.Development.json``` and ```appsettings.json``` in the placeholder section in order to connect to your cluster when running the project.
> If you don’t want to follow along, the repo has a branch called “vector-search” which has the final result implemented. However, you will need to ensure you have the embedded data in your Atlas cluster.
## Getting our embedded data into Atlas
The first thing you need is some data stored in your cluster that has vector embeddings available as a field in your documents. MongoDB has already provided a version of the movies collection from sample_mflix, called embedded_movies, which has 1500 documents, using a subset of the main movies collection which has been uploaded as a dataset to Hugging Face that will be used in this tutorial.
This is where the Hugging Face Dataset Uploader downloaded as part of the prerequisites comes in. By running this tool using ```dotnet run``` at the root of the project, and passing your connection string into the console when asked, it will go ahead and download the dataset from Hugging Face and then upload that into an ```embedded_movies``` collection inside the ```sample_mflix``` database. If you haven’t got the same dataset loaded so this database is missing, it will even just create it for you thanks to the C# driver!
You can generate vector embeddings for your own data using tools such as Hugging Face, OpenAI, LlamaIndex, and others. You can read more about generating embeddings using open-source models by reading a tutorial from Prakul Agarwal on Generative AI, Vector Search, and open-source models here on Developer Center.
## Creating the Vector Search index
Now you have a collection of movie documents with a ```plot_embedding``` field of vector embeddings for each document, it is time to create the Atlas Vector Search index. This is to enable vector search capabilities on the cluster and to let MongoDB know where to find the vector embeddings.
1. Inside Atlas, click “Browse Collections” to open the data explorer to view your newly loaded sample_mflix database.
2. Select the “Atlas Search” tab at the top.
3. Click the green “Create Search Index” button to load the index creation wizard.
4. Select JSON Editor under the Vector Search heading and then click “Next.”
5. Select the embedded_movies collection under sample_mflix from the left.
6. The name doesn’t matter hugely here, as long as you remember it for later but for now, leave it as the default value of ‘vector_index’.
7. Copy and paste the following JSON in, replacing the current contents of the box in the wizard:
```json
{
"fields":
{
"type": "vector",
"path": "plot_embedding",
"numDimensions": 1536,
"similarity": "dotProduct"
}
]
}
```
This contains a few fields you might not have seen before.
- path is the name of the field that contains the embeddings. In the case of the dataset from Hugging Face, this is plot_embedding.
- numDimensions refers to the dimensions of the model used.
- similarity refers to the type of function used to find similar results.
Check out the [Atlas Vector Search documentation to learn more about these configuration fields.
Click “Next” and on the next page, click “Create Search Index.”
After a couple of minutes, the vector search index will be set up, you will be notified by email, and the application will be ready to have vector search added.
## Adding the backend functionality
You have the data with plot embeddings and a vector search index created against that field, so it is time to start work on the application to add search, starting with the backend functionality.
### Adding OpenAI API key to appsettings
The OpenAI API key will be used to request embeddings from the API for the search term entered since vector search understands numbers and not text. For this reason, the application needs your OpenAI API key to be stored for use later.
1. Add the following into the root of your ```appsettings.Development.json``` and ```appsettings.json```, after the MongoDB section, replacing the placeholder text with your own key:
```json
"OpenAPIKey": ""
```
2. Inside ```program.cs```, after the creation of the var builder, add the following line of code to pull in the value from app config:
```csharp
var openAPIKey = builder.Configuration.GetValue<string>("OpenAPIKey");
```
3. Change the code that creates the MongoDBService instance to also pass in the `openAPIKey` variable. You will change the constructor of the class later to make use of this.
```csharp
builder.Services.AddScoped<IMongoDBService>(service => new MongoDBService(mongoDBSettings, openAPIKey));
```
### Adding a new method to IMongoDBService.cs
You will need to add a new method to the interface that supports search, taking in the term to be searched against and returning a list of movies that were found from the search.
Open ```IMongoDBService.cs``` and add the following code:
```csharp
public IEnumerable<Movie> MovieSearch(string textToSearch);
```
### Implementing the method in MongoDBService.cs
Now to make the changes to the implementation class to support the search.
1. Open ```MongoDBService.cs``` and add the following using statements to the top of the file:
```csharp
using System.Text;
using System.Text.Json;
```
2. Add the following new local variables below the existing ones at the top of the class:
```csharp
private readonly string _openAPIKey;
private readonly HttpClient _httpClient = new HttpClient();
```
3. Update the constructor to take the new openAPIKey string parameter, as well as the MongoDBSettings parameter. It should look like this:
```csharp
public MongoDBService(MongoDBSettings settings, string openAPIKey)
```
4. Inside the constructor, add a new line to assign the value of openAPIKey to _openAPIKey.
5. Also inside the constructor, update the collection name from “movies” to “embedded_movies” where it calls ```.GetCollection```.
The following is what the completed constructor should look like:
```csharp
public MongoDBService(MongoDBSettings settings, string openAPIKey)
{
_client = new MongoClient(settings.AtlasURI);
_mongoDatabase = _client.GetDatabase(settings.DatabaseName);
    _movies = _mongoDatabase.GetCollection<Movie>("embedded_movies");
_openAPIKey = openAPIKey;
}
```
### Updating the Movie model
The C# driver acts as an object document mapper (ODM), taking care of mapping between a plain old C# object (POCO) that is used in C# and the documents in your collection.
However, the existing movie model fields need updating to match the documents inside your embedded_movies collection.
Replace the contents of ```Models/Movie.cs``` with the following code:
```csharp
using MongoDB.Bson;
using MongoDB.Bson.Serialization.Attributes;
namespace SeeSharpMovies.Models;
public class Movie
{
    [BsonId]
[BsonElement("_id")]
public ObjectId Id { get; set; }
[BsonElement("plot")]
public string Plot { get; set; }
[BsonElement("genres")]
public string[] Genres { get; set; }
[BsonElement("runtime")]
public int Runtime { get; set; }
[BsonElement("cast")]
public string[] Cast { get; set; }
[BsonElement("num_mflix_comments")]
public int NumMflixComments { get; set; }
[BsonElement("poster")]
public string Poster { get; set; }
[BsonElement("title")]
public string Title { get; set; }
[BsonElement("fullplot")]
public string FullPlot { get; set; }
[BsonElement("languages")]
public string[] Languages { get; set; }
[BsonElement("directors")]
public string[] Directors { get; set; }
[BsonElement("writers")]
public string[] Writers { get; set; }
[BsonElement("awards")]
public Awards Awards { get; set; }
[BsonElement("year")]
public string Year { get; set; }
[BsonElement("imdb")]
public Imdb Imdb { get; set; }
[BsonElement("countries")]
public string[] Countries { get; set; }
[BsonElement("type")]
public string Type { get; set; }
[BsonElement("plot_embedding")]
public float[] PlotEmbedding { get; set; }
}
public class Awards
{
[BsonElement("wins")]
public int Wins { get; set; }
[BsonElement("nominations")]
public int Nominations { get; set; }
[BsonElement("text")]
public string Text { get; set; }
}
public class Imdb
{
[BsonElement("rating")]
public float Rating { get; set; }
[BsonElement("votes")]
public int Votes { get; set; }
[BsonElement("id")]
public int Id { get; set; }
}
```
This contains properties for all the fields in the document, as well as classes and properties representing subdocuments found inside the movie document, such as “awards” and “imdb.” You will also note the use of the BsonElement attribute, which tells the driver how to map between the field names and the property names due to their differing naming conventions.
### Adding an EmbeddingResponse model
It is almost time to start implementing the search on the back end. When calling the OpenAI API’s embedding endpoint, you will get back a lot of data, including the embeddings. The easiest way to handle this is to create an EmbeddingResponse.cs class that models this response for use later.
Add a new class called EmbeddingResponse inside the Model folder and replace the contents of the file with the following:
```csharp
namespace SeeSharpMovies.Models
{
public class EmbeddingResponse
{
public string @object { get; set; }
        public List<Data> data { get; set; }
public string model { get; set; }
public Usage usage { get; set; }
}
public class Data
{
public string @object { get; set; }
public int index { get; set; }
        public List<float> embedding { get; set; }
}
public class Usage
{
public int prompt_tokens { get; set; }
public int total_tokens { get; set; }
}
}
```
### Adding a method to request embeddings for the search term
It is time to make use of the API key for OpenAI and write functionality to create vector embeddings for the searched term by calling the [OpenAI API Embeddings endpoint.
Inside ```MongoDBService.cs```, add the following code:
```csharp
private async Task<List<float>> GetEmbeddingsFromText(string text)
{
    Dictionary<string, string> body = new Dictionary<string, string>
{
{ "model", "text-embedding-ada-002" },
{ "input", text }
};
_httpClient.BaseAddress = new Uri("https://api.openai.com");
_httpClient.DefaultRequestHeaders.Add("Authorization", $"Bearer {_openAPIKey}");
string requestBody = JsonSerializer.Serialize(body);
StringContent requestContent =
new StringContent(requestBody, Encoding.UTF8, "application/json");
var response = await _httpClient.PostAsync("/v1/embeddings", requestContent)
.ConfigureAwait(false);
if (response.IsSuccessStatusCode)
{
string responseBody = await response.Content.ReadAsStringAsync();
        EmbeddingResponse embeddingResponse = JsonSerializer.Deserialize<EmbeddingResponse>(responseBody);
        return embeddingResponse.data[0].embedding;
}
    return new List<float>();
}
```
The body dictionary is needed by the API to know the model used and what the input is. The text-embedding-ada-002 model is the default text embedding model.
### Implementing the SearchMovie function
The GetEmbeddingsFromText method returned the embeddings for the search term, so now it is available to be used by Atlas Vector Search and the C# driver.
Paste the following code to implement the search:
```csharp
public IEnumerable<Movie> MovieSearch(string textToSearch)
{
var vector = GetEmbeddingsFromText(textToSearch).Result.ToArray();
    var vectorOptions = new VectorSearchOptions<Movie>()
{
IndexName = "vector_index",
NumberOfCandidates = 150
};
var movies = _movies.Aggregate()
.VectorSearch(movie => movie.PlotEmbedding, vector, 150, vectorOptions)
        .Project<Movie>(Builders<Movie>.Projection
.Include(m => m.Title)
.Include(m => m.Plot)
.Include(m => m.Poster))
.ToList();
return movies;
}
```
> If you chose a different name when creating the vector search index earlier, make sure to update this line inside vectorOptions.
Vector search is available inside the C# driver as part of the aggregation pipeline. It takes four arguments: the field name with the embeddings, the vector embeddings of the searched term, the number of results to return, and the vector options.
Further methods are then chained on to specify what fields to return from the resulting documents.
Because the movie document has changed slightly, the current code inside the ```GetMovieById``` method is no longer correct.
Replace the current line that calls ```.Find``` with the following:
```csharp
var movie = _movies.Find(movie => movie.Id.ToString() == id).FirstOrDefault();
```
The back end is now complete and it is time to move on to the front end, adding the ability to search on the UI and sending that search back to the code we just wrote.
## Adding the frontend functionality
The frontend functionality will be split into two parts: the code in the front end for talking to the back end, and the search bar in HTML for typing into.
### Adding the code to handle search
As this is an existing application, there is already code available for pulling down the movies and even pagination. This is where you will be adding the search functionality, and it can be found inside ```Home.razor``` in the ```Components/Pages``` folder.
1. Inside the ```@code``` block, add a new string variable for searchTerm:
```csharp
string searchTerm;
```
2. Paste the following new method into the code block:
```csharp
private void SearchMovies()
{
if (string.IsNullOrWhiteSpace(searchTerm))
{
movies = MongoDBService.GetAllMovies();
}
else
{
movies = MongoDBService.MovieSearch(searchTerm);
}
}
```
This is quite straightforward. If the searchTerm string is empty, then show everything. Otherwise, search on it.
### Adding the search bar
Adding the search bar is really simple. It will be added to the header component already present on the home page.
Replace the existing header tag with the following HTML:
```html
<header>
    <h1>See Sharp Movies</h1>
    <div class="search-bar">
        <input type="text" @bind="searchTerm" placeholder="Search..." />
        <button @onclick="SearchMovies">Search</button>
    </div>
</header>
```
This creates a search input with its value bound to the searchTerm string, and a button that, when clicked, calls the SearchMovies method you just created.
### Making the search bar look nicer
At this point, the functionality is implemented. But if you ran it now, the search bar would be in a strange place in the header, so let’s fix that, just for prettiness.
Inside ```wwwroot/app.css```, add the following code:
```css
.search-bar {
padding: 5%;
}
.search-bar button {
padding: 4px;
}
```
This just gives the search bar and the button a bit of padding to make it position more nicely within the header. Although it’s not perfect, CSS is definitely not my strong suit. C# is my favorite language!
## Testing the search
Woohoo! We have the backend and frontend functionality implemented, so now it is time to run the application and see it in action!
Run the application, enter a search term in the box, click the “Search” button, and see what movies have plots semantically close to your search term.
*Movie results with plots similar to “three young men and a sword.”*
## Summary
Amazing! You now have a working Blazor application with the ability to search the plot by meaning instead of exact text. This is also a great starting point for implementing more vector search capabilities into your application.
If you want to learn more about Atlas Vector Search, you can read our documentation.
MongoDB also has a space on Hugging Face where you can see some further examples of what can be done and even play with it. Give it a go!
There is also an amazing article on using Vector Search for audio co-written by Lead Developer Advocate at MongoDB Pavel Duchovny.
If you have questions or feedback, join us in the Community Forums.
| md | {
"tags": [
"C#",
".NET"
],
"pageDescription": "Learn how to get started with Atlas Vector Search in a .NET Blazor application with the C# driver, including embeddings and adding search functionality.\n",
"contentType": "Tutorial"
} | Adding MongoDB Atlas Vector Search to a .NET Blazor C# Application | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/modernizing-rdbms-schemas-mongodb-document | created | # Modernizing RDBMS Schemas With a MongoDB Document Model
Welcome to the exciting journey of transitioning from the traditional realm of relational databases to the dynamic world of MongoDB! This is the first entry in a series of tutorials helping you migrate from relational databases to MongoDB. Buckle up as we embark on a thrilling adventure filled with schema design, data modeling, and the wonders of the document model. Say goodbye to the rigid confines of tables and rows, and hello to the boundless possibilities of collections and documents. In this tutorial, we'll unravel the mysteries of MongoDB's schema design, exploring how to harness its flexibility to optimize your data storage like never before, using the Relational Migrator tool!
The migration from a relational database to MongoDB involves several stages. Once you've determined your database and application requirements, the initial step is schema design. This process involves multiple steps, all centered around how you intend to access your data. In MongoDB, data accessed together should be stored together. Let's delve into the schema design process.
## Schema design
The most fundamental difference between the world of relational databases and MongoDB is how your data is modeled. There are some terminology changes to keep in mind when moving from relational databases to MongoDB:
| RDBMS | MongoDB |
|--------------------|--------------------------------------------------|
| Database | Database |
| Table | Collection |
| Row | Document |
| Column | Field |
| Index | Index |
| JOIN | Embedded document, document references, or $lookup to combine data from different collections |
Transitioning from a relational database to MongoDB offers several advantages due to the flexibility of JSON (JavaScript Object Notation) documents. MongoDB's BSON (Binary JSON) encoding extends JSON's capabilities by including additional data types like int, decimal, dates, and more, making it more efficient for representing complex data structures.
Documents in MongoDB, with features such as sub-documents (embedded documents) and arrays, align well with the structure of application-level objects. This alignment simplifies data mapping for developers, as opposed to the complexities of mapping object representations to tabular structures in relational databases, which can slow down development, especially when using Object Relational Mappers (ORMs).
When designing schemas for MongoDB, it's crucial to consider the application's requirements and leverage the document model's flexibility. While mirroring a relational database's flat schema in MongoDB might seem straightforward, it undermines the benefits of MongoDB's embedded data structures. For instance, MongoDB allows collapsing (embedding) data belonging to a parent-child relationship in relational databases into a single document, enhancing efficiency and performance. It's time to introduce a powerful tool that will streamline your transition from relational databases to MongoDB: the Relational Migrator.
### Relational Migrator
The transition from a relational database to MongoDB is made significantly smoother with the help of the Relational Migrator. The first step in this process is a comprehensive analysis of your existing relational schema. The Relational Migrator examines your database, identifying tables, relationships, keys, and other elements that define the structure and integrity of your data. You can connect to a live database or load a .SQL file containing Data Definition Language (DDL) statements. For this tutorial, I’m just going to use the sample schema available when you click **create new project**.
The first screen you’ll see is a diagram of your relational database relationships. This lays the groundwork by providing a clear picture of your current data model, which is instrumental in devising an effective migration strategy. By understanding the intricacies of your relational schema, the Relational Migrator can make informed suggestions on how to best transition this structure into MongoDB's document model.
In MongoDB, data that is accessed together should be stored together. This lets you avoid resource-intensive `$lookup` operations where they aren't necessary. Evaluate whether to embed or reference data based on how it's accessed and updated. Remember, embedding can significantly speed up read operations but might complicate updates if the embedded data is voluminous or frequently changed. Use the Relational Migrator's suggestions as a starting point but remain flexible. Not every recommendation will be applicable, especially as you project data growth and access patterns into the future.
You may be stuck, staring at the daunting representation of your tables, wondering how to reduce this to a manageable number of collections that best meets your needs. Select any collection to see a list of suggestions for how to represent your data using embedded arrays or documents. Relational Migrator will show all the relationships in your database and how you can represent them in MongoDB, but they might not all be appropriate for your application. In my example, I have selected the products collection.
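For instance, one common suggestion is to collapse a child table into an embedded array on its parent. The names below are illustrative rather than taken from the sample schema, but they show the shape of the result: an `orders` row and its related `order_items` rows become a single document.

```json
{
  "_id": 1001,
  "orderDate": "2024-03-01",
  "customerId": 42,
  "items": [
    { "productId": "A1", "quantity": 2, "price": 9.99 },
    { "productId": "B7", "quantity": 1, "price": 24.50 }
  ]
}
```

Because the line items live inside the order, the application can read or write the whole order in a single operation, which is the "data accessed together is stored together" principle in practice.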
Use the migrator’s suggestions to iteratively refine your new schema; they are a useful starting point, but not all of them will make sense for your application.
The MongoDB data modeling documentation contains all the information you need.
### Data modeling templates
It can be difficult to understand how best to store your data in your application, especially if you’re new to MongoDB. MongoDB Atlas offers a variety of data modeling templates that are designed to demonstrate best practices for various use cases. To find them, go to your project overview and you'll see the "Data Toolkit." Under this header, click the "Data Modeling Templates." These templates are there to serve as a good starting point to demonstrate best practices depending on how you plan on interacting with your data.
Check out these templates to get started, or pop over to our community forums to see what others are doing with MongoDB.
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt7e10c8890fd54c23/65e8774877faff0e5a5a5cb8/image8.png
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt06b47b571be972b5/65e877485fd476466274f9ba/image5.png
[3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt6119b9e0d49180e0/65e877478b9c628cfa46feed/image1.png
[4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt5542e55fba7a3659/65e8774863ec424da25d87e0/image4.png
[5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt4e6f30cb08ace8c0/65e877478b9c62d66546fee9/image2.png
[6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt03ad163bd859e14c/65e877480395e457c2284cd8/image3.png
[7]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt1343d17dd36fd642/65e8774803e4602da8dc3870/image7.png
[8]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt68bfa90356269858/65e87747105b937781a86cca/image6.png | md | {
"tags": [
"MongoDB"
],
"pageDescription": "Move from a relational database to MongoDB, and learn to use the document model.",
"contentType": "Tutorial"
} | Modernizing RDBMS Schemas With a MongoDB Document Model | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/products/atlas/query-analytics-part-2 | created | # Query Analytics Part 2: Tuning the System
In Part 1: Know Your Queries][1], we demonstrated the importance of monitoring and tuning your search system and the dramatic effect it can have on your business. In this second part, we are going to delve into the technical techniques available to tune and adjust based on your query result analysis.
> [Query Analytics][2] is available in public preview for all MongoDB Atlas clusters on an M10 or higher running MongoDB v5.0 or higher to view the analytics information for the tracked search terms in the Atlas UI. Atlas Search doesn't track search terms or display analytics for queries on free and shared-tier clusters.
[Atlas Search Query Analytics][3] focuses entirely on the frequency and number of results returned from each $search call. There are also a number of search metrics available for operational monitoring, including CPU, memory, index size, and other useful data points.
# Insightful actions
There are a few big categories of actions we can take based on search query analysis insights, which are not mutually exclusive and often work in synergy with one another.
## User experience
Let’s start with the user experience itself, from the search box down to the results presentation. You’re following up on a zero-results query: What did the user experience when this occurred? Are you only showing something like “Sorry, nothing found. Try again!”? Consider showing documents the user has previously engaged with, providing links, or automatically searching for looser queries, perhaps removing some of the user's query terms and asking, “Did you mean this?” While the user was typing the query, are you providing autosuggest/typeahead so that typos get corrected in the full search query?
For queries that return results, is there enough information provided in the user interface to allow the user to refine the results?
Consider these improvements:
* Add suggestions as the user is typing, which can be facilitated by leveraging ngrams via the autocomplete operator or building a specialized autocomplete collection and index for this purpose (a minimal query sketch follows this list).
* Add faceted navigation, allowing the user to drill into specific categories and narrow the results shown.
* Provide moreLikeThis queries to broaden results.
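As a minimal sketch, assuming a `title` field mapped with the `autocomplete` type in your search index, an as-you-type query could look like this, with `fuzzy` absorbing small typos:

```json
{
  "$search": {
    "autocomplete": {
      "query": "jack",
      "path": "title",
      "fuzzy": { "maxEdits": 1 }
    }
  }
}
```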
## Query construction
How the queries are constructed is half the trick to getting great search results. (The other half is how your content is indexed.) The search terms the user entered are the key to the Query Analytics tracking, but behind the scenes, there’s much more to the full search request.
Your user interface provides the incoming search terms and likely, additional parameters. It’s up to the application tier to construct the $search-using aggregation pipeline from those parameters.
Here are some querying techniques that can influence the quality of the search results:
* Incorporate synonyms, perhaps in a relevancy-weighted fashion where non-synonymed clauses are boosted higher than clauses with synonyms added.
* Leverage compound.should clauses to allow the underlying relevancy computations to work their magic. Spreading query terms across multiple fields — with independent scoring boosts representing the importance, or weight, of each field — allows the best documents to bubble up higher in the results but still provides all matching documents to be returned. For example, a query of “the matrix” in the movies collection would benefit from boosting `title` higher than `plot` (see the sketch after this list).
* Use multi-analysis querying. Take advantage of a field being analyzed in multiple ways. Boost exact matches highest, and have less exact and fuzzier matches weighted lower. See the “Index configuration” section below.
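Here is a minimal sketch of the “the matrix” example, assuming a movies collection whose `title` and `plot` fields are covered by a search index named `default`:

```json
{
  "$search": {
    "index": "default",
    "compound": {
      "should": [
        {
          "text": {
            "query": "the matrix",
            "path": "title",
            "score": { "boost": { "value": 3 } }
          }
        },
        {
          "text": {
            "query": "the matrix",
            "path": "plot"
          }
        }
      ]
    }
  }
}
```

A title match contributes three times the score of a plot-only match, so films actually titled “The Matrix” rise to the top while plot-only matches still appear further down the results.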
## Index configuration
Index configuration is the other half of great search results and relies on how the underlying search indexes are built from your documents. Here are some index configuration techniques to consider:
* Multi-analysis: Configure your main content fields to be analyzed/tokenized in various ways, ranging from exact (`token` type) to near-exact (lowercased `token`, diacritics normalized) to standard tokenized (whitespace and special characters ignored) to language-specific analysis, down to fuzzy.
* Language considerations: If you know the language of the content, use that to your advantage by using the appropriate language analyzer. Consider doing this in a multi-analysis way so that at query time, you can incorporate language-specific considerations into the relevancy computations.
We’re going to highlight a few common Atlas Search-specific adjustments to consider.
## Adding synonyms
Why didn’t “Jacky Chan” match any of the numerous movies that should have matched? First of all, his name is spelled “Jackie Chan,” so the user made a spelling mistake and we have no exact match of the misspelled name. (This is where $match will always fail, and a fuzzier search option is needed.) It turns out our app was doing `phrase` queries. We loosened this by adding some additional `compound.should` clauses using a fuzzy `text` operator, and we also added a “jacky”/“jackie” synonym equivalency for good measure. By making these changes, over time, we will see the number of occurrences for “Jacky Chan” in the “Tracked Queries with No Results” list go down.
The `text` operator provides query-time synonym expansion. Synonyms can be bi-directional or uni-directional. Bi-directional synonyms are called `equivalent` in Atlas Search synonym mappings — for example, “car,” “automobile,” and “vehicle” — so a query containing any one of those terms would match documents containing any of the other terms as well. These words are “equivalent” because they can all be used interchangeably. Uni-directional synonyms are `explicit` mappings — say, “animal” -> “dog” and “animal” -> “cat” — such that a query for “animal” will match documents with “cat” or “dog,” but a query for “dog” will match only just that: “dog.”
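As a sketch of what these mappings look like in practice (the values here are illustrative), synonyms are stored as documents in a separate collection in the same database:
```
{ "mappingType": "equivalent", "synonyms": ["jacky", "jackie"] }
{ "mappingType": "explicit", "input": ["animal"], "synonyms": ["dog", "cat"] }
```
That collection is referenced by a named synonym mapping in the search index definition, and the `text` operator opts in by passing that mapping name in its `synonyms` option.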
## Enhancing query construction
Using a single operator, like `text` over a wildcard path, facilitates findability (“recall” in information retrieval speak) but does not help with *relevancy* where the best matching documents bubble to the top of the results. An effective way to improve relevancy is to add variously boosted clauses to weight some fields higher than others.
It’s generally a good idea to include a `text` operator within a `compound.should` to allow for synonyms to come into play (the `phrase` operator currently does not support synonym expansion) along with additional `phrase` clauses that more precisely match what the user typed. Add `fuzzy` to the `text` operator to match in spite of slight typos/variations of words.
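As a sketch of that combination (field names, values, and boosts are illustrative), a precise `phrase` clause can sit alongside a forgiving fuzzy `text` clause:
```
"compound": {
  "should": [
    {
      "phrase": {
        "query": "jackie chan",
        "path": "cast",
        "score": { "boost": { "value": 3.0 } }
      }
    },
    {
      "text": {
        "query": "jackie chan",
        "path": "cast",
        "fuzzy": { "maxEdits": 1 }
      }
    }
  ]
}
```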
You may note that Search Tester currently goes really *wild* with a wildcard `*` path to match across all textually analyzed fields; consider the field(s) that really make the most sense to be searched, and whether separate boosts should be assigned to them for fine-tuning relevancy. Using a `*` wildcard is not going to give you the best relevancy because each field has the same boost weight. It can cause objectively bad results to get higher relevancy than they should. Further, a wildcard’s performance is impacted by how many fields you have across your entire collection, which may increase as you add documents.
As an example, let’s suppose our search box powers movie search. Here’s what a relevancy-educated first pass looks like for a query of “purple rain,” generated from our application, first in prose: Consider query term (OR’d) matches in `title`, `cast`, and `plot` fields, boosting matches in those fields in that order, and let’s boost it all the way up to 11 when the query matches a phrase (the query terms in sequential order) of any of those fields.
Now, in Atlas $search syntax, the main query operator becomes a `compound` of a handful of `should`s with varying boosts:
```
"compound": {
"should":
{
"text": {
"query": "purple rain",
"path": "title",
"score": {
"boost": {
"value": 3.0
}
}
}
},
{
"text": {
"query": "purple rain",
"path": "cast",
"score": {
"boost": {
"value": 2.0
}
}
}
},
{
"text": {
"query": "purple rain",
"path": "plot",
"score": {
"boost": {
"value": 1.0
}
}
}
},
{
"phrase": {
"query": "purple rain",
"path": [
"title",
"phrase",
"cast"
],
"score": {
"boost": {
"value": 11.0
}
}
}
}
]
}
```
Note the duplication of the user’s query in numerous places in that $search stage. This deserves a little bit of coding on your part: parameterize values and provide easy top-of-the-code or config-file adjustments to these boosting values, field names, and so on, to make creating these richer query clauses straightforward in your environment.
This kind of spreading a query across independently boosted fields is the first key to unlocking better relevancy in your searches. The next key is to query with different analyses, allowing various levels of exactness to fuzziness to have independent boosts, and again, these could be spread across differently weighted paths of fields.
The next section details creating multiple analyzers for fields; imagine plugging those into the `path`s of another bunch of `should` clauses! Yes, you can get carried away with this technique, though you should start simple. Often, boosting fields independently and appropriately for your domain is all one needs for Pretty Good Findability and Relevancy.
## Field analysis configuration
How your data is indexed determines whether, and how, it can be matched with queries, and thus affects the results your users experience. Adjusting field index configuration could change a search request from finding no documents to matching as expected (or vice versa!). Your index configuration is always a work in progress, and Query Analytics can help track that progress. It will evolve as your querying needs change.
If you’ve set up your index entirely with dynamic mappings, you’re off to a great start! You’ll be able to query your fields in data type-specific ways — numerically, by date ranges, filtering and matching, even regexing on string values. Most interesting is the query-ability of analyzed text. String field values are _analyzed_. By default, in dynamic mapping settings, each string field is analyzed using the `lucene.standard` analyzer. This analyzer does a generally decent job of splitting full-text strings into searchable terms (i.e., the “words” of the text). This analyzer doesn’t do any language-specific handling. So, for example, the words “find,” “finding,” and “finds” are all indexed as unique terms with standard/default analysis but would be indexed as the same stemmed term when using `lucene.english`.
### What’s in a word?
Applying some domain- and data-specific knowledge, we can fine-tune how terms are indexed and thus how easily findable and relevant they are to the documents. Knowing that our movie `plot` is in English, we can switch the analyzer to `lucene.english`, opening up the findability of movies with queries that come close to the English words in the actual `plot`. Atlas Search has over 40 language-specific analyzers available.
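A sketch of that index mapping change (keeping dynamic mappings for everything else) might look like this:
```
"mappings": {
  "dynamic": true,
  "fields": {
    "plot": {
      "type": "string",
      "analyzer": "lucene.english"
    }
  }
}
```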
### Multi-analysis
Query Analytics will point you to underperforming queries, but it’s up to you to make adjustments. To emphasize an important point that is being reiterated here in several ways, how your content is indexed affects how it can be queried, and the combination of both how content is indexed and how it is queried controls the order in which results are returned (also referred to as relevancy). One really useful technique available with Atlas Search is called Multi Analyzer, empowering each field to be indexed using any number of analyzer configurations. Each of these configurations is indexed independently (its own inverted index, term dictionary, and all that).
For example, we could index the title field for autocomplete purposes, and we could also index it as English text, then phonetically. We could also use our custom defined analyzer (see below) for term shingling, as well as our index-wide analyzer, defaulting to `lucene.standard` if not specified.
```
"title":
{
"foldDiacritics": false,
"maxGrams": 7,
"minGrams": 3,
"tokenization": "nGram",
"type": "autocomplete"
},
{
"multi": {
"english": {
"analyzer": "lucene.english",
"type": "string"
},
"phonetic": {
"analyzer": "custom.phonetic",
"type": "string"
},
"shingles": {
"analyzer": "custom.shingles",
"type": "string"
}
},
"type": "string"
}
```
As they are indexed independently, they are also queryable independently. With this configuration, titles can be queried phonetically (“kat in the hat”), using English-aware stemming (“find nemo”), or with shingles (such that “the purple rain” queries can create “purple rain” phrase queries).
Explore the available built-in analyzers and give multi-indexing and querying a try. Sometimes, a little bit of custom analysis can really do the trick, so keep that technique in mind as a potent way to improve findability and relevancy. Here are our `custom.shingles` and `custom.phonetic` analyzer definitions, but please don’t blindly copy this. Make sure you’re testing and understanding these adjustments as they relate to your data and types of queries:
```
"analyzers":
{
"charFilters": [],
"name": "standard.shingles",
"tokenFilters": [
{
"type": "lowercase"
},
{
"maxShingleSize": 3,
"minShingleSize": 2,
"type": "shingle"
}
],
"tokenizer": {
"type": "standard"
}
},
{
"name": "phonetic",
"tokenFilters": [
{
"originalTokens": "include",
"type": "daitchMokotoffSoundex"
}
],
"tokenizer": {
"type": "standard"
}
}
]
```
Querying will naturally still use the inverted index set up as the default for a field, unless the path specifies a “multi”.
A straightforward example of querying the `custom.phonetic`-analyzed multi (named `phonetic` in our index definition above) looks like this:
```
$search: {
"text": {
"query": "kat in the hat",
"path": { "value": "title", "multi": "custom.phonetic" }
}
}
```
Now, imagine combining this “multi” analysis with variously boosted `compound.should` clauses to achieve fine-grained findability and relevancy controls that are as nuanced as your domain deserves.
Relevancy tuning pro-tip: Use a few clauses, one per multi-analyzed field, to boost from most exact (best!) to less exact, down to fuzzy matching as needed.
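As a sketch of that pro-tip against the `title` field and the multi names defined above (the query text and boost values are illustrative):
```
"compound": {
  "should": [
    {
      "text": {
        "query": "finding nemo",
        "path": "title",
        "score": { "boost": { "value": 5.0 } }
      }
    },
    {
      "text": {
        "query": "finding nemo",
        "path": { "value": "title", "multi": "english" },
        "score": { "boost": { "value": 3.0 } }
      }
    },
    {
      "text": {
        "query": "finding nemo",
        "path": { "value": "title", "multi": "phonetic" },
        "score": { "boost": { "value": 1.0 } }
      }
    }
  ]
}
```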
All of these tricks — language analysis, stemming, a fuzzy parameter to match words that are close but not quite right, and spreading query terms across multiple fields — are useful tools.
# Tracking Atlas Search queries
How do you go about incorporating Atlas Search Query Analytics into your application? It’s a fairly straightforward process of adding a small “tracking” section to your $search stage.
Queries containing the `tracking.searchTerms` structure are tracked (note the cluster-tier requirements mentioned earlier):
```
{
$search: {
"tracking": {
"searchTerms": ""
}
}
}
```
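For context, a sketch of a complete stage that both queries and tracks (the index name and paths are illustrative) could look like this:
```
{
  "$search": {
    "index": "default",
    "text": {
      "query": "purple rain",
      "path": ["title", "plot"]
    },
    "tracking": {
      "searchTerms": "purple rain"
    }
  }
}
```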
In Java, the tracking SearchOptions are constructed like this:
```
SearchOptions opts = SearchOptions.searchOptions()
.option("scoreDetails", BsonBoolean.TRUE)
.option("tracking", new Document("searchTerms", query_string));
```
If you’ve got a straightforward search box and that’s the only input provided for a search query, that query string is the best fit for the `searchTerms` value. In some cases, the query to track is more complicated or deserves more context. In doing some homework for this article, we met with one of our early adopters of the Query Analytics feature who was using tracking codes for the `searchTerms` value, corresponding to another collection containing the full query context, such as a list of IP addresses being used for network intrusion detection.
A simple addition of this tracking information opens the door to a greater understanding of the queries happening in your search system.
# Conclusion
The specific adjustments that work best for your particular query challenges are where the art of this craft comes into play. There are many ways to improve a particular query’s results. We’ve shown several techniques to consider here. The main takeaways:
* Search is the gateway used to drive revenue, research, and engage users.
* Know what your users are experiencing, and use that insight to iterate improvements.
* Matching fuzzily and ranking results by relevancy are both an art and a science, and there are many options.
Atlas Search Query Analytics is a good first step in the virtuous search query management process.
Want to continue the conversation? Head over to the MongoDB Developer Community Forums!
[1]: https://www.mongodb.com/developer/products/atlas/query-analytics-part-1/
[2]: https://www.mongodb.com/docs/atlas/atlas-search/view-query-analytics/
[3]: https://www.mongodb.com/docs/atlas/atlas-search/view-query-analytics/ | md | {
"tags": [
"Atlas"
],
"pageDescription": "Techniques to tune and adjust search results based on Query Analytics",
"contentType": "Article"
} | Query Analytics Part 2: Tuning the System | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/php/laravel-mongodb-4-2-released-laravel-11-support | created | # Laravel MongoDB 4.2 Released, With Laravel 11 Support
The PHP team is happy to announce that version 4.2 of the Laravel MongoDB integration is now available!
## Highlights
**Laravel 11 support**
The MongoDB Laravel integration now supports Laravel 11, ensuring compatibility with the latest framework version and enabling developers to leverage its new features and enhancements. To apply transformations to model attributes, the new recommended approach is to declare the Model::casts() method.
**Fixed transaction issue with firstOrCreate()**
Previously, using firstOrCreate() in a transaction would result in an error. This problem has been resolved by implementing the underlying Model::createOrFirst() method with the atomic operation findOneAndUpdate.
**Support for whereAll and whereAny**
The library now supports the new methods whereAll and whereAny, introduced in Laravel 10.47.
## Installation
This library may be installed or upgraded with:
```
composer require mongodb/laravel-mongodb:4.2.0
```
## Resources
Documentation and other resources to get you started with Laravel and MongoDB databases are shared below:
- Laravel MongoDB documentation
- Quick Start with MongoDB and Laravel
- Release notes
Give it a try today and let us know what you think! Please report any ideas, bugs, or feedback in the GitHub repository or the PHPORM Jira project, as we continue to improve and enhance the integration. | md | {
"tags": [
"PHP"
],
"pageDescription": "",
"contentType": "News & Announcements"
} | Laravel MongoDB 4.2 Released, With Laravel 11 Support | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/products/atlas/java-spring-boot-vector-search | created | # Unlocking Semantic Search: Building a Java-Powered Movie Search Engine with Atlas Vector Search and Spring Boot
In the rapidly evolving world of technology, the quest to deliver more relevant, personalized, and intuitive search results has led to the rise in popularity of semantic search.
MongoDB's Vector Search allows you to search your data semantically, making it possible to search by meaning, not just keyword matching.
In this tutorial, we'll delve into how we can build a Spring Boot application that can perform a semantic search on a collection of movies by their plot descriptions.
## What we'll need
Before you get started, there are a few things you'll need.
- Java 11 or higher
- Maven or Gradle, but this tutorial will reference Maven
- Your own MongoDB Atlas account
- An OpenAI account, to generate our embeddings
## Set up your MongoDB cluster
Visit the MongoDB Atlas dashboard and set up your cluster. In order to take advantage of the `$vectorSearch` operator in an aggregation pipeline, you need to run MongoDB Atlas 6.0.11 or higher.
Selecting your MongoDB Atlas version is available at the bottom of the screen when configuring your cluster under "Additional Settings."
For this project, we're going to use the sample data MongoDB provides. When you first log into the dashboard, you will see an option to load sample data into your database.
Alternatively, you can set up a trigger in your database to automatically embed your data.
## Create a Vector Search Index
In order to use the `$vectorSearch` operator on our data, we need to set up an appropriate search index. Select the "Search" tab on your cluster and click the "Create Search Index."
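If you prefer pasting a JSON definition instead of using the visual editor, a minimal sketch of a Vector Search index for this tutorial could look like the following. The `numDimensions` value of 1536 matches OpenAI's `text-embedding-ada-002` model used later in this tutorial, the `cosine` similarity function is an assumption that works well with these embeddings, and the index should be named `PlotVectorSearch` so it matches the code below.
```json
{
  "fields": [
    {
      "type": "vector",
      "path": "plot_embedding",
      "numDimensions": 1536,
      "similarity": "cosine"
    }
  ]
}
```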
See the Atlas Vector Search documentation for more information on these configuration settings.
## Setting up a Spring Boot project
To set up our project, let's use the Spring Initializr. This will generate our **pom.xml** file which will contain our dependencies for our project.
For this project, you want to select the options in the screenshot below, and create a JAR:
Feel free to use a more up-to-date driver version in order to make use of the most recent features, such as the `vectorSearch()` method. You will also notice that throughout this application we use the MongoDB Java Reactive Streams driver. This is because we are creating an asynchronous API. AI operations like generating embeddings can be compute-intensive and time-consuming. An asynchronous API allows these tasks to be processed in the background, freeing up the system to handle other requests or operations simultaneously. Now, let’s get to coding!
To represent our document in Java, we will use Plain Old Java Objects (POJOs). The data we're going to handle are the documents from the sample data you just loaded into your cluster. For each document and subdocument, we need a POJO. MongoDB documents bear a lot of resemblance to POJOs already and are straightforward to set up using the MongoDB driver.
In the main document, we have three subdocuments: `Imdb`, `Tomatoes`, and `Viewer`. Thus, we will need four POJOs for our `Movie` document.
We first need to create a package called `com.example.mdbvectorsearch.model` and add our class `Movie.java`.
We use the `@BsonProperty("_id")` to assign our `_id` field in JSON to be mapped to our `Id` field in Java, so as to not violate Java naming conventions.
```java
public class Movie {
@BsonProperty("_id")
private ObjectId Id;
private String title;
private int year;
private int runtime;
private Date released;
private String poster;
private String plot;
private String fullplot;
private String lastupdated;
private String type;
private List<String> directors;
private Imdb imdb;
private List<String> cast;
private List<String> countries;
private List<String> genres;
private Tomatoes tomatoes;
private int num_mflix_comments;
private String plot_embeddings;
// Getters and setters for Movie fields
}
```
Add another class called `Imdb`.
```java
public static class Imdb {
private double rating;
private int votes;
private int id;
// Getters and setters for Imdb fields
}
```
Yet another called `Tomatoes`.
```java
public static class Tomatoes {
private Viewer viewer;
private Date lastUpdated;
// Getters and setters for Tomatoes fields
}
```
And finally, `Viewer`.
```java
public static class Viewer {
private double rating;
private int numReviews;
// Getters and setters for Viewer fields
}
```
> Tip: For creating the getters and setters, many IDEs have shortcuts.
### Connect to your database
In your main file, set up a package `com.example.mdbvectorsearch.config` and add a class, `MongodbConfig.java`. This is where we will connect to our database, and create and configure our client. If you're used to using Spring Data MongoDB, a lot of this is usually obfuscated. We are doing it this way to take advantage of some of the latest features of the MongoDB Java driver to support vectors.
From the MongoDB Atlas interface, we'll get our connection string and add this to our `application.properties` file. We'll also specify the name of our database here.
```
mongodb.uri=mongodb+srv://<username>:<password>@<cluster>.mongodb.net/
mongodb.database=sample_mflix
```
Now, in your `MongodbConfig` class, import these values, and denote this as a configuration class with the annotation `@Configuration`.
```java
@Configuration
public class MongodbConfig {
@Value("${mongodb.uri}")
private String MONGODB_URI;
@Value("${mongodb.database}")
private String MONGODB_DATABASE;
```
Next, we need to create a Client and configure it to handle the translation to and from BSON for our POJOs. Here we configure a `CodecRegistry` to handle these conversions, and use a default codec as they are capable of handling the major Java data types. We then wrap these in a `MongoClientSettings` and create our `MongoClient`.
```java
@Bean
public MongoClient mongoClient() {
CodecRegistry pojoCodecRegistry = CodecRegistries.fromRegistries(
MongoClientSettings.getDefaultCodecRegistry(),
CodecRegistries.fromProviders(
PojoCodecProvider.builder().automatic(true).build()
)
);
MongoClientSettings settings = MongoClientSettings.builder()
.applyConnectionString(new ConnectionString(MONGODB_URI))
.codecRegistry(pojoCodecRegistry)
.build();
return MongoClients.create(settings);
}
```
Our last step will then be to get our database, and we're done with this class.
```java
@Bean
public MongoDatabase mongoDatabase(MongoClient mongoClient) {
return mongoClient.getDatabase(MONGODB_DATABASE);
}
}
```
### Embed your data with the OpenAI API
We are going to send the prompt given from the user to the OpenAI API to be embedded.
An embedding is a series (vector) of floating point numbers. The distance between two vectors measures their relatedness. Small distances suggest high relatedness and large distances suggest low relatedness.
This will transform our natural language prompt, such as `"Toys that come to life when no one is looking"`, to a large array of floating point numbers that will look something like this: `[-0.012670076, -0.008900887, ..., 0.0060262447, -0.031987168]`.
In order to do this, we need to create a few files. All of our code to interact with OpenAI will be contained in our `OpenAIService.java` class and go to `com.example.mdbvectorsearch.service`. The `@Service` at the top of our class dictates to Spring Boot that this belongs to this service layer and contains business logic.
```java
@Service
public class OpenAIService {
private static final String OPENAI_API_URL = "https://api.openai.com";
@Value("${openai.api.key}")
private String OPENAI_API_KEY;
private WebClient webClient;
@PostConstruct
void init() {
this.webClient = WebClient.builder()
.clientConnector(new ReactorClientHttpConnector())
.baseUrl(OPENAI_API_URL)
.defaultHeader("Content-Type", MediaType.APPLICATION_JSON_VALUE)
.defaultHeader("Authorization", "Bearer " + OPENAI_API_KEY)
.build();
}
public Mono<List<Double>> createEmbedding(String text) {
Map<String, Object> body = Map.of(
"model", "text-embedding-ada-002",
"input", text
);
return webClient.post()
.uri("/v1/embeddings")
.bodyValue(body)
.retrieve()
.bodyToMono(EmbeddingResponse.class)
.map(EmbeddingResponse::getEmbedding);
}
}
```
We use the Spring WebClient to make the calls to the OpenAI API. We then create the embeddings. To do this, we pass in our text and specify our embedding model (e.g., `text-embedding-ada-002`). You can read more about the OpenAI API parameter options in their docs.
To pass in and receive the data from the OpenAI API, we need to specify our models for the data being received. We're going to add two models to our `com.example.mdbvectorsearch.model` package, `EmbeddingData.java` and `EmbeddingResponse.java`.
```java
public class EmbeddingData {
private List<Double> embedding;
public List<Double> getEmbedding() {
return embedding;
}
public void setEmbedding(List<Double> embedding) {
this.embedding = embedding;
}
}
```
```java
public class EmbeddingResponse {
private List<EmbeddingData> data;
public List<Double> getEmbedding() {
return data.get(0).getEmbedding();
}
public List<EmbeddingData> getData() {
return data;
}
public void setData(List<EmbeddingData> data) {
this.data = data;
}
}
```
### Your vector search aggregation pipeline in Spring Boot
We have our database. We are able to embed our data. We are ready to send and receive our movie documents. How do we actually perform our semantic search?
The data access layer of our API implementation takes place in the repository. Create a package `com.example.mdbvectorsearch.repository` and add the interface `MovieRepository.java`.
```java
public interface MovieRepository {
Flux<Movie> findMoviesByVector(List<Double> embedding);
}
```
Now, we implement the logic for our `findMoviesByVector` method in the implementation of this interface. Add a class `MovieRepositoryImpl.java` to the package. This method implements the data logic for our application and takes the embedding of user's inputted text, embedded using the OpenAI API, then uses the `$vectorSearch` aggregation stage against our `embedded_movies` collection, using the index we set up earlier.
```java
@Repository
public class MovieRepositoryImpl implements MovieRepository {
private final MongoDatabase mongoDatabase;
public MovieRepositoryImpl(MongoDatabase mongoDatabase) {
this.mongoDatabase = mongoDatabase;
}
private MongoCollection<Movie> getMovieCollection() {
return mongoDatabase.getCollection("embedded_movies", Movie.class);
}
@Override
public Flux<Movie> findMoviesByVector(List<Double> embedding) {
String indexName = "PlotVectorSearch";
int numCandidates = 100;
int limit = 5;
List<Bson> pipeline = asList(
vectorSearch(
fieldPath("plot_embedding"),
embedding,
indexName,
numCandidates,
limit));
return Flux.from(getMovieCollection().aggregate(pipeline, Movie.class));
}
}
```
For the business logic of our application, we need to create a service class. Create a class called `MovieService.java` in our `service` package.
```java
@Service
public class MovieService {
private final MovieRepository movieRepository;
private final OpenAIService embedder;
@Autowired
public MovieService(MovieRepository movieRepository, OpenAIService embedder) {
this.movieRepository = movieRepository;
this.embedder = embedder;
}
public Mono<List<Movie>> getMoviesSemanticSearch(String plotDescription) {
return embedder.createEmbedding(plotDescription)
.flatMapMany(movieRepository::findMoviesByVector)
.collectList();
}
}
```
The `getMoviesSemanticSearch` method takes in the user's natural-language plot description, embeds it using the OpenAI API, runs the `$vectorSearch` aggregation stage against our `embedded_movies` collection using the index we set up earlier, and returns the top five most similar results.
This returns a `Mono` wrapping our list of `Movie` objects. All that's left now is to actually pass in some data and call our function.
We’ve got the logic in our application. Now, let’s make it an API! First, we need to set up our controller. This will allow us to take in the user input for our application. Let’s set up an endpoint to take in the user’s plot description and return our semantic search results. Create a `com.example.mdbvectorsearch.controller` package and add the class `MovieController.java`.
```java
@RestController
public class MovieController {
private final MovieService movieService;
@Autowired
public MovieController(MovieService movieService) {
this.movieService = movieService;
}
@GetMapping("/movies/semantic-search")
public Mono<List<Movie>> performSemanticSearch(@RequestParam("plotDescription") String plotDescription) {
return movieService.getMoviesSemanticSearch(plotDescription);
}
}
```
We define an endpoint `/movies/semantic-search` that handles get requests, captures the `plotDescription` as a query parameter, and delegates the search operation to the `MovieService`.
You can use your favorite tool to test the API endpoints but I'm just going to send a cURL command.
```console
curl -X GET "http://localhost:8080/movies/semantic-search?plotDescription=A%20cop%20from%20china%20and%20cop%20from%20america%20save%20kidnapped%20girl"
```
>Note: We use `%20` to indicate spaces in our URL.
Here we call our API with the query, `"A cop from China and a cop from America save a kidnapped girl"`. There's no title in there but I think it's a fairly good description of a particular action/comedy movie starring Jackie Chan and Chris Tucker. Here's a slightly abbreviated version of my output. Let's check our results!
```markdown
Movie title: Rush Hour
Plot: Two cops team up to get back a kidnapped daughter.
Movie title: Police Story 3: Supercop
Plot: A Hong Kong detective teams up with his female Red Chinese counterpart to stop a Chinese drug czar.
Movie title: Fuk sing go jiu
Plot: Two Hong-Kong cops are sent to Tokyo to catch an ex-cop who stole a large amount of money in diamonds. After one is captured by the Ninja-gang protecting the rogue cop, the other one gets ...
Movie title: Motorway
Plot: A rookie cop takes on a veteran escape driver in a death defying final showdown on the streets of Hong Kong.
Movie title: The Corruptor
Plot: With the aid from a NYC policeman, a top immigrant cop tries to stop drug-trafficking and corruption by immigrant Chinese Triads, but things complicate when the Triads try to bribe the policeman.
```
We found *Rush Hour* to be our top match. Just what I had in mind! If its premise resonates with you, there are a few other films you might enjoy.
You can test this yourself by changing the `plotDescription` we have in the cURL command.
## Conclusion
This tutorial walked through the comprehensive steps of creating a semantic search application using MongoDB Atlas, OpenAI, and Spring Boot.
Semantic search offers a plethora of applications, ranging from sophisticated product queries on e-commerce sites to tailored movie recommendations. This guide is designed to equip you with the essentials, paving the way for your upcoming project.
Thinking about integrating vector search into your next project? Check out this article — How to Model Your Documents for Vector Search — to learn how to design your documents for vector search.
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltda8a1bd484272d2c/656d98d6d28c5a166c3e1879/image2.png
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt9570de3c5dcf3c0f/656d98d6ec7994571696ad1d/image6.png
[3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt531721a1672757f9/656d98d6d595490c07b6840b/image4.png
[4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt63feb14bcd48bc33/656d98d65af539247a5a12e5/image3.png
[5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt29c9e89933337056/656d98d6d28c5a4acb3e1875/image1.png
[6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltfcb95ac6cfc4ef2b/656d98d68d1092ce5f56dd73/image5.png | md | {
"tags": [
"Atlas",
"Java"
],
"pageDescription": "",
"contentType": "Tutorial"
} | Unlocking Semantic Search: Building a Java-Powered Movie Search Engine with Atlas Vector Search and Spring Boot | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/getting-started-mongodb-atlas-azure-functions-nodejs | created | # Getting Started with MongoDB Atlas and Azure Functions using Node.js
*This article was originally published on Microsoft's Tech Community.*
So you're building serverless applications with Microsoft Azure Functions, but you need to persist data to a database. What do you do about controlling the number of concurrent connections to your database from the function? What happens if the function currently connected to your database shuts down or a new instance comes online to scale with demand?
The concept of serverless in general, whether that be through a function or database, is great because it is designed for the modern application. Applications that scale on-demand reduce the maintenance overhead and applications that are pay as you go reduce unnecessary costs.
In this tutorial, we’re going to see just how easy it is to interact with MongoDB Atlas using Azure functions. If you’re not familiar with MongoDB, it offers a flexible document model that can be used to model your data for a variety of use cases and is easily integrated into most application development stacks. On top of the document model, MongoDB Atlas makes it just as easy to scale your database to meet demand as it does your Azure Function.
The language focus of this tutorial will be Node.js and as a result we will be using the MongoDB Node.js driver, but the same concepts can be carried between Azure Function runtimes.
## Prerequisites
You will need to have a few of the prerequisites met prior to starting the tutorial:
* A MongoDB Atlas database deployed and configured with appropriate network rules and user rules.
* The Azure CLI installed and configured to use your Azure account.
* The Azure Functions Core Tools installed and configured.
* Node.js 14+ installed and configured to meet Azure Function requirements.
For this particular tutorial we'll be using a MongoDB Atlas serverless instance since our interactions with the database will be fairly lightweight and we want to maintain scaling flexibility at the database layer of our application, but any Atlas deployment type, including the free tier, will work just fine so we recommend you evaluate and choose the option best for your needs. It’s worth noting that you can also configure scaling flexibility for our dedicated clusters with auto-scaling which allows you to select minimum and maximum scaling thresholds for your database.
We'll also be referencing the sample data sets that MongoDB offers, so if you'd like to follow along make sure you install them from the MongoDB Atlas dashboard.
When defining your network rules for your MongoDB Atlas database, use the outbound IP addresses for the Azure data centers as defined in the Microsoft Azure documentation.
## Create an Azure Functions App with the CLI
While we're going to be using the command line, most of what we see here can be done from the web portal as well.
Assuming you have the Azure CLI installed and it is configured to use your Azure account, execute the following:
```
az group create --name <resource_group_name> --location <azure_region>
```
You'll need to choose a name for your group as well as a supported Azure region. Your choice will not impact the rest of the tutorial as long as you're consistent throughout. It’s a good idea to choose a region closest to you or your users so you get the best possible latency for your application.
With the group created, execute the following to create a storage account:
```
az storage account create --name <storage_account_name> --location <azure_region> --resource-group <resource_group_name> --sku Standard_LRS
```
The above command should use the same region and group that you defined in the previous step. This command creates a new and unique storage account to use with your function. The storage account won't be used locally, but it will be used when we deploy our function to the cloud.
With the storage account created, we need to create a new Function application. Execute the following from the CLI:
```
az functionapp create --resource-group <resource_group_name> --consumption-plan-location <azure_region> --runtime node --functions-version 4 --name <function_app_name> --storage-account <storage_account_name>
```
Assuming you were consistent and swapped out the placeholder items where necessary, you should have an Azure Function project ready to go in the cloud.
The commands used thus far can be found in the Microsoft documentation. We just changed anything .NET related to Node.js instead, but as mentioned earlier MongoDB Atlas does support a variety of runtimes including .NET and this tutorial can be referenced for other languages.
With most of the cloud configuration out of the way, we can focus on the local project where we'll be writing all of our code. This will be done with the Azure Functions Core Tools application.
Execute the following command from the CLI to create a new project:
```
func init MongoExample
```
When prompted, choose Node.js and JavaScript since that is what we'll be using for this example.
Navigate into the project and create your first Azure Function with the following command:
```
func new --name GetMovies --template "HTTP trigger"
```
The above command will create a Function titled "GetMovies" based off the "HTTP trigger" template. The goal of this function will be to retrieve several movies from our database. When the time comes, we'll add most of our code to the *GetMovies/index.js* file in the project.
There are a few more things that must be done before we begin writing code.
Our local project and cloud account are configured, but we’ve yet to link them together so that our function deploys to the correct place.
Within the project, execute the following from the CLI:
```
func azure functionapp fetch-app-settings <function_app_name>
```
Don't forget to replace the placeholder value in the above command with your actual Azure Function name. The above command will download the configuration details from Azure and place them in your local project, particularly in the project's *local.settings.json* file.
Next execute the following from the CLI:
```
func azure storage fetch-connection-string <storage_account_name>
```
The above command will add the storage details to the project's *local.settings.json* file.
For more information on these two commands, check out the Azure Functions documentation.
## Install and Configure the MongoDB Driver for Node.js within the Azure Functions Project
Because we plan to use the MongoDB Node.js driver, we will need to add the driver to our project and configure it. Neither of these things will be complicated or time consuming to do.
From the root of your local project, execute the following from the command line:
```
npm install mongodb
```
The above command will add MongoDB to our project and add it to our project's *package.json* file so that it can be added automatically when we deploy our project to the cloud.
By now you should have a "GetMovies" function if you're following along with this tutorial. Open the project's *GetMovies/index.js* file so we can configure it for MongoDB:
```
const { MongoClient } = require("mongodb");
const mongoClient = new MongoClient(process.env.MONGODB_ATLAS_URI);
module.exports = async function (context, req) {
// Function logic here ...
}
```
In the above snippet we are importing MongoDB and we are creating a new client to communicate with our cluster. We are making use of an environment variable to hold our connection information.
To find your URI, go to the MongoDB Atlas dashboard and click "Connect" for your cluster.
Bring this URI string into your project's *local.settings.json* file. Your file might look something like this:
```
{
"IsEncrypted": false,
"Values": {
// Other fields here ...
"MONGODB_ATLAS_URI": "mongodb+srv://demo:@examples.mx9pd.mongodb.net/?retryWrites=true&w=majority",
"MONGODB_ATLAS_CLUSTER": "examples",
"MONGODB_ATLAS_DATABASE": "sample_mflix",
"MONGODB_ATLAS_COLLECTION": "movies"
},
"ConnectionStrings": {}
}
```
The values in the *local.settings.json* file will be accessible as environment variables in our local project. We'll be completing additional steps later in the tutorial to make them cloud compatible.
The first phase of our installation and configuration of MongoDB Atlas is complete!
## Interact with Your Data using the Node.js Driver for MongoDB
We're going to continue in our projects *GetMovies/index.js* file, but this time we're going to focus on some basic MongoDB logic.
In the Azure Function code we should have the following as of now:
```
const { MongoClient } = require("mongodb");
const mongoClient = new MongoClient(process.env.MONGODB_ATLAS_URI);
module.exports = async function (context, req) {
// Function logic here ...
}
```
When working with a serverless function you don't have control as to whether or not your function is available immediately. In other words you don't have control as to whether the function is ready to be consumed or if it has to be created. The point of serverless is that you're using it as needed.
We have to be cautious about how we use a serverless function with a database. All databases, not specific to MongoDB, can maintain a certain number of concurrent connections before calling it quits. In a traditional application you generally establish a single connection that lives on for as long as your application does. Not the case with an Azure Function. If you establish a new connection inside your function block, you run the risk of too many connections being established if your function is popular. Instead what we're doing is we are creating the MongoDB client outside of the function and we are using that same client within our function. This allows us to only create connections if connections don't exist.
Now we can skip into the function logic:
```
module.exports = async function (context, req) {
try {
const database = await mongoClient.db(process.env.MONGODB_ATLAS_DATABASE);
const collection = database.collection(process.env.MONGODB_ATLAS_COLLECTION);
const results = await collection.find({}).limit(10).toArray();
context.res = {
"headers": {
"Content-Type": "application/json"
},
"body": results
}
} catch (error) {
context.res = {
"status": 500,
"headers": {
"Content-Type": "application/json"
},
"body": {
"message": error.toString()
}
}
}
}
```
When the function is executed, we make reference to the database and collection we plan to use. These are pulled from our *local.settings.json* file when working locally.
Next we do a `find` operation against our collection with an empty match criteria. This will return all the documents in our collection so the next thing we do is limit it to ten (10) or less results.
Any results that come back we use as a response. By default the response is plaintext, so by defining the header we can make sure the response is JSON. If at any point there was an exception, we catch it and return that instead.
Want to see what we have in action?
Execute the following command from the root of your project:
```
func start
```
When it completes, you'll likely be able to access your Azure Function at the following local endpoint: http://localhost:7071/api/GetMovies
Remember, we haven't deployed anything and we're just simulating everything locally.
If the local server starts successfully, but you cannot access your data when visiting the endpoint, double check that you have the correct network rules in MongoDB Atlas. Remember, you may have added the Azure Function network rules, but if you're testing locally, you may be forgetting your local IP in the list.
## Deploy an Azure Function with MongoDB Support to the Cloud
If everything is performing as expected when you test your function locally, then you're ready to get it deployed to the Microsoft Azure cloud.
We need to ensure our local environment variables make it to the cloud. This can be done through the web dashboard in Azure or through the command line. We're going to do everything from the command line.
From the CLI, execute the following commands, replacing the placeholder values with your own values:
```
az functionapp config appsettings set --name <function_app_name> --resource-group <resource_group_name> --settings MONGODB_ATLAS_URI=<connection_string>
az functionapp config appsettings set --name <function_app_name> --resource-group <resource_group_name> --settings MONGODB_ATLAS_DATABASE=<database_name>
az functionapp config appsettings set --name <function_app_name> --resource-group <resource_group_name> --settings MONGODB_ATLAS_COLLECTION=<collection_name>
```
The above commands were taken almost exactly from the Microsoft documentation.
With the environment variables in place, we can deploy the function using the following command from the CLI:
```
func azure functionapp publish <function_app_name>
```
It might take a few moments to deploy, but when it completes the CLI will provide you with a public URL for your functions.
Before you attempt to test them from cURL, Postman, or similar, make sure you obtain a "host key" from Azure to use in your HTTP requests.
## Conclusion
In this tutorial we saw how to connect MongoDB Atlas with Azure Functions using the MongoDB Node.js driver to build scalable serverless applications. While we didn't see it in this tutorial, there are many things you can do with the Node.js driver for MongoDB such as complex queries with an aggregation pipeline as well as basic CRUD operations.
To see more of what you can accomplish with MongoDB and Node.js, check out the MongoDB Developer Center.
With MongoDB Atlas on Microsoft Azure, developers receive access to the most comprehensive, secure, scalable, and cloud–based developer data platform in the market. Now, with the availability of Atlas on the Azure Marketplace, it’s never been easier for users to start building with Atlas while streamlining procurement and billing processes. Get started today through the Atlas on Azure Marketplace listing. | md | {
"tags": [
"Atlas",
"Azure",
"Node.js"
],
"pageDescription": "In this tutorial, we’re going to see just how easy it is to interact with MongoDB Atlas using Azure functions.",
"contentType": "Tutorial"
} | Getting Started with MongoDB Atlas and Azure Functions using Node.js | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/events/symfonylive-berlin-2024 | created | # SymfonyLive Berlin 2024
Come and meet our team at SymfonyLive Berlin!
## Sessions
Don't miss these talks by our team:
|Date| Title| Speaker|
|---|---|---|
|June 20th|From Pickles to Pie: Sweeten Your PHP Extension Installs|Andreas Braun|
## Additional Resources
Dive deeper in your MongoDB exploration with the following resources:
* Tutorial MongoDB + Symfony
* Tutorial MongoDB + Doctrine | md | {
"tags": [
"MongoDB",
"PHP"
],
"pageDescription": "Join us at Symfony Live Berlin!",
"contentType": "Event"
} | SymfonyLive Berlin 2024 | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/events/springio-2024 | created | # Spring I/O 2024
Come and meet our team at Spring I/O!
## Sessions
Don't miss these talks by our team:
|Date| Title| Speaker|
|---|---|---|
| May 30th | MongoDB Sprout: Where Data Meets Spring | Tim Kelly |
## Additional Resources
Dive deeper in your MongoDB exploration with the following resources:
Check out how to add Vector Search to your Java Spring Boot application in this tutorial.
Integrating Spring Boot, Reactive, Spring Data, and MongoDB can be a challenge, especially if you are just starting out. Check out this code example to get started right away!
Need to deploy an application on K8s that connects to MongoDB Atlas? This tutorial will take you through the steps you need to get started in no time. | md | {
"tags": [
"MongoDB",
"Java"
],
"pageDescription": "Join us at Spring I/O!",
"contentType": "Event"
} | Spring I/O 2024 | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/events/cppcon-2024 | created | # CppCon 2024
Come and meet our team at CppCon!
## Sessions
Don't miss these talks by our team:
|Date| Title| Speaker|
|---|---|---|
|September 21st and 22nd|Workshop: C++ Testing like a Ninja for Novice Testers|Jorge Ortiz & Rishabh Bisht|
## Additional Resources
We will publish a repository with all of the code for the workshop, so remember to visit this page again and check if it is available.
Dive deeper in your MongoDB exploration with the following resources:
- MongoDB Resources for Cpp developers | md | {
"tags": [
"MongoDB",
"C++"
],
"pageDescription": "Join us at CppCon!",
"contentType": "Event"
} | CppCon 2024 | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/events/developer-day-melbourne | created | # Developer Day Melbourne
Welcome to MongoDB Developer Day Melbourne! Below you can find all the resources you will need for the day.
## Data Modeling and Design Patterns
* Slides
* Library application
* System requirements
## MongoDB Atlas Setup: Hands-on exercises setup and troubleshooting
* Intro lab: hands-on exercises
* Data import tool
## Aggregation Pipelines Lab
* Slides
* Aggregations lab: hands-on exercises
## Search Lab
* Slides
* Search lab: hands-on exercises
## Additional resources
* Library management system code
* MongoDB data modeling book
* Data Modeling course on MongoDB University
* MongoDB for SQL Pros on MongoDB University
* Atlas Search Workshop: An in-depth workshop that uses the more advanced features of Atlas Search
## How was it?
Let us know what you liked about this day, and how we can improve (and get a cool 🧦 gift 🧦) by filling out this survey.
## Join the Community
Stay connected, and join our community:
* Join the Melbourne MongoDB User Group!
* Sign up for the MongoDB Community Forums. | md | {
"tags": [
"Atlas"
],
"pageDescription": "Join us for a full day of hands-on sessions about MongoDB. An event for developer by developers.",
"contentType": "Event"
} | Developer Day Melbourne | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/events/developer-day-sydney | created | # Developer Day Sydney
Welcome to MongoDB Developer Day Sydney! Below you can find all the resources you will need for the day.
## Data Modeling and Design Patterns
* Slides
* Library application
* System requirements
## MongoDB Atlas Setup: Hands-on exercises setup and troubleshooting
* Intro lab: hands-on exercises
* Data import tool
## Aggregation Pipelines Lab
* Slides
* Aggregations lab: hands-on exercises
## Search Lab
* Slides
* Search lab: hands-on exercises
## Additional resources
* Library management system code
* MongoDB data modeling book
* Data Modeling course on MongoDB University
* MongoDB for SQL Pros on MongoDB University
* Atlas Search Workshop: An in-depth workshop that uses the more advanced features of Atlas Search
## How was it?
Let us know what you liked about this day, and how we can improve (and get a cool 🧦 gift 🧦) by filling out this survey.
## Join the Community
Stay connected, and join our community:
* Join the Sydney MongoDB User Group!
* Sign up for the MongoDB Community Forums. | md | {
"tags": [
"Atlas"
],
"pageDescription": "Join us for a full day of hands-on sessions about MongoDB. An event for developer by developers.",
"contentType": "Event"
} | Developer Day Sydney | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/events/developer-day-auckland | created | # Developer Day Auckland
Welcome to MongoDB Developer Day Auckland! Below you can find all the resources you will need for the day.
## Data Modeling and Design Patterns
* Slides
* Library application
* System requirements
## MongoDB Atlas Setup: Hands-on exercises setup and troubleshooting
* Intro lab: hands-on exercises
* Data import tool
## Aggregation Pipelines Lab
* Slides
* Aggregations lab: hands-on exercises
## Search Lab
* Slides
* Search lab: hands-on exercises
## Additional resources
* Library management system code
* MongoDB data modeling book
* Data Modeling course on MongoDB University
* MongoDB for SQL Pros on MongoDB University
* Atlas Search Workshop: An in-depth workshop that uses the more advanced features of Atlas Search
## How was it?
Let us know what you liked about this day, and how we can improve by filling out this survey.
## Join the Community
Stay connected, and join our community:
* Join the Auckland MongoDB User Group!
* Sign up for the MongoDB Community Forums. | md | {
"tags": [
"Atlas"
],
"pageDescription": "Join us for a full day of hands-on sessions about MongoDB. An event for developer by developers.",
"contentType": "Event"
} | Developer Day Auckland | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/products/atlas/deprecating-mongodb-atlas-graphql-hosting-services | created | # Deprecating MongoDB Atlas GraphQL and Hosting Services
As part of MongoDB’s ongoing commitment to innovation and providing the best possible developer experience, we have some important updates about our Atlas GraphQL and Atlas Hosting services. Our goal is always to offer developers the best services and tools on Atlas, whether built by MongoDB or delivered by our trusted partners so that builders can focus on providing the best application possible. In line with this vision, we strategically decided to deprecate the Atlas GraphQL API and Atlas Hosting services.
This blog post outlines what this means for users, the timeline for this transition, and how we plan to support you through this change.
**What’s Changing?**
New users cannot create apps with GraphQL / hosting enabled. Existing customers will have time to move off of the service and find an alternative solution by **March 12, 2025**.
**Why Are We Making This Change?**
The decision to streamline our services reflects our commitment to natively offering best-in-class services while collaborating with leading partners to provide the most comprehensive developer data platform.
**How We’re Supporting You**
We recognize that challenges can come with change, so our team will continue to provide comprehensive assistance and guidance to ensure a smooth migration process. As part of our commitment to providing developers the best services and tools, we have identified several MongoDB partners who offer best-in-class solutions with similar functionality to our GraphQL and hosting services.
We’ve collaborated with some of these partners to create official step by step migration guides in order to provide a seamless transition to our customers. We encourage you to explore these options here.
- **Migration Assistance**: Learn more about the MongoDB partner integrations that make it easy to connect to your Atlas database:
- **GraphQL Partners**: Apollo, Hasura, WunderGraph, Grafbase, AWS AppSync
- **Hosting Partners**: Vercel, Netlify, Koyeb, Northflank, DigitalOcean
- **Support and Guidance**: Our support team is available to assist you with any questions or concerns. We encourage you to reach out via the MongoDB Support Portal or contact your Account Executive for personalized assistance.
**Looking Forward**
We’re here to support you every step of the way as you explore and migrate to alternative solutions. Our team is working diligently to ensure this transition is as seamless as possible for all affected users. We’re also excited about what the future holds for the MongoDB Atlas, the industry’s leading developer data platform, and the new features we’re developing to enhance your experience. | md | {
"tags": [
"Atlas",
"GraphQL"
],
"pageDescription": "Guidance and resources on how to migrate from MongoDB Atlas GraphQL and Hosting services.",
"contentType": "News & Announcements"
} | Deprecating MongoDB Atlas GraphQL and Hosting Services | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/languages/go/http-server-persist-data | created | # HTTP Servers Persisting Data in MongoDB
# HTTP Servers Persisting Data in MongoDB
In the previous article and the corresponding video, we wrote a basic HTTP server from scratch. We used Go 1.22's new capabilities to deal with different HTTP verbs and we deserialized data that was sent from an HTTP client.
Exchanging data is worthless if you forget it right away. We are going to persist that data using MongoDB. You will need a MongoDB Atlas cluster. The free one is more than enough. If you don't have an account, you can find guidance on how this is done in this workshop or on YouTube. You don't have to do the whole lab, just the parts "Create an Account" and "Create a Cluster" in the "MongoDB Atlas" section. Call your cluster "NoteKeeper" and make it a **FREE** cluster. Create a username and password, which you will use in a moment. Verify that your server's IP address is allowed access. If you use the codespace, include the address 0.0.0.0 to indicate that access is allowed from any IP.
## Connect to MongoDB Atlas from Go
1. So far, we have used packages from the standard library, but now we would like to use the MongoDB driver to connect to our Atlas cluster. The following command adds the MongoDB Go driver to the dependencies of our project, including entries in `go.mod` for it and all of its dependencies. It also keeps hashes of the dependencies in `go.sum` to ensure integrity and downloads all the code so that it can be included in the program.
```shell
go get go.mongodb.org/mongo-driver/mongo
```
2. MongoDB uses BSON to serialize and store the data. It is more efficient and supports more types than JSON (we are looking at you, dates, but also BinData). And we can use the same technique that we used for deserializing JSON for converting to BSON, but in this case, the conversion will be done by the driver. We are going to declare a global variable to hold the connection to MongoDB Atlas and use it from the handlers. That is **not** a best practice. Instead, we could define a type that holds the client and any other dependencies and provides methods –which will have access to the dependencies– that can be used as HTTP handlers.
```go
var mdbClient *mongo.Client
```
3. If your editor has any issues importing the MongoDB driver packages, you need to have these two in your import block.
```go
"go.mongodb.org/mongo-driver/mongo"
"go.mongodb.org/mongo-driver/mongo/options"
```
4. In the `main` function, we initialize the connection to Atlas. Notice that `mongo.Connect()` returns two values. For the first one, we use the variable that has already been defined at the global scope. The second one, `err`, isn't defined in the current scope, so we could potentially use the short variable declaration here. However, if we did, it would ignore the global variable that we created for the client (`mdbClient`) and define a local one only for this scope. So let's use a regular assignment, which means `err` must be declared first so we can assign a value to it.
```go
var err error
mdbClient, err = mongo.Connect(ARG1, ARG2)
```
5. The first argument of that `Connect()` call is a context that allows sharing data and cancellation requests between the main function and the client. Let's create one that is meant to do background work. You could add a cancellation timer to this context, among other things.
```go
ctxBg := context.Background()
```
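If you wanted the work using this context to be abandoned after some time, you could use a context with a timeout instead. This variation is not part of the tutorial's code; it requires importing the `time` package, and the 10-second value is arbitrary:
```go
// Illustrative variation: this context is canceled automatically after 10 seconds.
ctxTimeout, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
// ctxTimeout could then be passed to mongo.Connect() instead of ctxBg.
```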
6. The second argument is a struct that contains the options used to create the connection. The bare minimum is to have a URI to our Atlas MongoDB cluster. We get that URI from the cluster page by clicking on "Get Connection String." We create a constant with that connection string. **Don't** use this one. It won't work. Get it from **your** cluster. Having the connection URI with the user and password as a constant isn't a best practice either. You should pass this data using an environment variable instead.
```go
const connStr string = "mongodb+srv://yourusername:yourpassword@notekeeper.xxxxxx.mongodb.net/?retryWrites=true&w=majority&appName=NoteKeeper"
```
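As a sketch of that better practice, the connection string could be read from an environment variable at startup instead of being hard-coded. This is not part of the tutorial's code; it assumes the `os` package is imported, and the variable name `MONGODB_URI` is just an example:
```go
// Illustrative alternative: read the connection string from the environment.
connStr := os.Getenv("MONGODB_URI") // the variable name is an assumption
if connStr == "" {
    log.Fatal("MONGODB_URI environment variable is not set")
}
```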
7. We can now use that constant to create the second argument in place.
```go
var err error
mdbClient, err = mongo.Connect(ctxBg, options.Client().ApplyURI(connStr))
```
8. If we cannot connect to Atlas, there is no point in continuing, so we log the error and exit. `log.Fatal()` takes care of both things.
```go
if err != nil {
    log.Fatal(err)
}
```
9. If the connection has been successful, the first thing that we want to do is to ensure that it will be closed when we leave this function. We use `defer` for that. Everything that we defer will be executed when the function exits its scope, even if things go badly and a panic takes place. Since `defer` takes a function call, we enclose the work in an anonymous function and call it immediately. This way, we can use the return value of the `Disconnect()` method and act accordingly.
```go
defer func() {
    if err = mdbClient.Disconnect(ctxBg); err != nil {
        panic(err)
    }
}()
```
## Persist data in MongoDB Atlas from Go
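A minimal, hedged sketch of the idea is shown below: decode the incoming JSON, then insert it as a document using the global client initialized in `main`. The `Note` struct and the `NoteKeeper`/`notes` database and collection names are assumptions for illustration, not necessarily what the series uses, and the snippet assumes `encoding/json` and `net/http` are imported:
```go
// Illustrative sketch: decode the request body and insert it as a document.
type Note struct {
    Title string `json:"title" bson:"title"`
    Text  string `json:"text" bson:"text"`
}

func createNote(w http.ResponseWriter, r *http.Request) {
    var note Note
    if err := json.NewDecoder(r.Body).Decode(&note); err != nil {
        http.Error(w, err.Error(), http.StatusBadRequest)
        return
    }
    // Database and collection names below are assumptions.
    coll := mdbClient.Database("NoteKeeper").Collection("notes")
    result, err := coll.InsertOne(r.Context(), note)
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    w.Header().Set("Content-Type", "application/json")
    json.NewEncoder(w).Encode(result.InsertedID)
}
```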
The accompanying code repository has all the code for this series and the next ones so you can follow along.
Stay curious. Hack your code. See you next time!
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt10326f71fc7c76c8/6630dc2086ffea48da8e43cb/persistence.jpg | md | {
"tags": [
"Go"
],
"pageDescription": "This tutorial explains how to persist data obtained from an HTTP endpoint into Atlas MongoDB.",
"contentType": "Tutorial"
} | HTTP Servers Persisting Data in MongoDB | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/events/developer-day-singapore | created | # Developer Day Singapore
Welcome to MongoDB Developer Day! Below you can find all the resources you will need for the day.
## Data Modeling and Design Patterns
* Slides
* Library application
* System requirements
### Hands-on exercises setup and troubleshooting
* Self-paced content -- Atlas cluster creation and loading sample data.
* Data import tool
* If CodeSpaces doesn't work, try downloading the code.
* Import tool not working? Try downloading the dataset, and ask an instructor for help on importing the data.
### Additional resources
* Library management system code
* MongoDB data modeling book
* Data Modeling course on MongoDB University
* MongoDB for SQL Pros on MongoDB University
## Aggregation Pipelines Lab
* Aggregations hands-on exercises
* Slides
## Search Lab
* Slides
* Search lab hands-on content
### Dive deeper
Do you want to learn more about Atlas Search? Check these out.
* Atlas Search Workshop: An in-depth workshop that uses the more advanced features of Atlas Search
| md | {
"tags": [
"Atlas"
],
"pageDescription": "Join us for a full day of hands-on sessions about MongoDB. An event for developer by developers.",
"contentType": "Event"
} | Developer Day Singapore | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/events/mongodb-day-kementerian-kesehatan | created | # MongoDB Day with Kementerian Kesehatan
Welcome to MongoDB Developer Day! Below you can find all the resources you will need for the day.
## Data Modeling and Design Patterns
* Slides
* Library application
* System requirements
### Hands-on exercises setup and troubleshooting
* Self-paced content -- Atlas cluster creation and loading sample data.
* Data import tool
* If CodeSpaces doesn't work, try downloading the code.
* Import tool not working? Try downloading the dataset, and ask an instructor for help on importing the data.
### Additional resources
* Library management system code
* MongoDB data modeling book
* Data Modeling course on MongoDB University
* MongoDB for SQL Pros on MongoDB University
## Aggregation Pipelines Lab
* Aggregations hands-on exercises
* Slides
## Search Lab
* Slides
* Search lab hands-on content
### Dive deeper
Do you want to learn more about Atlas Search? Check these out.
* Atlas Search Workshop: An in-depth workshop that uses the more advanced features of Atlas Search
| md | {
"tags": [
"Atlas"
],
"pageDescription": "Join us for a full day of hands-on sessions about MongoDB. An event for developer by developers.",
"contentType": "Event"
} | MongoDB Day with Kementerian Kesehatan | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/events/developer-day-jakarta | created | # Developer Day Jakarta
Welcome to MongoDB Developer Day! Below you can find all the resources you will need for the day.
## Data Modeling and Design Patterns
* Slides
* Library application
* System requirements
### Hands-on exercises setup and troubleshooting
* Self-paced content -- Atlas cluster creation and loading sample data.
* Data import tool
* If CodeSpaces doesn't work, try downloading the code.
* Import tool not working? Try downloading the dataset, and ask an instructor for help on importing the data.
### Additional resources
* Library management system code
* MongoDB data modeling book
* Data Modeling course on MongoDB University
* MongoDB for SQL Pros on MongoDB University
## Aggregation Pipelines Lab
* Aggregations hands-on exercises
* Slides
## Search Lab
* Slides
* Search lab hands-on content
### Dive deeper
Do you want to learn more about Atlas Search? Check these out.
* Atlas Search Workshop: An in-depth workshop that uses the more advanced features of Atlas Search
| md | {
"tags": [
"Atlas"
],
"pageDescription": "Join us for a full day of hands-on sessions about MongoDB. An event for developer by developers.",
"contentType": "Event"
} | Developer Day Jakarta | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/events/developer-day-kl | created | # Developer Day Kuala Lumpur
Welcome to MongoDB Developer Day! Below you can find all the resources you will need for the day.
## Data Modeling and Design Patterns
* Slides
* Library application
* System requirements
### Hands-on exercises setup and troubleshooting
* Self-paced content -- Atlas cluster creation and loading sample data.
* Data import tool
* If CodeSpaces doesn't work, try downloading the code.
* Import tool not working? Try downloading the dataset, and ask an instructor for help on importing the data.
### Additional resources
* Library management system code
* MongoDB data modeling book
* Data Modeling course on MongoDB University
* MongoDB for SQL Pros on MongoDB University
## Aggregation Pipelines Lab
* Aggregations hands-on exercises
* Slides
## Search Lab
* Slides
* Search lab hands-on content
### Dive deeper
Do you want to learn more about Atlas Search? Check these out.
* Atlas Search Workshop: An in-depth workshop that uses the more advanced features of Atlas Search
| md | {
"tags": [
"Atlas"
],
"pageDescription": "Join us for a full day of hands-on sessions about MongoDB. An event for developer by developers.",
"contentType": "Event"
} | Developer Day Kuala Lumpur | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/languages/java/migration-411-50 | created | # Java Driver: Migrating From 4.11 to 5.0
## Introduction
The MongoDB Java driver 5.0.0 is now available!
While this version doesn't include many new features, it's removing a lot of deprecated methods and is preparing for the
future.
## How to upgrade
- Ensure your server version is compatible with Java Driver 5.0.
- Compile against the 4.11 version of the driver with deprecation warnings enabled.
- Remove deprecated classes and methods.
### Maven
```xml
<dependency>
    <groupId>org.mongodb</groupId>
    <artifactId>mongodb-driver-sync</artifactId>
    <version>5.0.0</version>
</dependency>
```
### Gradle
```
implementation group: 'org.mongodb', name: 'mongodb-driver-sync', version: '5.0.0'
```
## New features
You can
read the full list of new features
but here is a summary.
### getElapsedTime()
The behavior of the method `getElapsedTime()` was modified in the following classes:
```text
com.mongodb.event.ConnectionReadyEvent
com.mongodb.event.ConnectionCheckedOutFailedEvent
com.mongodb.event.ConnectionCheckedOutEvent
```
If you are using one of these methods, make sure to recompile and
read the details.
### authorizedCollections option
5.0.0 adds support for the `authorizedCollections` option of the `listCollections` command.
### Scala
The `org.mongodb.scala.Observable.completeWithUnit()` method is now marked deprecated.
## Breaking changes
One of the best ways to identify if your code will require any changes following the upgrade to Java Driver 5.0 is to compile against 4.11.0 with deprecation warnings enabled and remove the use of any deprecated methods and classes.
You can read the full list of breaking changes but here is a summary.
### StreamFactoryFactory and NettyStreamFactoryFactory
The following methods and classes have been removed in 5.0.0:
- `streamFactoryFactory()` method from `MongoClientSettings.Builder`
- `getStreamFactoryFactory()` method from `MongoClientSettings`
- `NettyStreamFactoryFactory` class
- `NettyStreamFactory` class
- `AsynchronousSocketChannelStreamFactory` class
- `AsynchronousSocketChannelStreamFactoryFactory` class
- `BufferProvider` class
- `SocketStreamFactory` class
- `Stream` class
- `StreamFactory` class
- `StreamFactoryFactory` class
- `TlsChannelStreamFactoryFactory` class
If you configure Netty using the `streamFactoryFactory()`, your code is probably like this:
```java
import com.mongodb.connection.netty.NettyStreamFactoryFactory;
// ...
MongoClientSettings settings = MongoClientSettings.builder()
        .streamFactoryFactory(NettyStreamFactoryFactory.builder().build())
        .build();
```
Now, you should use the `TransportSettings.nettyBuilder()`:
```java
import com.mongodb.connection.TransportSettings;
// ...
MongoClientSettings settings = MongoClientSettings.builder()
        .transportSettings(TransportSettings.nettyBuilder().build())
        .build();
```
### ConnectionId
In 4.11, the class `ConnectionId` was using integers.
```java
@Immutable
public final class ConnectionId {
    private static final AtomicInteger INCREMENTING_ID = new AtomicInteger();
    private final ServerId serverId;
    private final int localValue;
    private final Integer serverValue;
    private final String stringValue;
    // ...
}
```
In 5.0, the same fields now use longs:
```java
@Immutable
public final class ConnectionId {
    private static final AtomicLong INCREMENTING_ID = new AtomicLong();
    private final ServerId serverId;
    private final long localValue;
    @Nullable
    private final Long serverValue;
    private final String stringValue;
    // ...
}
```
While this should have a very minor impact on your code, it's breaking binary and source compatibility. Make sure to
rebuild your binary and you should be good to go.
### Package update
Three record annotations moved from:
```text
org.bson.codecs.record.annotations.BsonId
org.bson.codecs.record.annotations.BsonProperty
org.bson.codecs.record.annotations.BsonRepresentation
```
To:
```text
org.bson.codecs.pojo.annotations.BsonId
org.bson.codecs.pojo.annotations.BsonProperty
org.bson.codecs.pojo.annotations.BsonRepresentation
```
So if you are using these annotations, please make sure to update the imports and rebuild.
### SocketSettings is now using long
The first parameters of the two following builder methods in `SocketSettings` are now using a long instead of an
integer.
```java
public Builder connectTimeout(final long connectTimeout, final TimeUnit timeUnit) {/*...*/}
public Builder readTimeout(final long readTimeout, final TimeUnit timeUnit){/*...*/}
```
This breaks binary compatibility but shouldn't require a code change in your code.
### Filters.eqFull()
`Filters.eqFull()` was only released in `Beta` for vector search. It's now deprecated. Use `Filters.eq()` instead when
instantiating a `VectorSearchOptions`.
```java
VectorSearchOptions opts = vectorSearchOptions().filter(eq("x", 8));
```
### ClusterConnectionMode
The way the driver is computing the `ClusterConnectionMode` is now more consistent by using a specified replica set
name, regardless of how it's configured.
In the following example, both the 4.11 and 5.0.0 drivers were returning the same
thing: `ClusterConnectionMode.MULTIPLE`.
```java
ClusterSettings.builder()
        .applyConnectionString(new ConnectionString("mongodb://127.0.0.1:27017/?replicaSet=replset"))
        .build()
        .getMode();
```
But in this example, the 4.11 driver was returning `ClusterConnectionMode.SINGLE` instead
of `ClusterConnectionMode.MULTIPLE`.
```java
ClusterSettings.builder()
        .hosts(Collections.singletonList(new ServerAddress("127.0.0.1", 27017)))
        .requiredReplicaSetName("replset")
        .build()
        .getMode();
```
### BsonDecimal128
The behaviour of `BsonDecimal128` is now more consistent with the behaviour of `Decimal128`.
```java
BsonDecimal128 bsonValue = new BsonDecimal128(Decimal128.parse("1.0"));
bsonValue.isNumber(); // returns true
bsonValue.asNumber(); // returns the BsonNumber
```
## Conclusion
With the release of MongoDB Java Driver 5.0.0, it's evident that the focus has been on refining existing functionalities, removing deprecated methods, and ensuring compatibility for future enhancements. While the changes may necessitate some adjustments in your codebase, they pave the way for a more robust and efficient development experience.
Ready to upgrade? Dive into the latest version of the MongoDB Java drivers and start leveraging its enhanced capabilities today!
To finish with, don't forget to enable virtual threads in your Spring Boot 3.2.0+ projects! You just need to add this in your `application.properties` file:
```properties
spring.threads.virtual.enabled=true
```
Got questions or itching to share your success? Head over to the MongoDB Community Forum – we're all ears and ready to help!
| md | {
"tags": [
"Java",
"MongoDB"
],
"pageDescription": "Learn how to migrate smoothly your MongoDB Java project from 4.11 to 5.0.",
"contentType": "Article"
} | Java Driver: Migrating From 4.11 to 5.0 | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/languages/javascript/lambda-nodejs | created | # Using the Node.js MongoDB Driver with AWS Lambda
JavaScript has come a long way since its modest debut in the 1990s. It has been the most popular language, according to the Stack Overflow Developer Survey, for 10 years in a row now. So it's no surprise that it has emerged as the most popular language for writing serverless functions.
Writing a serverless function using JavaScript is straightforward and similar to writing a route handler in Express.js. The main difference is how the server will handle the code. As a developer, you only need to focus on the handler itself, and the cloud provider will maintain all the infrastructure required to run this function. This is why serverless is getting more and more traction. There is almost no overhead that comes with server management; you simply write your code and deploy it to the cloud provider of your choice.
This article will show you how to write an AWS Lambda serverless function that connects to MongoDB Atlas to query some data and how to avoid common pitfalls that would cause poor performance.
## Prerequisites
For this article, you will need basic JavaScript knowledge. You will also need:
- A MongoDB Atlas database loaded with sample data (a free tier is good).
Already have an AWS account? Atlas supports paying for usage via the AWS Marketplace (AWS MP) without any upfront commitment — simply sign up for MongoDB Atlas via AWS Marketplace.
- An AWS account.
## Creating your first Lambda function
To get started, let's create a basic lambda function. This function will be used later on to connect to our MongoDB instance.
In AWS, go to the Lambda service. From there, you can click on the "Create Function" button. Fill in the form with a name for your function, and open the advanced settings.
Because you'll want to access this function from a browser, you will need to change these settings:
- Check the "Enable function URL" option.
- Under "Auth Type," pick "NONE."
- Check the "Configure cross-origin resource sharing (CORS)" box.
Now click "Create Function" and you're ready to go. You will then be presented with a screen similar to the following.
You can see a window with some code. This function will return a 200 (OK) status code, and the body of the request will be "Hello from Lambda!".
You can test this function by going to the "Configuration" above the code editor. Then choose "Function URL" from the left navigation menu. You will then see a link labeled "Function URL." Clicking this link will open a new tab with the expected message.
If you change the code to return a different body, click "Deploy" at the top, and refresh that second tab, you will see your new message.
You've just created your first HTTPS endpoint that will serve the response generated from your function.
## Common pitfalls with the Node.js driver for MongoDB
While it can be trivial to write simple functions, there are some considerations that you'll want to keep in mind when dealing with AWS Lambda and MongoDB.
### Storing environment variables
You can write your functions directly in the code editor provided by AWS Lambda, but chances are you will want to store your code in a repository to share with your team. When you push your code, you will want to be careful not to upload some of your secret keys. With your database, for example, you wouldn't want to push your connection string accidentally. You could use an environment variable for this.
From the AWS Lambda screen, go into the "Configuration" tab at the top, and pick "Environment Variables" from the left navigation bar. Click "Edit," and you will be presented with the option to add a new environment variable. Fill in the form with the following values:
- Key: MONGODB_CONNECTION_STRING
- Value: This is a connection string
Now go back to the code editor, and use the `process.env` to return the newly created environment variable as the body of your request.
```javascript
export const handler = async(event) => {
    const response = {
        statusCode: 200,
        body: process.env.MONGODB_CONNECTION_STRING,
    };
    return response;
};
```
If you refresh the tab you opened earlier, you will see the value of that environment variable. In the example below, you will change the value of that environment variable to connect to your MongoDB Atlas database.
### Connection pool
When you initialize a `MongoClient` with the Node.js driver, it will create a pool of connections that can be used by your application. The MongoClient ensures that those connections are closed after a while so you don't reach your limit.
A common mistake when using MongoDB Atlas with AWS Lambda is creating a new connection pool every time your function gets a request. A poorly written function can lead to new connections being created every time, as displayed in the following diagram from the Atlas monitoring screen.
That sudden peak in connections comes from hitting a Lambda function every second for approximately two minutes.
The secret to fixing this is to move the creation of the MongoDB client outside the handler. This will be shown in the example below. Once the code has been fixed, you can see a significant improvement in the number of simultaneous connections.
Now that you know the pitfalls to avoid, it's time to create a function that connects to MongoDB Atlas.
## Using the MongoDB Node.js driver on AWS Lambda
For this example, you can use the same function you created earlier. Go to the "Environment Variables" settings, and put the connection string for your MongoDB database as the value for the "MONGODB_CONNECTION_STRING" environment variable. You can find your connection string in the Atlas UI.
Because you'll need additional packages to run this function, you won't be able to use the code editor anymore.
Create a new folder on your machine, initialize a new Node.js project using `npm`, and install the `mongodb` package.
```bash
npm init -y
npm install mongodb
```
Create a new `index.mjs` file in this directory, and paste in the following code.
```javascript
import { MongoClient } from "mongodb";
const client = new MongoClient(process.env.MONGODB_CONNECTION_STRING);
export const handler = async(event) => {
    const db = client.db("sample_mflix");
    const collection = db.collection("movies");
    const body = await collection.find().limit(10).toArray();
    const response = {
        statusCode: 200,
        body
    };
    return response;
};
```
This code will start by creating a new MongoClient. Note how the client is declared *outside* the handler function. This is how you'll avoid problems with your connection pool. Also, notice how it uses the connection string provided in the Lambda configuration rather than a hard-coded value.
Inside the handler, the code connects to the `sample_mflix` database and the `movies` collection. It then finds the first 10 results and converts them into an array.
The 10 results are then returned as the body of the Lambda function.
Your function is now ready to be deployed. This time, you will need to zip the content of this folder. To do so, you can use your favorite GUI or the following command if you have the `zip` utility installed.
```bash
zip -r output.zip .
```
Go back to the Lambda code editor, and look for the "Upload from" button in the upper right corner of the editor. Choose your newly created `output.zip` file, and click "Save."
Now go back to the tab with the result of the function, and hit refresh. You should see the first 10 documents from the `movies` collection.
## Summary
Using AWS Lambda is a great way to write small functions that can run efficiently without worrying about configuring servers. It's also a very cost-effective way to host your application since you only pay per usage. You can find more details on how to build Lambda functions to connect to your MongoDB database in the documentation.
If you want a fully serverless solution, you can also run MongoDB as a serverless service. Like the Lambda functions, you will only pay for a serverless database instance based on usage.
If you want to learn more about how to use MongoDB, check out our Community Forums. | md | {
"tags": [
"JavaScript",
"Atlas",
"AWS"
],
"pageDescription": "In this article, you will learn how to use the MongoDB Node.js driver in AWS Lambda functions.",
"contentType": "Tutorial"
} | Using the Node.js MongoDB Driver with AWS Lambda | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/events/react-summit | created | # React Summit
Come and meet our team at React Summit!
## Sessions
Don't miss these talks by our team:
|Date| Title| Speaker|
|---|---|---|
## Additional Resources
| md | {
"tags": [
"MongoDB",
"JavaScript"
],
"pageDescription": "Join us at React Summit!",
"contentType": "Event"
} | React Summit | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/insurance-data-model-relational-migrator-refactor | created | # Modernize your insurance data models with MongoDB Relational Migrator
In the 70s and 80s, there were few commercial off-the-shelf solutions available for many core insurance functions, so insurers had to build their own applications. Such applications are often host-based, meaning that they are mainframe technologies. These legacy platforms include software languages such as COBOL and CICS. Many insurers are still struggling to replace these legacy technologies due to a confluence of variables such as a lack of developers with programming skills in these older technologies and complicated insurance products. This results in high maintenance costs and difficulty in making changes. In brief, legacy systems are a barrier to progress in the insurance industry.
Whether you’re looking to maintain and improve existing applications or push new products and features to market, the data trapped inside those systems is a drag on innovation.
This is particularly true when we think about the data models that sit at the core of application systems (e.g., underwriting), defining entities and relationships between them.
In this tutorial, we will demonstrate:
- Why the document model simplifies the data model of a standard insurance system.
- How MongoDB's Relational Migrator effortlessly transforms an unwieldy 21-table schema into a lean five-collection MongoDB model.
This will ultimately prove that with MongoDB, insurers will launch new products faster, swiftly adapt to regulatory changes, and enhance customer experiences.
To do this, we will focus on the Object Management Group’s Party Role model and how the model can be ported from a relational structure to MongoDB’s document model.
In particular, we will describe the refactoring of Party in the context of Policy and Claim & Litigation. For each of them, a short description, a simplified Hackolade model (Entity Relationship Diagrams - ERD), and the document refactoring using Relational Migrator are provided.
Relational Migrator is a tool that allows you to:
- Design an effective MongoDB schema, derived from an existing relational schema.
- Migrate data from Oracle, SQL Server, MySQL, PostgreSQL, or Sybase ASE to MongoDB, while transforming to the target schema.
- Generate code artifacts to reduce the time required to update application code.
At the end of this tutorial, you will have learned how to use Relational Migrator to refactor the Party Role relational data model and migrate the data into MongoDB collections.
## Connect to Postgres and set up Relational Migrator
### Prerequisites
- **MongoDB Relational Migrator** (version 1.4.3 or higher): MongoDB Relational Migrator is a powerful tool to help you migrate relational workloads to MongoDB. Download and install the latest version.
- **PostgreSQL** (version 15 or higher): PostgreSQL is a relational database management system. It will serve as the source database for our migration task. Download the latest version of PostgreSQL.
- **MongoDB** (version 7.0 or higher): You will need access to a MongoDB instance with write permissions to create the new database to where we are going to migrate the data. You can install the latest version of the MongoDB Community Server or simply deploy a free MongoDB Atlas cluster in less than three minutes!
In this tutorial, we are going to use PostgreSQL as the RDBMS to hold the original tabular schema to be migrated to MongoDB. In order to follow it, you will need to have access to a PostgreSQL database server instance with permissions to create a new database and user. The instance may be in the cloud, on-prem, or in your local machine. You just need to know the URL, port, user, and password of the PostgreSQL instance of your choice.
We will also use two PostgreSQL Client Applications: psql and pg_restore. These terminal-based applications will allow us to interact with our PostgreSQL database server instance. The first application, `psql`, enables you to type in queries interactively, issue them to PostgreSQL, and see the query results. It will be useful to create the database and run queries to verify that the schema has been successfully replicated. On the other hand, we will use `pg_restore` to restore the PostgreSQL database from the archive file available in the GitHub repository. This archive file contains all the tables, relationships, and sample data from the Party Role model in a tabular format. It will serve as the starting point in our data migration journey.
The standard ready-to-use packages will already include both the server and these client tools. We recommend using version 15 or higher. You can download it from the official PostgreSQL Downloads site or, if you are a macOS user, just run the command below in your terminal.
```
brew install postgresql@15
```
>Note: Verify that Postgres database tools have been successfully installed by running `psql --version` and `pg_restore --version`. If you see an error message, make sure the containing directory of the tools is added to your `PATH`.
### Replicate the Party Role model in PostgreSQL
First, we need to connect to the PostgreSQL database.
```
psql -h <host> -p <port> -U <user> -d <database>
```
If it’s a newly installed local instance with the default parameters, you can use `127.0.0.1` as your host, `5432` as the port, `postgres` as database, and type `whoami` in your terminal to get your default username if no other has been specified during the installation of the PostgreSQL database server.
Once you are connected, we need to create a database to load the data.
```
CREATE DATABASE mongodb_insurance_model;
```
Then, we will create the user that will have access to the new database, so we don’t need to use the root user in the relational migrator. Please remember to change the password in the command below.
```
CREATE USER istadmin WITH PASSWORD '<password>';
ALTER DATABASE mongodb_insurance_model OWNER TO istadmin;
```
Finally, we will populate the database with the Party Role model, a standard widely used in the insurance industry to define how people, organizations, and groups are involved in agreements, policies, claims, insurable objects, and other major entities. This will not only replicate the table structure, relationships, and ownership, but it will also load some sample data.
1. First, download the .tar file that contains the backup of the database.
2. Navigate to the folder where the file is downloaded using your terminal.
3. Run the command below in your terminal to load the data. Please remember to change the host, port, and user before executing the command.
```
pg_restore -h <host> -p <port> -U <user> -d mongodb_insurance_model mongodb_insurance_model.tar
```
After a few seconds, our new database will be ready to use. Verify the successful restore by running the command below:
```
psql -h <host> -p <port> -U <user> -d mongodb_insurance_model -c "SELECT * FROM pg_catalog.pg_tables WHERE schemaname='omg';"
```
You should see a list of 21 tables similar to the one in the figure below.
If all looks good, you are ready to connect your data to MongoDB Relational Migrator.
### Connect to Relational Migrator
Open the Relational Migrator app and click on the “New Project” button. We will start a new project from scratch by connecting to the database we just created. Click on “Connect database,” select “PostgreSQL” as the database type, and fill in the connection details. Test the connection before proceeding and if the connection test is successful, click “Connect.” If a “no encryption” error is thrown, click on SSL → enable SSL.
In the next screen, select all 21 tables from the OMG schema and click “Next.” On this new screen, you will need to define your initial schema. We will start with a MongoDB schema that matches your relational schema. Leave the other options as default. Next, give the project a name and click “Done.”
This will generate a schema that matches the original one. That is, we will have one collection per table in the original schema. This is a good starting point, but as we have seen, one of the advantages of the document model is that it is able to reduce this initial complexity. To do so, we will take an object-modeling approach. We will focus on four top-level objects that will serve as the starting point to define the entire schema: Party, Policy, Claim, and Litigation.
By default, you will see a horizontal split view of the Relational (upper part) and MongoDB (lower part) model. You can change the view model from the bottom left corner “View” menu. Please note that all the following steps in the tutorial will be done in the MongoDB view (MDB). Feel free to change the view mode to “MDB” for a more spacious working view.
## Party domain
The Party Subject Area (Figure 3.) shows that all persons, organizations, and groups can be represented as “parties” and parties can then be related to other major objects with specified roles. The Party design also provides a common approach to describing communication identifiers, relationships between parties, and legal identifiers.
To illustrate the process in a simpler and clearer way, we reduced the number of objects and built a new ERD in Relational Migrator (Figure 4). Such models are most often implemented in run-time transactional systems. Their impact and dependencies can be found across multiple systems and domains. Additionally, they can result in very large physical database objects, and centralized storage and access patterns can be bottlenecks.
The key Party entities are:
Party represents people, organizations, and groups. In the original schema, this is represented through one-to-one relationships. Party holds the common attributes for all parties, while each of the other three tables stores the particularities of each party class. These differences result in distinct fields for each class, which forces tabular schemas to create new tables. The inherent flexibility of the document model allows embedding this information in a single document. To do this, follow the steps below:
- Select the "party" collection in the MDB view of Relational Migrator. At the moment, this collection has the same fields as the original matched table.
- On the right-hand side, you will see the mappings menu (Figure 5). Click on the “Add” button, select “Embedded documents,” and choose "person" in the “Source table” dropdown menu. Click “Save and close” and repeat this process for the "organization" and "grouping" tables.
- After this, you can remove the "person," "organization," and "grouping" collections. Right-click on them, select “Remove Entity,” and confirm “Remove from the MongoDB model.” You have already simplified your original model by three tables, and we’re just getting started.
Looking at Figure 4, we can see that there is another entity that could be easily embedded in the party collection: location addresses. In this case, this table has a many-to-many relationship facilitated by the "party_location_address" table. As a party can have many location addresses, instead of an embedded document, we will use an embedded array. You can do it in the following way:
- Select the collection "party" again, click the “Add” button, select “Embedded array,” and choose "party_location_address" in the “Source table” dropdown. Under the “All fields” checkbox, uncheck the `partyIdentifier` field. We are not going to need it. Addresses will be contained in the “party” document anyway. Leave the other fields as default and click the “Save and close” button.
- We have now established the relationship, but we want to have the address details too. From the “party” mapping menu, click the “Add” button again. Then, select “Embedded documents,” choose “location_address,” and in the “Root path” section, check the box that says “Merge fields into the parent.” This will ensure that we don’t have more nested fields than necessary. Click “Save and close.”
- You can now delete the “party_location_address” collection, but don’t delete “location_address” as it still has an existing relationship with “insurable_object.”
You are done. The “party” entity is ready to go. We have not only reduced six tables to just one, but the “person,” “organization,” and “grouping” embedded documents will only show up if that party is indeed a person, organization, or grouping. One collection can contain documents with different schemas for each of these classes.
At the beginning of the section, we also spoke about the “party role” entity. It represents the role a party plays in a specific context such as policy, claim, or litigation. In the original schema, this many-to-many relationship is facilitated via intermediate tables like “policy_party_role,” “claim_party_role,” and “litigation_party_role” respectively. These intermediate tables will be embedded in other collections, but the “party_role” table can be left out as a reference collection on its own. In this way, we avoid having to update one by one all policy, claim, and litigation documents if one of the attributes of “party role” changes.
Let’s see next how we can model the “policy” entity.
## Policy Domain
The key entities of Policy are:
From a top-level perspective, we can observe that the “policy” entity is composed of policy coverage parts and the agreements of each of the parties involved with their respective roles. A policy can have both several parts to cover and several parties agreements involved. Therefore, similarly to what happened with party location addresses, they will be matched to array embeddings.
Let’s start with the party agreements. A policy may have many parties involved, and each party may be part of many policies. This results in a many-to-many relationship facilitated by the “policy_party_role” table. This table also covers the relationships between roles and agreements, as each party will play a role and will have an agreement in a specific policy.
- From the MDB view, select the “policy” collection. Click on the “Add” button, select “embedded array,” and choose “policy_party_role” in the source table dropdown. Uncheck the `policyIdentifier` field, leave the other fields as default, and click “Save and close.”
- We will leave the party as a referenced object to the “party” collection we created earlier, so we don’t need to take any further action on this. The relationship remains in the new model through the `partyIdentifier` field acting as a foreign key. However, we need to include the agreements. From the “policy” mapping menu, click “Add,” select “Embedded document,” pick “agreement” as the source table, leave the other options as default, and click “Save and close.”
- At this point, we can remove the collections “policy_party_role” and “agreement.” Remember that we have defined “party_role” as a separate reference collection, so just having `partyRoleCode` as an identifier in the destination table will be enough.
Next, we will include the policy coverage parts.
- From the “policy” mapping menu, click “Add,” select “Embedded array,” pick “policy_coverage_part” as the source table, uncheck the `policyIdentifier` field, leave the other options as default, and click “Save and close.”
- Each coverage part has details included in the “policy_coverage_detail”. We will add this as an embedded array inside of each coverage part. In the “policy” mapping menu, click “Add,” select “Embedded array,” pick “policy_coverage_detail,” and make sure that the prefix selected in the “Root path” section is `policyCoverageParts`. Remove `policyIdentifier` and `coveragePartCode` fields and click “Save and close.”
- Coverage details include “limits,” “deductibles,” and “insurableObjects.” Let’s add them in! Click “Add” in the “policy” mapping menu, “Embedded Array,” pick “policy_limit,” remove the `policyCoverageDetailIdentifier`, and click “Save and close.” Repeat the process for “policy_deductible.” For “insurable_object,” repeat the process but select “Embedded document” instead of “Embedded array.”
- As you can see in Figure 8, insurable objects have additional relationships to specify the address and roles played by the different parties. To add them, we just need to embed them in the same fashion we have done so far. Click “Add” in the “policy” mapping menu, select “Embedded array,” and pick “insurable_object_party_role.” This is the table used to facilitate the many-to-many relationship between insurable objects and party roles. Uncheck `insurableObjectIdentifier` and click “Save and close.” Party will be referenced by the `partyIdentifier` field. For the sake of simplicity, we won’t embed address details here, but remember in a production environment, you would need to add it in a similar way as we did before in the “party” collection.
- After this, we can safely remove the collections “policy_coverage_part,” “policy_coverage_detail,” “policy_deductible,” and “policy_limit.”
By now, we should have a collection similar to the one below and five fewer tables from our original model.
## Claim & Litigation Domain
The key entities of Claim and Litigation are:
In this domain, we have already identified the two main entities: claim and litigation. We will use them as top-level documents to refactor the relationships shown in Figure 10 in a more intuitive way. Let’s see how you can model claims first.
- We’ll begin embedding the parties involved in a claim with their respective roles. Select “claim” collection, click “Add” in the mapping menu, select “Embedded array,” and pick “claim_party_role” as the source table. You can uncheck `claimIdentifier` from the field list. Last, click the “Save and close” button.
- Next, we will integrate the insurable object that is part of the claim. Repeat the previous step but choose “Embedded documents” as the table migration option and “insurable_object” as the source table. Again, we will not embed the “location_address” entity to keep it simple.
- Within `insurableObject`, we will include the policy coverage details establishing the link between claims and policies. Add a new mapping, select “Embedded array,” choose “policy_coverage_detail” as the source table, and uncheck the field `insurableObjectIdentifier`. Leave the other options as default.
- Lastly, we will recreate the many-to-many relationship between litigation and claim. As we will have a separate litigation entity, we just need to reference that entity from the claims document, which means that just having an array of litigation identifiers will be enough. Repeat the previous step by selecting “Embedded array,” “litigation_party_role,” and unchecking all fields except `litigationIdentifier` in the field list.
The claim model is ready to go. We can now remove the collection “claimPartyRole.”
Let’s continue with the litigation entity. Litigations may have several parties involved, each playing a specific role and with a particular associated claim. This relationship is facilitated through the “litigation_party_role” collection. We will represent it using an embedded array. Additionally, we will include some fields in the claim domain apart from its identifier. This is necessary so we can have a snapshot of the claim details at the time the litigation was made, so even if the claim details change, we won’t lose the original claim data associated with the litigation. To do so, follow the steps below:
- From the “litigation” mapping menu, click on the “Add” button, select “Embedded array,” and pick “litigation_party_role” as the source table. Remove `litigationIdentifier` from the field list and click “Save and Close.”
- In a similar way, add claim details by adding “claim” as an “Embedded document.”
- Repeat the process again but choose “insurable_object” as the source table for the embedded document. Make sure the root path prefix is set to `litigationPartyRoles.claim`.
- Finally, add “insurable_object_party_role” as an “Embedded array.” The root path prefix should be `litigationPartyRoles.claim.insurableObject`.
And that’s it. We have modeled the entire relationship schema in just five collections: “party,” “partyRole,” “policy,” “claim,” and “litigation.” You can remove the rest of the collections and compare the original tabular schema composed of 21 tables to the resulting five collections.
## Migrate your data to MongoDB
Now that our model is complete, we just need to migrate the data to our MongoDB instance. First, verify that you have “dbAdmin” permissions in the destination OMG database. You can check and update permissions from the Atlas left-side security menu in the “Database Access” section.
Once this is done, navigate to the “Data Migration” tab in the top navigation bar and click “Create sync job.” You will be prompted to add the source and destination database details. In our case, these are PostgreSQL and MongoDB respectively. Fill in the details and click “Connect” in both steps until you get to the “Migration Options” step. In this menu, we will leave all options as default. This will migrate our data in a snapshot mode, which means it will load all our data at once. Feel free to check our documentation for more sync job alternatives.
Finally, click the “Start” button and wait until the migration is complete. This can take a couple of minutes. Once ready, you will see the “Completed” tag in the snapshot state card. You can now connect to your database in MongoDB Atlas or Compass and check how all your data is now loaded in MongoDB ready to leverage all the advantages of the document model.
## Additional resources
Congratulations, you’ve just completed your data migration! We've not just simplified the data model of a standard insurance system; we've significantly modernized how information flows in the industry.
On the technical side, MongoDB's Relational Migrator truly is a game-changer, effortlessly transforming an unwieldy 21-table schema into a lean five-collection MongoDB model. This translates to quicker, more efficient data operations, making it a dream for developers and administrators alike.
On the business side, imagine the agility gained — faster time-to-market for new insurance products, swift adaptation to regulatory changes, and enhanced customer experiences.
The bottom line? MongoDB's document model and Relational Migrator aren't just tools; they're the catalysts for a future-ready, nimble insurance landscape.
If you want to learn how MongoDB can help you modernize, move to any cloud, and embrace the AI-driven future of insurance, check the resources below. What will you build next?
- MongoDB for Insurance
- Relational Migrator: Migrate to MongoDB with confidence
- From RDBMS to NoSQL at Enterprise Scale
>Access our GitHub repository for DDL scripts, Hackolade models, and more!
| md | {
"tags": [
"MongoDB"
],
"pageDescription": "This tutorial walks you through the refactoring of the OMG Party Role data model, a widely used insurance standard. With the help of MongoDB Relational Migrator you’ll be able to refactor your relational tables into MongoDB collections and reap all the document model benefits.",
"contentType": "Tutorial"
} | Modernize your insurance data models with MongoDB Relational Migrator | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/products/atlas/beyond-basics-enhancing-kotlin-ktor-api-vector-search | created | # Beyond Basics: Enhancing Kotlin Ktor API With Vector Search
In this article, we will delve into advanced MongoDB techniques in conjunction with the Kotlin Ktor API, building upon the foundation established in our previous article, Mastering Kotlin: Creating an API With Ktor and MongoDB Atlas. Our focus will be on integrating robust features such as Hugging Face, Vector Search, and MongoDB Atlas triggers/functions to augment the functionality and performance of our API.
We will start by providing an overview of these advanced MongoDB techniques and their critical role in contemporary API development. Subsequently, we will delve into practical implementations, showcasing how you can seamlessly integrate Hugging Face for natural language processing, leverage Vector Search for rapid data retrieval, and automate database processes using triggers and functions.
## Prerequisites
- MongoDB Atlas account
- Note: Get started with MongoDB Atlas for free! If you don’t already have an account, MongoDB offers a free-forever Atlas cluster.
- Hugging Face account
- Source code from the previous article
- MongoDB Tools
## Demonstration
We'll begin by importing a dataset of fitness exercises into MongoDB Atlas as documents. Then, we'll create a trigger that activates upon insertion. For each document in the dataset, a function will be invoked that calls Hugging Face's API. This function will send the exercise description to be converted into an embedding array, which will be saved into the exercises collection as *descEmbedding*:
Next, go to your Hugging Face account settings and generate an access token to create your key:
Then, use MongoDB Tools (for example, mongoimport) to import the exercises.json file via the command line. After installing MongoDB Tools, simply paste the "exercises.json" file into the "bin" folder and execute the command, as shown in the image below:
Our objective is to create an endpoint **/processRequest** to send an input to Hugging Face, such as:
*"**I need an exercise for my shoulders and to lose my belly fat**."*
If you have any questions or want to discuss further implementations, feel free to reach out to the MongoDB Developer Community forum for support and guidance.
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt634af19fb7ed14c5/65fc4b9c73d0bc30f7f3de73/1.png
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltd30af9afe5366352/65fc4bb7f2a29205cfbf725b/2.png
[3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltdf42f1df193dbd24/65fc4bd6e55fcb1058237447/3.png
[4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blta874d80fcc78b8bd/65fc4bf5d467d22d530bd73a/4.png
[5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt7547c6fcc6e1f2d2/65fc4c0fd95760d277508123/5.png
[6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt96197cd32df66580/65fc4c38d4e0c0250b2947b4/6.png
[7]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt38c3724da63f3c95/65fc4c56fc863105d7d732c1/7.png
[8]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt49218ab4f7a3cb91/65fc4c8ca1e8152dccd5da77/8.png
[9]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt9cae67683fad5f9c/65fc4ca3f2a2920d57bf7268/9.png
[10]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt5fc6270e5e2f8665/65fc4cbb5fa1c6c4db4bfb01/10.png
[11]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltf428bc700f44f2b5/65fc4cd6f4a4cf171d150bb2/11.png
[12]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltad71144e071e11af/65fc4cf0d467d2595d0bd74a/12.png
[13]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt9009a43a7cd07975/65fc4d6d039fddd047339cbe/13.png
[14]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltd016d8390bd80397/65fc4d83d957609ea9508134/14.png
[15]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltcb767717bc6af497/65fc4da49b2cda321e9404bd/15.png
[16]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltc2e0005d6df9a273/65fc4db80780b933c761f14f/16.png
[17]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt17850b744335f8f7/65fc4dce39973e99456eab16/17.png
[18]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt6915b7c63ea2bf5d/65fc4de754369a8839696baf/18.png
[19]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt75993bebed24f8ff/65fc4df9a93acb7b58313f7d/19.png
[20]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltc68892874fb5cafc/65fc4e0f55464dd4470e2097/20.png
[21]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltcdbd1c93b61b7535/65fc4e347a44b0822854bc61/21.png
[22]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt01ebdac5cf78243d/65fc4e4a54369ac59e696bbe/22.png
[23]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blte13c91279d8805ef/65fc4e5dfc8631011ed732e7/23.png
[24]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltb17a754e566be42b/65fc4e7054369ac0c5696bc2/24.png
[25]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt03f0581b399701e8/65fc4e8bd4e0c0e18c2947e2/25.png
[26]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt56c5fae14d1a2b6a/65fc4ea0d95760693a508145/26.png | md | {
"tags": [
"Atlas",
"Kotlin",
"AI"
],
"pageDescription": "Learn how to integrate Vector Search into your Kotlin with Ktor application using MongoDB.",
"contentType": "Tutorial"
} | Beyond Basics: Enhancing Kotlin Ktor API With Vector Search | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/products/atlas/build-inventory-management-system-using-mongodb-atlas | created | # Build an Inventory Management System Using MongoDB Atlas
In the competitive retail landscape, having the right stock in the right place at the right time is crucial. Too little inventory when and where it’s needed can create unhappy customers. However, a large inventory can increase costs and risks associated with its storage. Companies of all sizes struggle with inventory management. Solutions such as a single view of inventory, real-time analytics, and event-driven architectures can help your businesses overcome these challenges and take your inventory management to the next level. By the end of this guide, you'll have inventory management up and running, capable of all the solutions mentioned above.
We will walk you through the process of configuring and using MongoDB Atlas as your back end for your Next.js app, a powerful framework for building modern web applications with React.
The architecture we're about to set up is depicted in the diagram below:
Let's get started!
## Prerequisites
Before you begin working with this project, ensure that you have the following prerequisites set up in your development environment:
- **Git** (version 2.39 or higher): This project utilizes Git for version control. Make sure you have Git installed on your system. You can download and install the latest version of Git from the official website: Git Downloads.
- **Node.js** (version 20 or higher) and **npm** (version 9.6 or higher): The project relies on the Node.js runtime environment and npm (Node Package Manager) to manage dependencies and run scripts. You need to have them both installed on your machine. You can download Node.js from the official website: Node.js Downloads. After installing Node.js, npm will be available by default.
- **jq** (version 1.6 or higher): jq is a lightweight and flexible command-line JSON processor. We will use it to filter and format some command outputs to better identify the values we are interested in. Visit the official Download jq page to get the latest version.
- **mongorestore** (version 100.9.4 or higher): The mongorestore tool loads data from a binary database dump. The dump directory in the GitHub repository includes a demo database with preloaded collections, views, and indexes, to get you up and running in no time. This tool is part of the MongoDB Database Tools package. Follow the Database Tools Installation Guide to install mongorestore. When you are done with the installation, run mongorestore --version in your terminal to verify the tool is ready to use.
- **App Services CLI** (version 1.3.1 or higher): The Atlas App Services Command Line Interface (appservices) allows you to programmatically manage your applications. We will use it to speed up the app backend setup by using the provided template in the app_services directory in the GitHub repository. App Services CLI is available on npm. To install the CLI on your system, ensure that you have Node.js installed and then run the following command in your shell: npm install -g atlas-app-services-cli.
- **MongoDB Atlas cluster** (M0 or higher): This project uses a MongoDB Atlas cluster to manage the database. You should have a MongoDB Atlas account and a minimum free tier cluster set up. If you don't have an account, you can sign up for free at MongoDB Atlas. Once you have an account, follow these steps to set up a minimum free tier cluster or follow the Getting Started guide:
- Log into your MongoDB Atlas account.
- Create a new project or use an existing one, and then click “Create a new database.”
- Choose the free tier option (M0).
- You can choose the cloud provider of your choice but we recommend using the same provider and region both for the cluster and the app hosting in order to improve performance.
- Configure the cluster settings according to your preferences and then click “finish and close” on the bottom right.
## Initial configuration
### Obtain your connection string
Once the MongoDB Atlas cluster is set up, locate your newly created cluster, click the "Connect" button, and select the "Compass" section. Copy the provided connection string. It should resemble something like this:
```
mongodb+srv://<username>:<password>@cluster-name.xxxxx.mongodb.net/
```
> Note: You will need the connection string to set up your environment variables later (`MONGODB_URI`).
### Cloning the GitHub repository
Now, it's time to clone the demo app source code from GitHub to your local machine:
1. Open your terminal or command prompt.
2. Navigate to your preferred directory where you want to store the project using the cd command. For example:
```
cd /path/to/your/desired/directory
```
3. Once you're in the desired directory, use the `git clone` command to clone the repository. Copy the repository URL from the GitHub repository's main page:
```
git clone git@github.com:mongodb-industry-solutions/Inventory_mgmt.git
```
4. After running the `git clone` command, a new directory with the repository's name will be created in your chosen directory. To navigate into the cloned repository, use the cd command:
```
cd Inventory_mgmt
```
## MongoDB Atlas configuration
### Replicate the sample database
The database contains:
- Five collections
- **Products**: The sample database contains 17 products corresponding to T-shirts of different colors. Each product has five variants that represent five different sizes, from XS to XL. These variants are stored as an embedded array inside the product. Each variant will have a different SKU and therefore, its own stock level. Stock is stored both at item (`items.stock`) and product level (`total_stock_sum`).
- **Transactions**: This collection will be empty initially. Transactions will be generated using the app, and they can be of inbound or outbound type. Outbound transactions result in a decrease in the product stock such as a sale. On the other hand, inbound transactions result in a product stock increase, such as a replenishment order.
- **Locations**: This collection stores details of each of the locations where we want to keep track of the product stock. For the sake of this guide, we will just have two stores to demonstrate a multi-store scenario, but this could be scaled to thousands of locations. Warehouses and other intermediate locations could be also included. In this case, we assume a single warehouse, and therefore, we don’t need to include a location record for it.
- **Users**: Our app will have three users: two store managers and one area manager. Store managers will be in charge of the inventory for each of the store locations. Both stores are part of the same area, and the area manager will have an overview of the inventory in all stores assigned to the area.
- **Counters**: This support collection will keep track of the number of documents in the transactions collection so an auto-increment number can be assigned to each transaction. In this way, apart from the default _id field, we can have a human-readable transaction identifier.
- One view:
- Product area view: This view is used by the area manager to have an overview of the inventory in the area. Using the aggregation pipeline, the product and item stock levels are grouped for all the locations in the same area.
- One index:
- The number of transactions can grow quickly as we use the app. To improve performance, it is a good practice to set indexes that can be leveraged by common queries. In this case, the latest transactions are usually more relevant and therefore, they are displayed first. We also tend to filter them by type — inbound/outbound — and product. These three fields — `placement_timestamp`, type, and `product.name` — are part of a compound index that will help us to improve transaction retrieval time.
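Both the counters pattern and the compound index described above are already included in the dump, so there is nothing to build here. Purely as an illustration of how they could be reproduced, the sketch below uses PyMongo; the connection string, database name, and counter document `_id` are placeholders/assumptions.
```python
# Illustration only: the dump already ships with this index, and the app
# implements the counter logic itself. Placeholders mark assumed names.
from pymongo import MongoClient, ASCENDING, DESCENDING, ReturnDocument

client = MongoClient("<your-connection-string>")
db = client["<your-database-name>"]  # the database you restore the dump into

# Counters pattern: atomically increment a sequence to get a human-readable
# transaction number (the counter document's _id is an assumed convention).
counter = db.counters.find_one_and_update(
    {"_id": "transactions"},
    {"$inc": {"seq": 1}},
    upsert=True,
    return_document=ReturnDocument.AFTER,
)
next_transaction_number = counter["seq"]

# Compound index on the three fields used by the most common transaction
# queries: newest first, filtered by type and product name.
db.transactions.create_index(
    [
        ("placement_timestamp", DESCENDING),
        ("type", ASCENDING),
        ("product.name", ASCENDING),
    ]
)
```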
To replicate the sample database on your MongoDB Atlas cluster, run the following command in your terminal:
```
mongorestore --uri "<your-connection-string>" dump/
```
Make sure to replace `<your-connection-string>` with your MongoDB Atlas connection string. If you've already followed the initial configuration steps, you should have obtained this connection string. Ensure that the URI includes the username, password, and cluster details.
After executing these commands, you can verify the successful restoration of the demo database by checking the last line of the command output, which should display "22 document(s) restored successfully." These correspond to the 17 products, three users, and two locations mentioned earlier.
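If you want an extra check beyond the command output, a short PyMongo session can confirm that the collections landed where you expect. This is optional and only a sketch — the connection string and database name below are placeholders for your own values.
```python
# Optional sanity check of the restored demo database (placeholders are
# assumptions — substitute your own connection string and database name).
from pymongo import MongoClient

client = MongoClient("<your-connection-string>")
db = client["<your-database-name>"]

for name in ["products", "locations", "users", "transactions", "counters"]:
    print(f"{name}: {db[name].estimated_document_count()} documents")
```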
### Set up Atlas App Services
Atlas App Services are fully managed backend services and APIs that help you build apps, integrate services, and connect to your Atlas data faster.
Atlas’s built-in device-to-cloud-synchronization service — Device Sync — will enable real-time low-stock alerts. Triggers and functions can execute serverless application and database logic in response to these events to automatically issue replenishment orders. And by using the Data API and Custom HTTPS Endpoints, we ensure a seamless and secure integration with the rest of the components in our inventory management solution.
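The demo wires this up with Atlas Triggers and Functions, so there is nothing extra to build here. Purely to illustrate the underlying mechanism, the same low-stock detection could be expressed with a MongoDB change stream. The sketch below uses PyMongo, and the stock sub-field names are assumptions about the product schema.
```python
# Illustrative sketch only — the demo app relies on Atlas Triggers/Functions
# instead. This watches product updates and flags items whose stock drops to
# or below a threshold (the stock sub-field names are assumptions).
import pymongo

client = pymongo.MongoClient("<your-connection-string>")
products = client["<your-database-name>"]["products"]

with products.watch(
    [{"$match": {"operationType": "update"}}],
    full_document="updateLookup",
) as stream:
    for change in stream:
        product = change["fullDocument"]
        for item in product.get("items", []):
            stock = item.get("stock", {})
            if stock.get("amount", 0) <= stock.get("threshold", 0):
                print(
                    f"Low stock for {product.get('name')} ({item.get('name')}): "
                    "time to issue a replenishment order"
                )
```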
Check how the stock is automatically replenished when a low-stock event occurs.
2. Generate an API key pair to authenticate your CLI calls. Navigate to MongoDB Cloud Access Manager, click the "Create API Key" button, and select the `Project Owner` permission level. For an extra layer of security, you can add your current IP address to the Access List Entry.
3. Authenticate your CLI user by running the command below in your terminal. Make sure you replace the public and private API keys with the ones we just generated in the previous step.
```
appservices login --api-key="<your public API key>" --private-api-key="<your private API key>"
```
4. Import the app by running the following command. Remember to replace `<your-app-name>` with your preferred name.
```
appservices push --local ./app_services/ --remote <your-app-name>
```
You will be prompted to configure the app options. Set them according to your needs. If you are unsure which options to choose, the default ones are usually a good way to start! For example, this is the configuration I've used.
```
? Do you wish to create a new app? Yes
? App Name inventory-management-demo
? App Deployment Model LOCAL
? Cloud Provider aws
? App Region aws-eu-west-1
? App Environment testing
? Please confirm the new app details shown above Yes
```
Once the app is successfully created, you will be asked to confirm some changes. These changes will load the functions, triggers, HTTP endpoints, and other configuration parameters our inventory management system will use.
After a few seconds, you will see a success message like “Successfully pushed app up: <your app ID>”. Take note of the obtained app ID.
5. In addition to the app ID, our front end will also need the base URL to send HTTP requests to the back end. Run the command below in your terminal to obtain it. Remember to replace `<your app ID>` with your own value. The jq tool will help us to get the appropriate field and format. Take note of the obtained URI.
```
appservices apps describe --app <your app ID> -f json | jq -r '.doc.http_endpoints[0].url | split("/") | (.[0] + "//" + .[2])'
```
6. Finally, our calls to the back end will need to be authenticated. For this reason, we will create an API key that will be used by the server side of our inventory management system to generate an access token. It is only this access token that will be passed to the client side of the system to authenticate the calls to the back end.
> Important: This API key is not the same as the key used to log into the `appservices` CLI.
Again, before running the command, remember to replace the placeholder `<your app ID>`.
```
appservices users create --type=api-key --app=<your app ID> --name=tutorial-key
```
After a few seconds, you should see the message “Successfully created API Key,” followed by a JSON object. Copy the content of the field `key` and store it in a secure place. Remember that if you lose this key, you will need to create a new one.
> Note: You will need the app ID, base App Services URI, and API key to set up your environment variables later (`REALM_APP_ID`, `APP_SERVICES_URI`, `API_KEY`).
### Set up Atlas Search and filter facets
Follow these steps to configure search indexes for full-text search and filter facets:
1. Navigate to the "Data Services" section within Atlas. Select your cluster and click on "Atlas Search" located next to "Collections."
2. If you are in the M0 tier, you can create two search indexes for the products collection. This will allow you to search across the products collection only. However, if you have a tier above M0, you can create additional search indexes. This will come in handy if you want to search and filter not only across your product catalog but also your transaction records, such as sales and replenishment orders.
3. Let's begin with creating the indexes for full-text search:
1. Click "Create Search Index."
2. You can choose to use either the visual or JSON editor. Select "JSON Editor" and click "Next."
3. Leave the index name as `default`.
4. Select your newly created database and choose the **products** collection. We will leave the default index definition, which should look like the one below.
```
{
"mappings": {
"dynamic": true
}
}
```
5. Click "Next" and on the next screen, confirm by clicking "Create Search Index."
6. After a few moments, your index will be ready for use. While you wait, you can create the other search index for the **transactions** collection. You need to repeat the same process but change the selected collection in the "Database and Collection" menu next to the JSON Editor.
> Important: The name of the index (default) must be the same in order for the application to be able to work properly.
4. Now, let's proceed to create the indexes required for the filter facets. Note that this process is slightly different from creating default search indexes:
1. Click "Create Index" again, select the JSON Editor, and click "Next."
2. Name this index `facets`.
3. Select your database and the **products** collection. For the index definition, paste the code below.
**Facets index definition for products**
```javascript
{
"mappings": {
"dynamic": false,
"fields": {
"items": {
"fields": {
"name": {
"type": "stringFacet"
}
},
"type": "document"
},
"name": {
"type": "stringFacet"
}
}
}
}
```
Click "Next" and confirm by clicking "Create Search Index." The indexing process will take some time. You can create the **transactions** index while waiting for the indexing to complete. In order to do that, just repeat the process but change the selected collection and the index definition by the one below:
**Facets index definition for transactions**
```javascript
{
"mappings": {
"dynamic": false,
"fields": {
"items": {
"fields": {
"name": {
"type": "stringFacet"
},
"product": {
"fields": {
"name": {
"type": "stringFacet"
}
},
"type": "document"
}
},
"type": "document"
}
}
}
}
```
> Important: The name of the index (`facets`) must be the same in order for the application to be able to work properly.
By setting up these search indexes and filter facets, your application will gain powerful search and filtering capabilities, making it more user-friendly and efficient in managing inventory data.
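To see what the application gets back from the `facets` index, you can run a `$searchMeta` facet query yourself once the index is active. The sketch below uses PyMongo purely for illustration (the app issues equivalent queries from its API routes); the facet labels are arbitrary, and the connection string and database name are placeholders.
```python
# Illustration only: ask Atlas Search for facet counts over product and item
# names using the "facets" index defined above.
from pymongo import MongoClient

client = MongoClient("<your-connection-string>")
products = client["<your-database-name>"]["products"]

pipeline = [
    {
        "$searchMeta": {
            "index": "facets",
            "facet": {
                "facets": {
                    "productsFacet": {"type": "string", "path": "name"},
                    "itemsFacet": {"type": "string", "path": "items.name"},
                }
            },
        }
    }
]

for meta in products.aggregate(pipeline):
    print(meta["facet"])
```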
### Set up Atlas Charts
Enhance your application's visualization and analytics capabilities with Atlas Charts. Follow these steps to set up two dashboards — one for product information and another for general analytics:
1. Navigate to the "Charts" section located next to "App Services."
2. Let's begin by creating the product dashboard:
1. If this is your first time using Atlas Charts, click on “Chart builder.” Then, select the relevant project, the database, and the collection.
2. If you’ve already used Atlas Charts (i.e., you’re not a first-time user), then click on "Add Dashboard" in the top right corner. Give the dashboard a name and an optional description. Choose a name that clearly reflects the purpose of the dashboard. You don't need to worry about the charts in the dashboard for now. You'll configure them after the app is ready to use.
3. Return to the Dashboards menu, click on the three dots in the top right corner of the newly created dashboard, and select "Embed."
4. Check the "Enable unauthenticated access" option. In the "Allowed filter fields" section, edit the fields and select "Allow all fields in the data sources used in this dashboard." Choose the embedding method through the JavaScript SDK, and copy both the "Base URL" and the "Dashboard ID." Click “Close.”
5. Repeat the same process for the general dashboard. Select products again, as we will update this once the app has generated data. Note that the "Base URL" will be the same for both dashboards but the “dashboard ID” will be different so please take note of it.
> Note: You will need the base URL and dashboard IDs to set up your environment variables later (`CHARTS_EMBED_SDK_BASEURL`, `DASHBOARD_ID_PRODUCT`, `DASHBOARD_ID_GENERAL`).
Setting up Atlas Charts will provide you with visually appealing and insightful dashboards to monitor product information and overall analytics, enhancing your decision-making process and improving the efficiency of your inventory management system.
## Frontend configuration
### Set up environment variables
Copy the `env.local.example` file in this directory to `.env.local` (which will be ignored by Git), as seen below:
```
cp .env.local.example .env.local
```
Now, open this file in your preferred text editor and update each variable on .env.local.
Remember all of the notes you took earlier? Grab them because you’ll use them now! Also, remember to remove any spaces after the equal sign.
- `MONGODB_URI` — This is your MongoDB connection string to MongoDB Atlas. You can find this by clicking the "Connect" button for your cluster. Note that you will have to input your Atlas password into the connection string.
- `MONGODB_DATABASE_NAME` — This is your MongoDB database name for inventory management.
- `REALM_APP_ID` — This variable should contain the app ID of the MongoDB Atlas App Services app you've created for the purpose of this project.
- `APP_SERVICES_URI` — This is the base URL for your MongoDB App Services. It typically follows the format `https://<region>.<cloud-provider>.data.mongodb-api.com`.
- `API_KEY` — This is your API key for authenticating calls using the MongoDB Data API.
- `CHARTS_EMBED_SDK_BASEURL` — This variable should hold the URL of the charts you want to embed in your application.
- `DASHBOARD_ID_PRODUCT` — This variable should store the Atlas Charts dashboard ID for product information.
- `DASHBOARD_ID_GENERAL` — This variable should store the Atlas Charts dashboard ID for the general analytics tab.
> Note: You may observe that some environment variables in the .env.local.example file are commented out. Don’t worry about them for now. These variables will be used in the second part of the inventory management tutorial series.
Please remember to save the updated file.
### Run locally
Execute the following commands to run your app locally:
```
npm ci
npm run dev
```
Your app should be up and running on http://localhost:3000! If it doesn't work, ensure that you have provided the correct environment variables.
Also, make sure your local IP is in the Access List of your project. If it’s not, just click the “Add IP address” button in the top right corner. This will display a popup menu. Within the menu, select “Add current IP address,” and click “Confirm.”
### Enable real-time analytics
1. To create a general analytics dashboard based on sales, we will need to generate sales data. Navigate to the control panel in your app by clicking http://localhost:3000/control.
2. Then, click the “start selling” button. When you start selling, remember not to close this window, as selling only works while the window is open. This will simulate a sale every five seconds, so we recommend letting it run for a couple of minutes.
3. In the meantime, navigate back to Atlas Charts to create a general analytics dashboard. For example, you can create a line graph that displays sales over the last hour, minute by minute. Now, you’ll see live data coming in, offering you real-time insights!
To achieve this, from the general dashboard, click “Add Chart” and select `transactions` as the data source. Select “Discrete Line” in the chart type dropdown menu. Then, you will need to add `timestamp` in the X axis and `quantity` in the Y axis.
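The same per-minute sales view can be reproduced with an aggregation if you want to inspect the numbers behind the chart. The sketch below is only illustrative and makes assumptions about the transaction schema — it uses the `timestamp` and `quantity` fields referenced by the chart configuration above plus the `type` field described earlier — so adjust the field paths if your documents differ.
```python
# Sketch: outbound transactions (sales) per minute over the last hour.
# Field names follow the chart configuration above and are assumptions.
from datetime import datetime, timedelta, timezone
from pymongo import MongoClient

client = MongoClient("<your-connection-string>")
transactions = client["<your-database-name>"]["transactions"]

one_hour_ago = datetime.now(timezone.utc) - timedelta(hours=1)
pipeline = [
    {"$match": {"type": "outbound", "timestamp": {"$gte": one_hour_ago}}},
    {
        "$group": {
            "_id": {"$dateTrunc": {"date": "$timestamp", "unit": "minute"}},
            "units_sold": {"$sum": "$quantity"},
        }
    },
    {"$sort": {"_id": 1}},
]

for minute in transactions.aggregate(pipeline):
    print(minute)
```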
Stay tuned for the second part of this guide to learn how to enable offline inventory management with Atlas Edge Server.
Curious for more? Learn how MongoDB is helping retailers to build modern consumer experiences. Check the additional resources below:
- MongoDB for Retail Innovation
- How to Enhance Inventory Management With Real-Time Data Strategies
- Radial Powers Retail Sales With 10x Higher Performance on MongoDB Atlas
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltfbb152b55b18d55f/66213da1ac4b003831c3fdee/1.png
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt5c9cf9349644772a/66213dc851b16f3fd6c4b39d/2.png
[3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt0346d4fd163ccf8f/66213de5c9de46299bd456c3/3.gif
[4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltcbb13b4ab1197d82/66213e0aa02ad73b34ee6aa5/4.png
[5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt70eebbb5ceb633ca/66213e29b054413a7e99b163/5.png
[6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltf4880b64b44d0e59/66213e45a02ad7144fee6aaa/6.png
[7]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt7501da1b41ea0574/66213e6233301d04c488fb0f/7.png
[8]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltd493259a179fefa3/66213eaf210d902e8c3a2157/8.png
[9]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt902ff936d063d450/66213ecea02ad76743ee6ab9/9.png
[10]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltd99614dbd04fd894/66213ee545f9898396cf295c/10.png
[11]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt993d7e3579fb930e/66213efffb977c3ce2368432/11.png
[12]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt9a5075ae0de6f94b/6621579d81c884e44937d10f/12-fixed.gif | md | {
"tags": [
"Atlas",
"JavaScript"
],
"pageDescription": "This tutorial takes you through the process of building a web app capable of efficiently navigating through your product catalog, receiving alerts, and automating restock workflows, all while maintaining control of your inventory through real-time analytics.",
"contentType": "Tutorial"
} | Build an Inventory Management System Using MongoDB Atlas | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/csharp/csharp-crud-tutorial | created | # MongoDB & C Sharp: CRUD Operations Tutorial
In this Quick Start post, I'll show how to set up connections between C# and MongoDB. Then I'll walk through the database Create, Read, Update, and Delete (CRUD) operations. As you already know, C# is a general-purpose language and MongoDB is a general-purpose data platform. Together, C# and MongoDB are a powerful combination.
## Series Tools & Versions
The tools and versions I'm using for this series are:
- MongoDB Atlas with an M0 free cluster,
- MongoDB Sample Dataset loaded, specifically the `sample_training` database and its `grades` collection,
- Windows 10,
- Visual Studio Community 2019,
- NuGet packages,
- MongoDB C# Driver: version 2.9.1,
- MongoDB BSON Library: version 2.9.1.
>C# is a popular language when using the .NET framework. If you're going to be developing in .NET and using MongoDB as your data layer, the C# driver makes it easy to do so.
## Setup
To follow along, I'll be using Visual Studio 2019 on Windows 10 and connecting to a MongoDB Atlas cluster. If you're using a different OS, IDE, or text editor, the walkthrough might be slightly different, but the code itself should be fairly similar. Let's jump in and take a look at how nicely C# and MongoDB work together.
>Get started with an M0 cluster on MongoDB Atlas today. It's free forever and you'll be able to work alongside this blog series.
For this demonstration, I've chosen a Console App (.NET Core), and I've named it `MongoDBConnectionDemo`. Next, we need to install the MongoDB Driver for C#/.NET for a Solution. We can do that quite easily with NuGet. Inside Visual Studio for Windows, go to *Tools* -> *NuGet Package Manager* -> *Manage NuGet Packages for Solution...* and browse for *MongoDB.Driver*. Then click on our Project and select the driver version we want. In this case, the latest stable version is 2.9.1. Then click on *Install*. Accept any license agreements that pop up and head into `Program.cs` to get started.
### Putting the Driver to Work
To use the `MongoDB.Driver` we need to add a directive.
``` csp
using MongoDB.Driver;
```
Inside the `Main()` method we'll establish a connection to MongoDB Atlas with a connection string and to test the connection we'll print out a list of the databases on the server. The Atlas cluster to which we'll be connecting has the MongoDB Atlas Sample Dataset installed, so we'll be able to see a nice database list.
The first step is to pass in the MongoDB Atlas connection string into a MongoClient object, then we can get the list of databases and print them out.
``` csp
MongoClient dbClient = new MongoClient("<<your Atlas connection string>>");
var dbList = dbClient.ListDatabases().ToList();
Console.WriteLine("The list of databases on this server is: ");
foreach (var db in dbList)
{
Console.WriteLine(db);
}
```
When we run the program, we get the following output showing the list of databases:
``` bash
The list of databases on this server is:
{ "name" : "sample_airbnb", "sizeOnDisk" : 57466880.0, "empty" : false }
{ "name" : "sample_geospatial", "sizeOnDisk" : 1384448.0, "empty" : false }
{ "name" : "sample_mflix", "sizeOnDisk" : 45084672.0, "empty" : false }
{ "name" : "sample_supplies", "sizeOnDisk" : 1347584.0, "empty" : false }
{ "name" : "sample_training", "sizeOnDisk" : 73191424.0, "empty" : false }
{ "name" : "sample_weatherdata", "sizeOnDisk" : 4427776.0, "empty" : false }
{ "name" : "admin", "sizeOnDisk" : 245760.0, "empty" : false }
{ "name" : "local", "sizeOnDisk" : 1919799296.0, "empty" : false }
```
The whole program thus far comes in at just over 20 lines of code:
``` csp
using System;
using MongoDB.Driver;
namespace test
{
class Program
{
static void Main(string[] args)
{
MongoClient dbClient = new MongoClient("<<your Atlas connection string>>");
var dbList = dbClient.ListDatabases().ToList();
Console.WriteLine("The list of databases on this server is: ");
foreach (var db in dbList)
{
Console.WriteLine(db);
}
}
}
}
```
With a connection in place, let's move on and start doing CRUD operations inside the MongoDB Atlas database. The first step there is to *Create* some data.
## Create
### Data
MongoDB stores data in JSON Documents. Actually, they are stored as Binary JSON (BSON) objects on disk, but that's another blog post. In our sample dataset, there is a `sample_training` database with a `grades` collection. Here's what a sample document in that collection looks like:
``` json
{
"_id":{"$oid":"56d5f7eb604eb380b0d8d8ce"},
"student_id":{"$numberDouble":"0"},
"scores":[
{"type":"exam","score":{"$numberDouble":"78.40446309504266"}},
{"type":"quiz","score":{"$numberDouble":"73.36224783231339"}},
{"type":"homework","score":{"$numberDouble":"46.980982486720535"}},
{"type":"homework","score":{"$numberDouble":"76.67556138656222"}}
],
"class_id":{"$numberDouble":"339"}
}
```
### Connecting to a Specific Collection
There are 10,000 students in this collection, 0-9,999. Let's add one more by using C#. To do this, we'll need to use another package from NuGet, `MongoDB.Bson`. I'll start a new Solution in Visual Studio and call it `MongoDBCRUDExample`. I'll install the `MongoDB.Bson` and `MongoDB.Driver` packages and use the connection string provided from MongoDB Atlas. Next, I'll access our specific database and collection, `sample_training` and `grades`, respectively.
``` csp
using System;
using MongoDB.Bson;
using MongoDB.Driver;
namespace MongoDBCRUDExample
{
class Program
{
static void Main(string[] args)
{
MongoClient dbClient = new MongoClient("<<your Atlas connection string>>");
var database = dbClient.GetDatabase("sample_training");
var collection = database.GetCollection<BsonDocument>("grades");
}
}
}
```
#### Creating a BSON Document
The `collection` variable is now our key reference point to our data. Since we are using a `BsonDocument` when assigning our `collection` variable, I've indicated that I'm not going to be using a pre-defined schema. This utilizes the power and flexibility of MongoDB's document model. I could define a plain-old-C#-object (POCO) to more strictly define a schema. I'll take a look at that option in a future post. For now, I'll create a new `BsonDocument` to insert into the database.
``` csp
var document = new BsonDocument
{
{ "student_id", 10000 },
{ "scores", new BsonArray
{
new BsonDocument{ {"type", "exam"}, {"score", 88.12334193287023 } },
new BsonDocument{ {"type", "quiz"}, {"score", 74.92381029342834 } },
new BsonDocument{ {"type", "homework"}, {"score", 89.97929384290324 } },
new BsonDocument{ {"type", "homework"}, {"score", 82.12931030513218 } }
}
},
{ "class_id", 480}
};
```
### Create Operation
Then to *Create* the document in the `sample_training.grades` collection, we can do an insert operation.
``` csp
collection.InsertOne(document);
```
If you need to do that insert asynchronously, the MongoDB C# driver is fully async compatible. The same operation could be done with:
``` csp
await collection.InsertOneAsync(document);
```
If you have a need to insert multiple documents at the same time, MongoDB has you covered there as well with the `InsertMany` or `InsertManyAsync` methods.
We've seen how to structure a BSON Document in C# and then *Create* it inside a MongoDB database. The MongoDB C# Driver makes it easy to do with the `InsertOne()`, `InsertOneAsync()`, `InsertMany()`, or `InsertManyAsync()` methods. Now that we have *Created* data, we'll want to *Read* it.
## Read
To *Read* documents in MongoDB, we use the `Find()` method. This method allows us to chain a variety of methods to it, some of which I'll explore in this post. To get the first document in the collection, we can use the `FirstOrDefault` or `FirstOrDefaultAsync` method, and print the result to the console.
``` csp
var firstDocument = collection.Find(new BsonDocument()).FirstOrDefault();
Console.WriteLine(firstDocument.ToString());
```
returns...
``` json
{ "_id" : ObjectId("56d5f7eb604eb380b0d8d8ce"),
"student_id" : 0.0,
"scores" :
{ "type" : "exam", "score" : 78.404463095042658 },
{ "type" : "quiz", "score" : 73.362247832313386 },
{ "type" : "homework", "score" : 46.980982486720535 },
{ "type" : "homework", "score" : 76.675561386562222 }
],
"class_id" : 339.0 }
```
You may wonder why we aren't using `Single` as that returns one document too. Well, that has to also ensure the returned document is the only document like that in the collection and that means scanning the whole collection.
### Reading with a Filter
Let's find the document we created and print it out to the console. The first step is to create a filter to query for our specific document.
``` csp
var filter = Builders<BsonDocument>.Filter.Eq("student_id", 10000);
```
Here we're setting a filter to look for a document where the `student_id` is equal to `10000`. We can pass the filter into the `Find()` method to get the first document that matches the query.
``` csp
var studentDocument = collection.Find(filter).FirstOrDefault();
Console.WriteLine(studentDocument.ToString());
```
returns...
``` json
{ "_id" : ObjectId("5d88f88cec6103751b8a0d7f"),
"student_id" : 10000,
"scores" :
{ "type" : "exam", "score" : 88.123341932870233 },
{ "type" : "quiz", "score" : 74.923810293428346 },
{ "type" : "homework", "score" : 89.979293842903246 },
{ "type" : "homework", "score" : 82.129310305132179 }
],
"class_id" : 480 }
```
If no document matches the query, `FirstOrDefault()` returns null. Finding the first document in a collection, or the first that matches a query, is a frequent task. However, what about situations when all documents need to be returned, either from a collection or from a query?
### Reading All Documents
For situations in which the expected result set is small, the `ToList()` or `ToListAsync()` methods can be used to retrieve all documents from a query or in a collection.
``` csp
var documents = collection.Find(new BsonDocument()).ToList();
```
Filters can be passed in here as well, for example, to get documents with exam scores equal or above 95. The filter here looks slightly more complicated, but thanks to the MongoDB driver syntax, it is relatively easy to follow. We're filtering on documents in which inside the `scores` array there is an `exam` subdocument with a `score` value greater than or equal to 95.
``` csp
var highExamScoreFilter = Builders<BsonDocument>.Filter.ElemMatch<BsonValue>(
    "scores", new BsonDocument { { "type", "exam" },
    { "score", new BsonDocument { { "$gte", 95 } } }
});
var highExamScores = collection.Find(highExamScoreFilter).ToList();
```
For situations where it's necessary to iterate over the documents that are returned there are a couple of ways to accomplish that as well. In a synchronous situation, a C# `foreach` statement can be used with the `ToEnumerable` adapter method. In this situation, instead of using the `ToList()` method, we'll use the `ToCursor()` method.
``` csp
var cursor = collection.Find(highExamScoreFilter).ToCursor();
foreach (var document in cursor.ToEnumerable())
{
Console.WriteLine(document);
}
```
This can be accomplished in an asynchronous fashion with the `ForEachAsync` method as well:
``` csp
await collection.Find(highExamScoreFilter).ForEachAsync(document => Console.WriteLine(document));
```
### Sorting
With many documents coming back in the result set, it is often helpful to sort the results. We can use the `Sort()` method to accomplish this and see which student had the highest exam score.
``` csp
var sort = Builders<BsonDocument>.Sort.Descending("student_id");
var highestScores = collection.Find(highExamScoreFilter).Sort(sort);
```
And we can append the `First()` method to that to just get the top student.
``` csp
var highestScore = collection.Find(highExamScoreFilter).Sort(sort).First();
Console.WriteLine(highestScore);
```
Based on the Atlas Sample Data Set, the document with a `student_id` of 9997 should be returned with an exam score of 95.441609472871946.
You can see the full code for both the *Create* and *Read* operations I've shown in the gist here.
The C# Driver for MongoDB provides many ways to *Read* data from the database and supports both synchronous and asynchronous methods for querying the data. By passing a filter into the `Find()` method, we are able to query for specific records. The syntax to build filters and query the database is straightforward and easy to read, making this step of CRUD operations in C# and MongoDB simple to use.
With the data created and being able to be read, let's take a look at how we can perform *Update* operations.
## Update
So far in this C# Quick Start for MongoDB CRUD operations, we have explored how to *Create* and *Read* data into a MongoDB database using C#. We saw how to add filters to our query and how to sort the data. This section is about the *Update* operation and how C# and MongoDB work together to accomplish this important task.
Recall that we've been working with this `BsonDocument` version of a student record:
``` csp
var document = new BsonDocument
{
{ "student_id", 10000 },
{ "scores", new BsonArray
{
new BsonDocument{ {"type", "exam"}, {"score", 88.12334193287023 } },
new BsonDocument{ {"type", "quiz"}, {"score", 74.92381029342834 } },
new BsonDocument{ {"type", "homework"}, {"score", 89.97929384290324 } },
new BsonDocument{ {"type", "homework"}, {"score", 82.12931030513218 } }
}
},
{ "class_id", 480}
};
```
After getting part way through the grading term, our sample student's instructor notices that he's been attending the wrong class section. Due to this error the school administration has to change, or *update*, the `class_id` associated with his record. He'll be moving into section 483.
### Updating Data
To update a document, we need to pass two things into an `Update` command. First, we need a filter to determine *which* documents will be updated. Second, we need the update itself — what we want to change.
### Update Filter
For our example, we want to filter based on the document with `student_id` equaling 10000.
``` csp
var filter = Builders<BsonDocument>.Filter.Eq("student_id", 10000);
```
### Data to be Changed
Next, we want to make the change to the `class_id`. We can do that with `Set()` on the `Update()` method.
``` csp
var update = Builders<BsonDocument>.Update.Set("class_id", 483);
```
Then we use the `UpdateOne()` method to make the changes. Note here that MongoDB will update at most one document using the `UpdateOne()` method. If no documents match the filter, no documents will be updated.
``` csp
collection.UpdateOne(filter, update);
```
### Array Changes
Not all changes are as simple as changing a single field. Let's use a different filter, one that selects a document with a particular score type of quizzes:
``` csp
var arrayFilter = Builders<BsonDocument>.Filter.Eq("student_id", 10000)
    & Builders<BsonDocument>.Filter.Eq("scores.type", "quiz");
```
Now if we want to make the change to the quiz score we can do that with `Set()` too, but to identify which particular element should be changed is a little different. We can use the positional $ operator to access the quiz `score` in the array. The $ operator on its own says "change the array element that we matched within the query" — the filter matches with `scores.type` equal to `quiz`, and that's the element that will get updated with the set.
``` csp
var arrayUpdate = Builders<BsonDocument>.Update.Set("scores.$.score", 84.92381029342834);
```
And again we use the `UpdateOne()` method to make the changes.
``` csp
collection.UpdateOne(arrayFilter , arrayUpdate);
```
### Additional Update Methods
If you've been reading along in this blog series I've mentioned that the C# driver supports both sync and async interactions with MongoDB. Performing data *Updates* is no different. There is also an `UpdateOneAsync()` method available. Additionally, for those cases in which multiple documents need to be updated at once, there are `UpdateMany()` or `UpdateManyAsync()` options. The `UpdateMany()` and `UpdateManyAsync()` methods match the documents in the `Filter` and will update *all* documents that match the filter requirements.
`Update` is an important operator in the CRUD world. Not being able to update things as they change would make programming incredibly difficult. Fortunately, C# and MongoDB continue to work well together to make the operations possible and easy to use. Whether it's updating a student's grade or updating a user's address, *Update* is here to handle the changes. The code for the *Create*, *Read*, and *Update* operations can be found in this gist.
We're winding down this MongoDB C# Quick Start CRUD operation series with only one operation left to explore, *Delete*.
>Remember, you can get started with an M0 cluster on MongoDB Atlas today. It's free forever and you'll be able to work alongside this blog series.
## Delete
To continue along with the student story, let's take a look at what would happen if the student dropped the course and had to have their grades deleted. Once again, the MongoDB driver for C# makes it a breeze. And, it provides both sync and async options for the operations.
### Deleting Data
The first step in the deletion process is to create a filter for the document(s) that need to be deleted. In the example for this series, I've been using a document with a `student_id` value of `10000` to work with. Since I'll only be deleting that single record, I'll use the `DeleteOne()` method (for async situations the `DeleteOneAsync()` method is available). However, when a filter matches more than a single document and all of them need to be deleted, the `DeleteMany()` or `DeleteManyAsync` method can be used.
Here's the record I want to delete.
``` json
{
{ "student_id", 10000 },
{ "scores", new BsonArray
{
new BsonDocument{ {"type", "exam"}, {"score", 88.12334193287023 } },
new BsonDocument{ {"type", "quiz"}, {"score", 84.92381029342834 } },
new BsonDocument{ {"type", "homework"}, {"score", 89.97929384290324 } },
new BsonDocument{ {"type", "homework"}, {"score", 82.12931030513218 } }
}
},
{ "class_id", 483}
};
```
I'll define the filter to match the `student_id` equal to `10000` document:
``` csp
var deleteFilter = Builders<BsonDocument>.Filter.Eq("student_id", 10000);
```
Assuming that we have a `collection` variable assigned to for the `grades` collection, we next pass the filter into the `DeleteOne()` method.
``` csp
collection.DeleteOne(deleteFilter);
```
If that command is run on the `grades` collection, the document with `student_id` equal to `10000` would be gone. Note here that `DeleteOne()` will delete the first document in the collection that matches the filter. In our example dataset, since there is only a single student with a `student_id` equal to `10000`, we get the desired results.
For the sake of argument, let's imagine that the rules for the educational institution are incredibly strict. If you get below a score of 60 on the first exam, you are automatically dropped from the course. We could use a `for` loop with `DeleteOne()` to loop through the entire collection, find a single document that matches an exam score of less than 60, delete it, and repeat. Recall that `DeleteOne()` only deletes the first document it finds that matches the filter. While this could work, it isn't very efficient as multiple calls to the database are made. How do we handle situations that require deleting multiple records then? We can use `DeleteMany()`.
### Multiple Deletes
Let's define a new filter to match the exam score being less than 60:
``` csp
var deleteLowExamFilter = Builders<BsonDocument>.Filter.ElemMatch<BsonValue>("scores",
    new BsonDocument { { "type", "exam" }, { "score", new BsonDocument { { "$lt", 60 } } }
});
```
With the filter defined, we pass it into the `DeleteMany()` method:
``` csp
collection.DeleteMany(deleteLowExamFilter);
```
With that command being run, all of the student record documents with low exam scores would be deleted from the collection.
Check out the gist for all of the CRUD commands wrapped into a single file.
## Wrap Up
This C# Quick Start series has covered the various CRUD Operations (Create, Read, Update, and Delete) operations in MongoDB using basic BSON Documents. We've seen how to use filters to match specific documents that we want to read, update, or delete. This series has, thus far, been a gentle introduction to C Sharp and MongoDB.
BSON Documents are not, however, the only way to be able to use MongoDB with C Sharp. In our applications, we often have classes defining objects. We can map our classes to BSON Documents to work with data as we would in code. I'll take a look at mapping in a future post. | md | {
"tags": [
"C#"
],
"pageDescription": "Learn how to perform CRUD operations using C Sharp for MongoDB databases.",
"contentType": "Quickstart"
} | MongoDB & C Sharp: CRUD Operations Tutorial | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/harnessing-natural-language-mongodb-queries-google-gemini | created | # Harnessing Natural Language for MongoDB Queries With Google Gemini
In the digital age, leveraging natural language for database queries represents a leap toward more intuitive data management. Vertex AI Extensions, currently in **private preview**, help in interacting with MongoDB using natural language. This tutorial introduces an approach that combines Google Gemini's advanced natural language processing with MongoDB, facilitated by Vertex AI Extensions. These extensions address key limitations of large language models (LLMs) by enabling real-time data querying and modification, which LLMs cannot do due to their static knowledge base post-training. By integrating MongoDB Atlas with Vertex AI Extensions, we offer a solution that enhances the accessibility and usability of the database.
MongoDB's dynamic schema, scalability, and comprehensive querying capabilities render it exemplary for Generative AI applications. It is adept at handling the versatile and unpredictable nature of data that these applications generate and use. From personalized content generation, where user data shapes content in real time, to sophisticated, AI-driven recommendation systems leveraging up-to-the-minute data for tailored suggestions, MongoDB stands out. Furthermore, it excels in complex data analysis, allowing AI tools to interact with vast and varied datasets to extract meaningful insights, showcasing its pivotal role in enhancing the efficiency and effectiveness of Generative AI applications.
## Natural language to MongoDB queries
Natural language querying represents a paradigm shift in data interaction, allowing users to retrieve information without the need for custom query languages. By integrating MongoDB with a system capable of understanding and processing natural language, we streamline database operations, making them more accessible to non-technical users.
### Solution blueprint
The solution involves a synergy of several components, including MongoDB, the Google Vertex AI SDK, Google Secrets Manager, and OpenAPI 3 specifications. Together, these elements create a robust framework that translates natural language queries into MongoDB Data API calls. In this solution, we have explored basic CRUD operations with Vertex AI Extensions. We are closely working with Google to enable vector search aggregations in the near future.
### Components involved
1. **MongoDB**: A versatile, document-oriented database that stores data in JSON-like formats, making it highly adaptable to various data types and structures
2. **Google Vertex AI SDK**: Facilitates the creation and management of AI and machine learning models, including the custom extension for Google Vertex AI
3. **Vertex AI Extensions:** Enhance LLMs by allowing them to interact with external systems in real-time, extending their capabilities beyond static knowledge
4. **Google Secrets Manager**: Securely stores sensitive information, such as MongoDB API keys, ensuring the solution's security and integrity
5. **OpenAPI 3 Specification for MongoDB Data API**: Defines a standard, language-agnostic interface to MongoDB that allows for both easy integration and clear documentation of the API's capabilities
### Description of the solution
The solution operates by converting natural language queries into parameters that the MongoDB Data API can understand. This conversion is facilitated by a custom extension developed using the Google Vertex AI extension SDK, which is then integrated with Gemini 1.0 Pro. The extension leverages OpenAPI 3 specifications to interact with MongoDB, retrieving data based on the user's natural language input. Google Secrets Manager plays a critical role in securely managing API keys required for MongoDB access, ensuring the solution's security.
## Prerequisites
1. A Google Cloud project: Use an existing project, or follow the Google Cloud documentation to create a new project.
2. If you are new to MongoDB Atlas, you can sign up to MongoDB either through the Google Cloud Marketplace or with the Atlas registration page.
3. Vertex AI Extensions are not publicly available. Please sign up for the Extensions Trusted Tester Program.
4. Basic knowledge of OpenAPI specifications and how to create them for APIs will be helpful.
5. You’ll need a Google Cloud Storage bucket for storing the OpenAPI specifications.
Before we begin, also make sure you:
**Enable MongoDB Data API**: To enable the Data API from the Atlas console landing page, open the Data API section from the side pane, enable the Data API, and copy the URL Endpoint as shown below.
**Store the Data API key in Google Secret Manager**: To create a new secret on the Google Cloud Console, navigate to Secrets Manager, and click on **CREATE SECRET**. Paste the secret created from MongoDB into the secret value field and click on **Create**.
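Before wiring the key into the extension, it can be useful to confirm that the Data API endpoint and the key work together. The sketch below calls the Data API's `findOne` action directly with the `requests` library; the cluster name is a placeholder, and the database and collection come from the `sample_mflix` dataset used later in this tutorial.
```python
# Optional sanity check: call the MongoDB Data API directly with the URL
# endpoint and the API key that the extension will use.
import requests

DATA_API_URL = "<your Data API URL endpoint>"  # copied from the Atlas Data API page
API_KEY = "<your MongoDB Data API key>"        # the value stored in Secret Manager

response = requests.post(
    f"{DATA_API_URL}/action/findOne",
    headers={"api-key": API_KEY, "Content-Type": "application/json"},
    json={
        "dataSource": "<your cluster name>",
        "database": "sample_mflix",
        "collection": "movies",
        "filter": {"title": "The Matrix"},
        "projection": {"title": 1, "year": 1},
    },
)
print(response.status_code, response.json())
```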
**OpenAPI specification for the MongoDB Data API**: This specification outlines how natural language queries will be translated into MongoDB operations.
## Create Vertex AI extensions
This tutorial uses the MongoDB default dataset from the **sample_mflix** database, **movies** collection. We will run all the below code on the Enterprise Colab notebook.
1. Vertex AI Extensions is a platform for creating and managing extensions that connect large language models to external systems via APIs. These external systems can provide LLMs with real-time data and perform data processing actions on their behalf.
```python
from google.colab import auth
auth.authenticate_user("GCP project id")
!gcloud config set project {"GCP project id"}
```
2. Install the required Python dependencies.
```python
!gsutil cp gs://vertex_sdk_private_releases/llm_extension/google_cloud_aiplatform-1.44.dev20240315+llm.extension-py2.py3-none-any.whl .
!pip install --force-reinstall --quiet google_cloud_aiplatform-1.44.dev20240315+llm.extension-py2.py3-none-any.whl[extension]
!pip install --upgrade --quiet google-cloud-resource-manager
!pip install --force-reinstall --quiet langchain==0.0.298
!pip install pytube
!pip install --upgrade google-auth
!pip install bigframes==0.26.0
```
3. Once the dependencies are installed, restart the kernel.
```python
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True) # Re-run the Env variable cell again after Kernel restart
```
4. Initialize the environment variables.
```python
import os
## This is just a sample values please replace accordingly to your project
# Setting up the GCP project
os.environ['PROJECT_ID'] = 'gcp project id' # GCP Project ID
os.environ['REGION'] = "us-central1" # Project Region
## GCS Bucket location
os.environ['STAGING_BUCKET'] = "gs://vertexai_extensions"
## Extension Config
os.environ['EXTENSION_DISPLAY_HOME'] = "MongoDb Vertex API Interpreter"
os.environ['EXTENSION_DESCRIPTION'] = "This extension makes api call to mongodb to do all crud operations"
## OPEN API SPec config
os.environ['MANIFEST_NAME'] = "mdb_crud_interpreter"
os.environ['MANIFEST_DESCRIPTION'] = "This extension makes api call to mongodb to do all crud operations"
os.environ['OPENAPI_GCS_URI'] = "gs://vertexai_extensions/mongodbopenapispec.yaml"
## API KEY secret location
os.environ['API_SECRET_LOCATION'] = "projects/787220387490/secrets/mdbapikey/versions/1"
##LLM config
os.environ['LLM_MODEL'] = "gemini-1.0-pro"
```
5. Download the OpenAPI specification from GitHub and upload the YAML file to the Google Cloud Storage bucket.
```python
from google.cloud import aiplatform
from google.cloud.aiplatform.private_preview import llm_extension

PROJECT_ID = os.environ['PROJECT_ID']
REGION = os.environ['REGION']
STAGING_BUCKET = os.environ['STAGING_BUCKET']
aiplatform.init(
project=PROJECT_ID,
location=REGION,
staging_bucket=STAGING_BUCKET,
)
```
6. To create the Vertex AI extension, run the below script. The manifest here is a structured JSON object containing several key components:
```python
mdb_crud = llm_extension.Extension.create(
    display_name = os.environ['EXTENSION_DISPLAY_HOME'],
    description = os.environ['EXTENSION_DESCRIPTION'],  # Optional
    manifest = {
        "name": os.environ['MANIFEST_NAME'],
        "description": os.environ['MANIFEST_DESCRIPTION'],
        "api_spec": {
            "open_api_gcs_uri": os.environ['OPENAPI_GCS_URI'],
        },
        "auth_config": {
            "apiKeyConfig": {
                "name": "api-key",
                "apiKeySecret": os.environ['API_SECRET_LOCATION'],
                "httpElementLocation": "HTTP_IN_HEADER"
            },
            "authType": "API_KEY_AUTH"
        },
    },
)
```
7. Validate the Created Extension, and print the Operation Schema and Parameters.
```python
print("Name:", mdb_crud.gca_resource.name)print("Display Name:", mdb_crud.gca_resource.display_name)print("Description:", mdb_crud.gca_resource.description)
import pprint
pprint.pprint(mdb_crud.operation_schemas())
```
## Extension in action
Once the extension is created, navigate to the Vertex AI UI and then to Vertex AI Extensions on the left pane.
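You can also exercise the extension programmatically. The call below is only a sketch: it assumes the private-preview SDK exposes an `execute()` method analogous to the public Vertex AI Extensions API, and the operation ID and parameters must match the `operationId`s defined in your OpenAPI specification.
```python
# Assumption: the private-preview extension object supports execute() like the
# public Vertex AI Extensions API. Operation ID and parameters are illustrative.
result = mdb_crud.execute(
    operation_id="findOne",  # must match an operationId in the OpenAPI spec
    operation_params={
        "dataSource": "<your cluster name>",
        "database": "sample_mflix",
        "collection": "movies",
        "filter": {"title": "The Matrix"},
    },
)
print(result)
```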
1. Get started with MongoDB Atlas on Google Cloud.
2. Connect models to APIs by using Vertex AI extensions.
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt5f6e6aa6cea13ba1/661471b70c47840e25a3437a/1.png
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltae31621998903a57/661471cd4180c1c4ede408cb/2.png
[3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt58474a722f262f1a/661471e40d99455ada032667/3.png
[4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt9cd6a0e4c6b2ed4c/661471f5da0c3a5c7ff77441/4.png
[5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt3ac5e7c88ed9d678/661472114180c1f08ee408d1/5.png
[6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt39e9b0f8b7040dab/661472241a0e49338babc9e1/6.png
[7]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltfc26acb17bfca16d/6614723b2b98e9f356100e6b/7.png
[8]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt7bdaa1e8a1cf5a51/661472517cacdc0fbad4a075/8.png
[9]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt272144a86fea7776/661472632b98e9562f100e6f/9.png
[10]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt62198b1ba0785a55/66147270be36f54af2d96927/10.png
[11]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt4a7e5371abe658e1/66147281be36f5ed61d9692b/11.png | md | {
"tags": [
"Atlas",
"Python",
"AI",
"Google Cloud"
],
"pageDescription": "By integrating MongoDB Atlas with Vertex AI Extensions, we offer a solution that enhances the accessibility and usability of the database.",
"contentType": "Article"
} | Harnessing Natural Language for MongoDB Queries With Google Gemini | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/products/atlas/stream-data-aws-glue | created | # Stream Data Into MongoDB Atlas Using AWS Glue
In this tutorial, you'll find a tangible showcase of how AWS Glue, Amazon Kinesis, and MongoDB Atlas seamlessly integrate, creating a streamlined data streaming solution alongside extract, transform, and load (ETL) capabilities. This repository also harnesses the power of AWS CDK to automate deployment across diverse environments, enhancing the efficiency of the entire process.
To follow along with this tutorial, you should have intermediate proficiency with AWS and MongoDB services.
## Architecture diagram
## Prerequisites
- AWS CLI installed and configured
- NVM/NPM installed and configured
- AWS CDK installed and configured
- MongoDB Atlas account, with the Organization set up
- Python packages
- Python3 - `yum install -y python3`
- Python Pip - `yum install -y python-pip`
- Virtualenv - `pip3 install virtualenv`
>This repo is developed taking us-east-1 as the default region. Please update the scripts to your specific region (if required). This repo will create a MongoDB Atlas project and a free-tier database cluster automatically. No need to create a database cluster manually. This repo is created for a demo purpose and IP access is not restricted (0.0.0.0/0). Ensure you strengthen the security by updating the relevant IP address (if required).
### Setting up the environment
#### Get the application code
```
git clone https://github.com/mongodb-partners/Stream_Data_into_MongoDB_AWS_Glue
cd kinesis-glue-aws-cdk
```
#### Prepare the dev environment to run AWS CDK
a. Set up the AWS Environment variable AWS Access Key ID, AWS Secret Access Key, and optionally, the AWS Session Token.
```
export AWS_ACCESS_KEY_ID="<your AWS access key>"
export AWS_SECRET_ACCESS_KEY="<your AWS secret access key>"
export AWS_SESSION_TOKEN="<your AWS session token>"
```
b. We will use CDK to make our deployments easier.
You should have npm pre-installed.
If you don’t have CDK installed:
`npm install -g aws-cdk`
Make sure you’re in the root directory.
`python3 -m venv .venv`
`source .venv/bin/activate`
`pip3 install -r requirements.txt`
> For development setup, use requirements-dev.txt.
c. Bootstrap the application with the AWS account.
`cdk bootstrap`
d. Set the ORG_ID as an environment variable in the .env file. All other parameters are set to default in global_args.py in the kinesis-glue-aws-cdk folder. MONGODB_USER and MONGODB_PASSWORD parameters are set directly in mongodb_atlas_stack.py and glue_job_stack.py
The below screenshot shows the location to get the Organization ID from MongoDB Atlas.
Deploy the provided CloudFormation template to create a new CloudFormation stack that creates the execution role.
Deploy the provided CloudFormation template to create a new CloudFormation stack for the default profile that all resources will attempt to use unless a different override is specified.
#### Profile secret stack
Refer to the troubleshooting guide to resolve some common issues encountered when using AWS CloudFormation/CDK with MongoDB Atlas resources.
## Useful commands
`cdk ls` lists all stacks in the app.
`cdk synth` emits the synthesized CloudFormation template.
`cdk deploy` deploys this stack to your default AWS account/region.
`cdk diff` compares the deployed stack with the current state.
`cdk docs` opens CDK documentation.
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt95d66e4812fd56ed/661e9e36e9e603e1aa392ef0/architecture-diagram.png
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt401e52a6c6ca2f6f/661ea008f5bcd1bf540c99bd/organization-settings.png
[3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltffa00dde10848a90/661ea1fe190a257fcfbc5b4e/cloudformation-stack.png
[4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltcc9e58d51f80206d/661ea245ad926e2701a4985b/registry-public-extensions.png
[5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltc5a089f4695e6897/661ea2c60d5626cbb29ccdfb/cluster-organization-settings.png
[6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt1a0bed34c4fbaa4f/661ea2ed243a4fa958838c90/edit-api-key.png
[7]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt697616247b2b7ae9/661ea35cdf48e744da7ea2bd/aws-cloud-formation-stack.png
[8]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltcba40f2c7e837849/661ea396f19ed856a2255c19/output-cloudformation-stack.png
[9]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt1beb900395ae5533/661ea3d6a7375b6a462d7ca2/creation-mongodb-atlas-cluster.png
[10]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt298896d4b3f0ecbb/661ea427e9e6030914392f35/output-cloudformation.png
[11]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt880cc957dbb069b9/661ea458a7375b89a52d7cb8/kinesis-stream.png
[12]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt4fd8a062813bee88/661ea4c0a3e622865f4be23e/output.png
[13]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt355c545ee15e9222/661ea50645b6a80f09390845/s3-buckets-created.png
[14]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltb4c1cdd506b90172/661ea54ba7375b40172d7cc5/output-2.png
[15]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blta91c3d8a8bd9d857/661ea57a243a4fe9d4838caa/aws-glue-studio.png
[16]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt1993ccc1ca70cd57/661ea5bc061bb15fd5421300/aws-glue-parameters.png | md | {
"tags": [
"Atlas",
"AWS"
],
"pageDescription": "In this tutorial, find a tangible showcase of how AWS Glue, Amazon Kinesis, and MongoDB Atlas seamlessly integrate.",
"contentType": "Tutorial"
} | Stream Data Into MongoDB Atlas Using AWS Glue | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/products/atlas/rag_with_claude_opus_mongodb | created | # How to Build a RAG System Using Claude 3 Opus And MongoDB
# Introduction
Anthropic, a provider of large language models (LLMs), recently introduced three state-of-the-art models classified under the Claude 3 model family. This tutorial utilises one of the Claude 3 models within a retrieval-augmented generation (RAG) system powered by the MongoDB vector database. Before diving into the implementation of the retrieval-augmented generation system, here's an overview of the latest Anthropic release:
**Introduction of the Claude 3 model family:**
- **Models**: The family comprises Claude 3 Haiku, Claude 3 Sonnet, and Claude 3 Opus, each designed to cater to different needs and applications.
- **Benchmarks**: The Claude 3 models have established new standards in AI cognition, excelling in complex tasks, comprehension, and reasoning.
**Capabilities and features:**
- **Multilingual and multimodal support**: Claude 3 models can generate code and text in a non-English language. The models are also multimodal, with the ability to understand images.
- **Long context window**: The Claude 3 model initially has a 200K token context window, with the ability to extend up to one million tokens for specific use cases.
- **Near-perfect recall**: The models demonstrate exceptional recall capabilities when analyzing extensive amounts of text.
**Design considerations:**
- **Balanced attributes**: The development of the Claude 3 models was guided by three main factors — speed, intelligence, and cost-effectiveness. This gives consumers a variety of models to leverage for different use cases requiring a tradeoff on one of the factors for an increase in another.
That’s a quick update on the latest Anthropic release. Although the Claude 3 model has a large context window, a substantial cost is still associated with every call that reaches the upper thresholds of the context window provided. RAG is a design pattern that leverages a knowledge source to provide additional information to LLMs by semantically matching the query input with data points within the knowledge store.
This tutorial implements a chatbot prompted to take on the role of a venture capital tech analyst. The chatbot is a naive RAG system with a collection of tech news articles acting as its knowledge source.
**What to expect from this tutorial:**
- Gain insights into constructing a retrieval-augmented generation system by integrating Claude 3 models with MongoDB to enhance query response accuracy.
- Follow a comprehensive tutorial on setting up your development environment, from installing necessary libraries to configuring a MongoDB database.
- Learn efficient data handling methods, including creating vector search indexes and preparing data for ingestion and query processing.
- Understand how to employ Claude 3 models within the RAG system for generating precise responses based on contextual information retrieved from the database.
**All implementation code presented in this tutorial is located in this GitHub repository**
-----
## Step 1: Library installation, data loading, and preparation
This section covers the steps taken to prepare the development environment source and clean the data utilised as the knowledge base for the venture capital tech analyst chatbot.
The following code installs all the required libraries:
```pip install pymongo datasets pandas anthropic openai```
**Below are brief explanations of the tools and libraries utilised within the implementation code:**
- **anthropic:** This is the official Python library for Anthropic that enables access to state-of-the-art language models. This library provides access to the Claude 3 family models, which can understand text and images.
- **datasets**: This library is part of the Hugging Face ecosystem. By installing datasets, we gain access to several pre-processed and ready-to-use datasets, which are essential for training and fine-tuning machine learning models or benchmarking their performance.
- **pandas**: This data science library provides robust data structures and methods for data manipulation, processing, and analysis.
- **openai**: This is the official Python client library for accessing OpenAI's embedding models.
- **pymongo**: PyMongo is a Python toolkit for MongoDB. It enables interactions with a MongoDB database.
Tools like Pyenv and Conda can create isolated development environments to separate package versions and dependencies across your projects. In these environments, you can install specific versions of libraries, ensuring that each project operates with its own set of dependencies without interference. The implementation code presented in this tutorial is best executed within a Colab or notebook environment.
After importing the necessary libraries, the subsequent steps in this section involve loading the dataset that serves as the foundational knowledge base for the RAG system and chatbot. This dataset contains a curated collection of tech news articles from HackerNoon, supplemented with an additional column of embeddings. These embeddings were created by processing the descriptions of each article in the dataset. The embeddings for this dataset were generated using OpenAI’s embedding model "text-embedding-3-small," with an embedding dimension of 256. This information on the embedding model and dimension is crucial when handling and embedding user queries in later processes.
The tech-news-embedding dataset contains more than one million data points, mirroring the scale of data typically encountered in a production setting. However, for this particular application, only 228,012 data points are utilized.
```
import os
import requests
from io import BytesIO
import pandas as pd
from google.colab import userdata
def download_and_combine_parquet_files(parquet_file_urls, hf_token):
"""
Downloads Parquet files from the provided URLs using the given Hugging Face token,
and returns a combined DataFrame.
Parameters:
- parquet_file_urls: List of strings, URLs to the Parquet files.
- hf_token: String, Hugging Face authorization token.
Returns:
- combined_df: A pandas DataFrame containing the combined data from all Parquet files.
"""
headers = {"Authorization": f"Bearer {hf_token}"}
    all_dataframes = []
for parquet_file_url in parquet_file_urls:
response = requests.get(parquet_file_url, headers=headers)
if response.status_code == 200:
parquet_bytes = BytesIO(response.content)
df = pd.read_parquet(parquet_bytes)
all_dataframes.append(df)
else:
print(f"Failed to download Parquet file from {parquet_file_url}: {response.status_code}")
if all_dataframes:
combined_df = pd.concat(all_dataframes, ignore_index=True)
return combined_df
else:
print("No dataframes to concatenate.")
return None
```
The code snippet above executes the following steps:
**Import necessary libraries**:
- `os` for interacting with the operating system
- `requests` for making HTTP requests
- `BytesIO` from the io module to handle bytes objects like files in memory
- `pandas` (as pd) for data manipulation and analysis
- `userdata` from google.colab to enable access to environment variables stored in Google Colab secrets
**Function definition**: The `download_and_combine_parquet_files` function is defined with two parameters:
- `parquet_file_urls`: a list of URLs as strings, each pointing to a Parquet file that contains a sub-collection of the tech-news-embedding dataset
- `hf_token`: a string representing a Hugging Face authorization token; access tokens can be created or copied from the Hugging Face platform
**Download and read Parquet files**: The function iterates over each URL in parquet\_file\_urls. For each URL, it:
- Makes a GET request using the requests.get method, passing the URL and the headers for authorization.
- Checks if the response status code is 200 (OK), indicating the request was successful.
- Reads (if successful) the content of the response into a BytesIO object (to handle it as a file in memory), then uses pandas.read\_parquet to read the Parquet file from this object into a Pandas DataFrame.
- Appends the DataFrame to the list `all_dataframes`.
**Combine DataFrames**: After downloading and reading all Parquet files into DataFrames, there’s a check to ensure that `all_dataframes` is not empty. If there are DataFrames to work with, then all DataFrames are concatenated into a single DataFrame using pd.concat, with `ignore_index=True` to reindex the new combined DataFrame. This combined DataFrame is the overall process output in the `download_and_combine_parquet_files` function.
Below is a list of the Parquet files required for this tutorial. The complete list of all files is located on Hugging Face. Each Parquet file represents approximately 45,000 data points.
```
# Commented out other parquet files below to reduce the amount of data ingested.
# One parquet file has an estimated 50,000 data points
parquet_files = [
"https://huggingface.co/api/datasets/AIatMongoDB/tech-news-embeddings/parquet/default/train/0000.parquet",
# "https://huggingface.co/api/datasets/AIatMongoDB/tech-news-embeddings/parquet/default/train/0001.parquet",
# "https://huggingface.co/api/datasets/AIatMongoDB/tech-news-embeddings/parquet/default/train/0002.parquet",
# "https://huggingface.co/api/datasets/AIatMongoDB/tech-news-embeddings/parquet/default/train/0003.parquet",
# "https://huggingface.co/api/datasets/AIatMongoDB/tech-news-embeddings/parquet/default/train/0004.parquet",
# "https://huggingface.co/api/datasets/AIatMongoDB/tech-news-embeddings/parquet/default/train/0005.parquet",
]
hf_token = userdata.get("HF_TOKEN")
combined_df = download_and_combine_parquet_files(parquet_files, hf_token)
```
In the code snippet above, a subset of the tech-news-embeddings dataset is grouped into a single DataFrame, which is then assigned to the variable `combined_df`.
As a final phase in data preparation, the code snippet below shows the step to remove the `_id` column from the grouped dataset, as it is unnecessary for subsequent steps in this tutorial. Additionally, the data within the embedding column for each data point is converted from a numpy array to a Python list to prevent errors related to incompatible data types during the data ingestion.
```
# Remove the _id column from the initial dataset
combined_df = combined_df.drop(columns=['_id'])
# Convert each numpy array in the 'embedding' column to a normal Python list
combined_df['embedding'] = combined_df['embedding'].apply(lambda x: x.tolist())
```
## Step 2: Database and collection creation
An approach to composing an AI stack that handles large data volumes and reduces data silos is to use the same database for both your operational and vector data. MongoDB acts as both an operational and a vector database, offering a solution that efficiently stores, queries, and retrieves vector embeddings.
**To create a new MongoDB database, set up a database cluster:**
1. Register for a free MongoDB Atlas account, or existing users can sign into MongoDB Atlas.
1. Select the “Database” option on the left-hand pane, which will navigate to the Database Deployment page with a deployment specification of any existing cluster. Create a new database cluster by clicking on the **+Create** button.
1. For assistance with database cluster setup and obtaining the unique resource identifier (URI), refer to our guide for setting up a MongoDB cluster and getting your connection string.
***Note: Don’t forget to whitelist the IP for the Python host or 0.0.0.0/0 for any IP when creating proof of concepts.***
At this point, you have created a database cluster, obtained a connection string to the database, and placed a reference to the connection string within the development environment. The next step is to create a database and collection through the MongoDB Atlas user interface.
Once you have created a cluster, navigate to the cluster page and create a database and collection within the MongoDB Atlas cluster by clicking **+ Create Database**. The database will be named `tech_news` and the collection will be named `hacker_noon_tech_news`.
![Creation of database and collections][1]

## Step 3: Vector search index creation
When creating the vector search index using the JSON editor on MongoDB Atlas, ensure your vector search index is named **vector_index** and that the index definition is as follows:
```
{
  "fields": [
    {
      "numDimensions": 256,
      "path": "embedding",
      "similarity": "cosine",
      "type": "vector"
    }
  ]
}
```
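If you prefer to manage indexes in code, the same definition can be created programmatically. The sketch below is an alternative to the Atlas UI and rests on two assumptions: a recent PyMongo version (4.7 or later) and the `collection` handle that is created in the data ingestion step that follows.

```
from pymongo.operations import SearchIndexModel

# Sketch: create the same "vector_index" with the driver instead of the Atlas UI.
vector_index_model = SearchIndexModel(
    definition={
        "fields": [
            {
                "type": "vector",
                "path": "embedding",
                "numDimensions": 256,
                "similarity": "cosine",
            }
        ]
    },
    name="vector_index",
    type="vectorSearch",
)
collection.create_search_index(model=vector_index_model)
```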
## Step 4: Data ingestion
To ingest data into the MongoDB database created in the previous steps, the following operations have to be carried out:
- Connect to the database and collection.
- Clear out the collection of any existing records.
- Convert the Pandas DataFrame of the dataset into dictionaries before ingestion.
- Ingest dictionaries into MongoDB using a batch operation.
This tutorial requires the cluster's URI. Grab the URI and copy it into the Google Colab Secrets environment in a variable named `MONGO_URI`, or place it in a .env file or equivalent.
```
import pymongo
from google.colab import userdata
def get_mongo_client(mongo_uri):
"""Establish connection to the MongoDB."""
try:
client = pymongo.MongoClient(mongo_uri)
print("Connection to MongoDB successful")
return client
except pymongo.errors.ConnectionFailure as e:
print(f"Connection failed: {e}")
return None
mongo_uri = userdata.get('MONGO_URI')
if not mongo_uri:
print("MONGO_URI not set in environment variables")
mongo_client = get_mongo_client(mongo_uri)
DB_NAME="tech_news"
COLLECTION_NAME="hacker_noon_tech_news"
db = mongo_client[DB_NAME]
collection = db[COLLECTION_NAME]
```
The code snippet above uses PyMongo to create a MongoDB client object, representing the connection to the cluster and enabling access to its databases and collections. The variables `DB_NAME` and `COLLECTION_NAME` are given the names set for the database and collection in the previous step. If you’ve chosen different database and collection names, ensure they are reflected in the implementation code.
The code snippet below guarantees that the current database collection is empty by executing the `delete_many()` operation on the collection.
```
# To ensure we are working with a fresh collection
# delete any existing records in the collection
collection.delete_many({})
```
Ingesting data into a MongoDB collection from a pandas DataFrame is a straightforward process that can be efficiently accomplished by converting the DataFrame into dictionaries and then utilising the `insert_many` method on the collection to pass the converted dataset records.
```
# Data Ingestion
combined_df_json = combined_df.to_dict(orient='records')
collection.insert_many(combined_df_json)
```
The data ingestion process should take less than a minute. When it completes, `insert_many` returns the `_id` values of the ingested documents.
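If you later decide to ingest a larger slice of the one-million-document dataset, a minimal sketch like the one below keeps memory usage predictable by inserting in fixed-size batches. The batch size is an assumption; tune it to your environment.

```
# Sketch: batched ingestion for larger subsets of the dataset.
BATCH_SIZE = 10_000  # assumed value; adjust to your memory and network constraints

records = combined_df.to_dict(orient="records")
for start in range(0, len(records), BATCH_SIZE):
    batch = records[start:start + BATCH_SIZE]
    # ordered=False lets MongoDB continue past individual failed inserts
    collection.insert_many(batch, ordered=False)
    print(f"Inserted documents {start} to {start + len(batch) - 1}")
```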
## Step 5: Vector search
This section showcases the creation of a custom vector search function that accepts a user query, which corresponds to the text entered into the chatbot. The function also takes a second parameter, `collection`, which points to the database collection containing the records against which the vector search operation should be conducted.
The `vector_search` function produces a vector search result derived from a series of operations outlined in a MongoDB aggregation pipeline. This pipeline includes the `$vectorSearch` and `$project` stages and performs queries based on the vector embeddings of user queries. It then formats the results, omitting any record attributes unnecessary for subsequent processes.
```
def vector_search(user_query, collection):
"""
Perform a vector search in the MongoDB collection based on the user query.
Args:
user_query (str): The user's query string.
collection (MongoCollection): The MongoDB collection to search.
Returns:
list: A list of matching documents.
"""
# Generate embedding for the user query
query_embedding = get_embedding(user_query)
if query_embedding is None:
return "Invalid query or embedding generation failed."
# Define the vector search pipeline
pipeline = [
{
"$vectorSearch": {
"index": "vector_index",
"queryVector": query_embedding,
"path": "embedding",
"numCandidates": 150, # Number of candidate matches to consider
"limit": 5 # Return top 5 matches
}
},
{
"$project": {
"_id": 0, # Exclude the _id field
"embedding": 0, # Exclude the embedding field
"score": {
"$meta": "vectorSearchScore" # Include the search score
}
}
}
]
# Execute the search
results = collection.aggregate(pipeline)
return list(results)
```
The code snippet above conducts the following operations to allow semantic search for tech news articles:
1. Define the `vector_search` function that takes a user's query string and a MongoDB collection as inputs and returns a list of documents that match the query based on vector similarity search.
1. Generate an embedding for the user's query by calling the `get_embedding` function (defined in the next step), which converts the query string into a vector representation.
1. Construct a pipeline for MongoDB's aggregate function, incorporating two main stages: `$vectorSearch` and `$project`.
1. The `$vectorSearch` stage performs the actual vector search. The index field specifies the vector index to use, and this should correspond to the name entered in the vector search index definition in previous steps. The queryVector field takes the embedding representation of the user query. The path field corresponds to the document field containing the embeddings. The numCandidates field specifies the number of candidate documents to consider, and limit sets the number of results to return.
1. The `$project` stage formats the results to exclude the `_id` and the `embedding` field.
1. The aggregate executes the defined pipeline to obtain the vector search results. The final operation converts the returned cursor from the database into a list.
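As a quick illustration, here is a sketch of how the function can be called once `collection` and `get_embedding` (defined in the next step) are in scope. The query string is only an example.

```
# Sketch: sample call to vector_search with an illustrative query.
sample_results = vector_search("AI startups building developer tooling", collection)
for doc in sample_results:
    # Each result keeps its article fields plus the vector search score.
    print(doc.get("title", "N/A"), "-", round(doc.get("score", 0), 4))
```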
## Step 6: Handling user queries with Claude 3 models
The final section of the tutorial outlines the sequence of operations performed as follows:
- Accept a user query in the form of a string.
- Utilize the OpenAI embedding model to generate embeddings for the user query.
- Load the Anthropic Claude 3— specifically, the ‘claude-3-opus-20240229’ model — to serve as the base model, which is the large language model for the RAG system.
- Execute a vector search using the embeddings of the user query to fetch relevant information from the knowledge base, which provides additional context for the base model.
- Submit both the user query and the gathered additional information to the base model to generate a response.
The code snippet below focuses on generating new embeddings using OpenAI's embedding model. An OpenAI API key is required to ensure the successful completion of this step. More details on OpenAI's embedding models can be found on the official site.
An important note is that the dimensions of the user query embedding must match the dimensions set in the vector search index definition on MongoDB Atlas.
```
import openai
from google.colab import userdata
openai.api_key = userdata.get("OPENAI_API_KEY")
EMBEDDING_MODEL = "text-embedding-3-small"
def get_embedding(text):
"""Generate an embedding for the given text using OpenAI's API."""
# Check for valid input
if not text or not isinstance(text, str):
return None
try:
# Call OpenAI API to get the embedding
        embedding = openai.embeddings.create(input=text, model=EMBEDDING_MODEL, dimensions=256).data[0].embedding
return embedding
except Exception as e:
print(f"Error in get_embedding: {e}")
return None
```
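A quick sanity check, assuming the `OPENAI_API_KEY` secret is configured, is to confirm that the generated embeddings have the 256 dimensions expected by the vector search index:

```
# Sketch: verify that query embeddings match the index dimensions.
sample_embedding = get_embedding("Impact of AI chips on cloud infrastructure costs")
print(len(sample_embedding))  # expected output: 256
```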
The next step in this section is to import the Anthropic library and load the client to access Anthropic’s methods for handling messages and accessing Claude models. Ensure you obtain an Anthropic API key, located within the settings page on the official Anthropic website.
```
import anthropic
client = anthropic.Client(api_key=userdata.get("ANTHROPIC_API_KEY"))
```
The following code snippet introduces the function `handle_user_query`, which serves two primary purposes: It leverages a previously defined custom vector search function to query and retrieve relevant information from a MongoDB database, and it utilizes the Anthropic API via a client object to use one of the Claude 3 models for query response generation.
```
def handle_user_query(query, collection):
get_knowledge = vector_search(query, collection)
search_result = ''
for result in get_knowledge:
search_result += (
f"Title: {result.get('title', 'N/A')}, "
f"Company Name: {result.get('companyName', 'N/A')}, "
f"Company URL: {result.get('companyUrl', 'N/A')}, "
f"Date Published: {result.get('published_at', 'N/A')}, "
f"Article URL: {result.get('url', 'N/A')}, "
f"Description: {result.get('description', 'N/A')}, \n"
)
response = client.messages.create(
model="claude-3-opus-20240229",
max_tokens=1024,
system="You are Venture Captital Tech Analyst with access to some tech company articles and information. Use the information you are given to provide advice.",
messages=
{"role": "user", "content": "Answer this user query: " + query + " with the following context: " + search_result}
]
)
return (response.content[0].text), search_result
```
This function begins by executing the vector search against the specified MongoDB collection based on the user's input query. It then proceeds to format the retrieved information for further processing. Subsequently, the function invokes the Anthropic API, directing the request to a specific Claude 3 model.
Below is a more detailed description of the operations in the code snippet above:
1. **Vector search execution**: The function begins by calling `vector_search` with the user's query and a specified collection as arguments. This performs a search within the collection, leveraging vector embeddings to find relevant information related to the query.
1. **Compile search results**: `search_result` is initialized as an empty string to aggregate information from the search. The function iterates over the results returned by `vector_search` and formats each item's details (title, company name, URL, publication date, article URL, and description) into a human-readable string, appending this information to `search_result` with a newline character `\n` at the end of each entry.
1. **Generate response using Anthropic client**: The function then constructs a request to the Anthropic API (through a client object, presumably an instance of the Anthropic client class created earlier). It specifies:
    - The model to use (`claude-3-opus-20240229`), which indicates a specific version of the Claude 3 model.
    - The maximum token limit for the generated response (`max_tokens=1024`).
    - A system description that guides the model to behave as a "Venture Capital Tech Analyst" with access to tech company articles and information, using this as context to advise.
    - The actual message for the model to process, which combines the user query with the aggregated search results as context.
1. **Return the generated response and search results**: It extracts and returns the response text from the first item in the response's content alongside the compiled search results.
```
# Conduct query with retrieval of sources
query = "Give me the best tech stock to invest in and tell me why"
response, source_information = handle_user_query(query, collection)
print(f"Response: {response}")
print(f"Source Information: \\n{source_information}")
```
The final step in this tutorial is to initialize the query, pass it into the `handle_user_query` function, and print the response returned.
1. **Initialise query**: The variable `query` is assigned a string value containing the user's request: "Give me the best tech stock to invest in and tell me why." This serves as the input for the `handle_user_query` function.
1. **Execute `handle_user_query` function**: The function takes two parameters — the user's query and a reference to the collection from which information will be retrieved. It performs a vector search to find relevant documents within the collection and formats the results for further use. It then queries the Anthropic Claude 3 model, providing it with the query and the formatted search results as context to generate an informed response.
1. **Retrieve response and source information**: The function returns two pieces of data: response and source_information. The response contains the model-generated answer to the user's query, while source_information includes detailed data from the collection used to inform the response.
1. **Display results**: Finally, the code prints the response from the Claude 3 model, along with the source information that contributed to this response.
![Response from Claude 3 Opus][2]
Claude 3 models possess what seems like impressive reasoning capabilities. From the response in the screenshot, it is able to consider expressive language as a factor in its decision-making and also provide a structured approach to its response.
More impressively, it gives a reason as to why other options in the search results are not candidates for the final selection. And if you notice, it factored the date into its selection as well.
Obviously, this is not going to replace any human tech analyst soon, but with a more extensive knowledge base and real-time data, this could very quickly become a co-pilot system for VC analysts.
**Please remember that Opus's response is not financial advice and is only shown for illustrative purposes**.
----------
# Conclusion
This tutorial has presented the essential steps of setting up your development environment, preparing your dataset, and integrating state-of-the-art language models with a powerful database system.
By leveraging the unique strengths of Claude 3 models and MongoDB, we've demonstrated how to create a RAG system that not only responds accurately to user queries but does so by understanding the context in depth. The impressive performance of the RAG system is a result of Opus's parametric knowledge and the semantic matching capabilities facilitated by vector search.
Building a RAG system with the latest Claude 3 models and MongoDB sets up an efficient AI infrastructure. It offers cost savings and low latency by combining operational and vector databases into one solution. The functionalities of the naive RAG system presented in this tutorial can be extended to do the following:
- Get real-time news on the company returned from the search results.
- Get additional information by extracting text from the URLs provided in accompanying search results.
- Store additional metadata before data ingestion for each data point.
Some of the proposed functionality extensions can be achieved by utilising Anthropic function calling capabilities or leveraging search APIs. The key takeaway is that whether you aim to develop a chatbot, a recommendation system, or any application requiring nuanced AI responses, the principles and techniques outlined here will serve as a valuable starting point.
Want to leverage another state-of-the-art model for your RAG system? Check out our article that uses Google’s Gemma alongside open-source embedding models provided by Hugging Face.
----------
# FAQs
**1. What are the Claude 3 models, and how do they enhance a RAG system?**
The Claude 3 models (Haiku, Sonnet, Opus) are state-of-the-art large language models developed by Anthropic. They offer advanced features like multilingual support, multimodality, and long context windows up to one million tokens. These models are integrated into RAG systems to leverage their ability to understand and generate text, enhancing the system's response accuracy and comprehension.
**2. Why is MongoDB chosen for a RAG system powered by Claude 3?**
MongoDB is utilized for its dual capabilities as an operational and a vector database. It efficiently stores, queries, and retrieves vector embeddings, making it ideal for managing the extensive data volumes and real-time processing demands of AI applications like a RAG system.
**3. How does the vector search function work within the RAG system?**
The vector search function in the RAG system conducts a semantic search against a MongoDB collection using the vector embeddings of user queries. It relies on a MongoDB aggregation pipeline, including the $vectorSearch and $project stages, to find and format the most relevant documents based on query similarity.
**4. What is the significance of data embeddings in the RAG system?**
Data embeddings are crucial for matching the semantic content of user queries with the knowledge stored in the database. They transform text into a vector space, enabling the RAG system to perform vector searches and retrieve contextually relevant information to inform the model's responses.
**5. How does the RAG system handle user queries with Claude 3 models?**
The RAG system processes user queries by generating embeddings using an embedding model (e.g., OpenAI's "text-embedding-3-small") and conducting a vector search to fetch relevant information. This information and the user query are passed to a Claude 3 model, which generates a detailed and informed response based on the combined context.
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt793687aeea00c719/65e8ff7f08a892d1c1d52824/Creation_of_database_and_collections.png
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt6cc891ae6c3fdbc1/65e90287a8b0116c485c79ce/Screenshot_2024-03-06_at_23.55.28.png | md | {
"tags": [
"Atlas",
"Python",
"AI",
"Pandas"
],
"pageDescription": "This guide details creating a Retrieval-Augmented Generation (RAG) system using Anthropic's Claude 3 models and MongoDB. It covers environment setup, data preparation, and chatbot implementation as a tech analyst. Key steps include database creation, vector search index setup, data ingestion, and query handling with Claude 3 models, emphasizing accurate, context-aware responses.\n\n\n\n\n\n",
"contentType": "Tutorial"
} | How to Build a RAG System Using Claude 3 Opus And MongoDB | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/mongodb-performance-over-rdbms | created | # MongoDB's Performance over RDBMS
Someone somewhere might be wondering why we get superior performance with MongoDB over RDBMS databases. What is the secret behind it? I too had this question until I learned about the internal workings of MongoDB, especially data modeling, advanced index methods, and finally, how the WiredTiger storage engine works.
I wanted to share my learnings and experiences to reveal the secret of it so that it might be helpful to you, too.
## Data modeling: embedded structure (no JOINs)
MongoDB uses a document-oriented data model, storing data in JSON-like BSON documents. This allows for efficient storage and retrieval of complex data structures.
MongoDB's model can lead to simpler and more performant queries compared to the normalization requirements of RDBMS.
The initial phase of enhancing performance involves comprehending the query behaviors of your application. This understanding enables you to tailor your data model and choose suitable indexes to align with these patterns effectively.
Always keep MongoDB's maximum document size (16 MB) in mind so you can avoid embedding images, audio, and video files in the same collection.
Customizing your data model to match the query patterns of your application leads to streamlined queries, heightened throughput for insert and update operations, and better workload distribution across a sharded cluster.
While MongoDB offers a flexible schema, overlooking schema design is not advisable. Although you can adjust your schema as needed, adhering to schema design best practices from the outset of your project can prevent the need for extensive refactoring down the line.
A major advantage of BSON documents is that you have the flexibility to model your data any way your application needs. The inclusion of arrays and subdocuments within documents provides significant versatility in modeling intricate data relationships. But you can also model flat, tabular, and columnar structures, simple key-value pairs, text, geospatial and time-series data, or the nodes and edges of connected graph data structures. The ideal schema design for your application will depend on its specific query patterns.
### How is embedding within collections in MongoDB different from storing in multiple tables in RDBMS?
An example of a best practice for an address/contact book is to store group and portrait information in separate collections, because they can grow large (groups due to many-to-many relations, portraits due to image size) and may hit the 16 MB document size limit.
Embedding data in a single collection in MongoDB (or at least minimizing the number of collections), versus storing it across multiple tables in an RDBMS, offers significant performance improvements because of data locality, which reduces the number of data seeks.
Data locality is the major reason why MongoDB data seeks are faster.
**Difference: tabular vs document**
| | Tabular | MongoDB |
| --------------------------- | ----------------------------- | --------------- |
| Steps to create the model | 1. Define schema 2. Develop app and queries | 1. Identify the queries 2. Define schema |
| Initial schema | 3rd normal form. One possible solution | Many possible solutions |
| Final schema | Likely denormalized | Few changes |
| Schema evolution | Difficult and not optimal. Likely downtime | Easy. No downtime |
| Performance | Mediocre | Optimized |
## WiredTiger’s cache and compression
WiredTiger is an open-source, high-performance storage engine for MongoDB. WiredTiger provides features such as document-level concurrency control, compression, and support for both in-memory and on-disk storage.
**Cache:**
WiredTiger cache architecture: WiredTiger utilizes a sophisticated caching mechanism to efficiently manage data in memory. The cache is used to store frequently accessed data, reducing the need to read from disk and improving overall performance.
Memory management: The cache dynamically manages memory usage based on the workload. It employs techniques such as eviction (removing less frequently used data from the cache) and promotion (moving frequently used data to the cache) to optimize memory utilization.
Configuration: WiredTiger allows users to configure the size of the cache based on their system's available memory and workload characteristics. Properly sizing the cache is crucial for achieving optimal performance.
Durability: WiredTiger ensures durability by flushing modified data from the cache to disk. This process helps maintain data consistency in case of a system failure.
**Compression**:
Data compression: WiredTiger supports data compression to reduce the amount of storage space required. Compressing data can lead to significant disk space savings and improved I/O performance.
Configurable compression: Users can configure compression options based on their requirements. WiredTiger supports different compression algorithms, allowing users to choose the one that best suits their workload and performance goals.
Trade-offs: While compression reduces storage costs and can improve read/write performance, it may introduce additional CPU overhead during compression and decompression processes. Users need to carefully consider the trade-offs and select compression settings that align with their application's needs.
Compatibility: WiredTiger's compression features are transparent to applications and don't require any changes to the application code. The engine handles compression and decompression internally.
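As a hedged illustration of that configurability, compression can be tuned per collection at creation time by passing a storage engine configuration. The sketch below assumes a PyMongo `db` handle and a MongoDB build that includes the zstd compressor; the collection name is only an example.

```
# Sketch: override the default block compressor for a single collection.
db.create_collection(
    "sensor_events",  # illustrative collection name
    storageEngine={"wiredTiger": {"configString": "block_compressor=zstd"}},
)
```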
Overall, WiredTiger's cache and compression features contribute to its efficiency and performance characteristics. By optimizing memory usage and providing configurable compression options, WiredTiger aims to meet the diverse needs of MongoDB users in terms of both speed and storage efficiency.
Some RDBMS systems also employ caching, but the performance benefits may vary based on the database system and configuration.
### Advanced indexing capabilities
MongoDB, being a NoSQL database, offers advanced indexing capabilities to optimize query performance and support efficient data retrieval. Here are some of MongoDB's advanced indexing features:
**Compound indexes**
MongoDB allows you to create compound indexes on multiple fields. A compound index is an index on multiple fields in a specific order. This can be useful for queries that involve multiple criteria.
The order of fields in a compound index is crucial. MongoDB can use the index efficiently for queries that match the index fields from left to right.
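As a sketch, assuming a PyMongo `db` handle and illustrative field names, a compound index supporting an equality filter plus a sort could look like this:

```
from pymongo import ASCENDING, DESCENDING

# Sketch: compound index on status (equality) then created_at (sort).
db.orders.create_index([("status", ASCENDING), ("created_at", DESCENDING)])

# A query shaped like this can be served efficiently by the index above.
db.orders.find({"status": "shipped"}).sort("created_at", DESCENDING)
```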
**Multikey indexes**
MongoDB supports indexing on arrays. When you index an array field, MongoDB creates separate index entries for each element of the array.
Multikey indexes are helpful when working with documents that contain arrays, and you need to query based on elements within those arrays.
**Text indexes**
MongoDB provides text indexes to support full-text search. Text indexes tokenize and stem words, allowing for more flexible and language-aware text searches.
Text indexes are suitable for scenarios where users need to perform text search operations on large amounts of textual data.
**Geospatial indexes**
MongoDB supports geospatial indexes to optimize queries that involve geospatial data. These indexes can efficiently handle queries related to location-based information.
Geospatial indexes support 2D and 3D indexing, allowing for the representation of both flat and spherical geometries.
**Wildcard indexes**
MongoDB supports wildcard indexes, which index all fields, or a filtered subset of fields, matching a wildcard pattern. This is useful when documents contain arbitrary or unpredictable field names that you still need to query efficiently.
**Partial indexes**
Partial indexes allow you to index only the documents that satisfy a specified filter expression. This can be beneficial when you have a large collection but want to create an index for a subset of documents that meet specific criteria.
**Hashed indexes**
Hashed indexes are useful for sharding scenarios. MongoDB automatically hashes the indexed field's values and distributes the data across the shards, providing a more even distribution of data and queries.
**TTL (time-to-live) indexes**
TTL indexes allow you to automatically expire documents from a collection after a certain amount of time. This is helpful for managing data that has a natural expiration, such as session information or log entries.
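To make the last two index types concrete, here is a minimal sketch, again assuming a PyMongo `db` handle and hypothetical collections and fields:

```
from pymongo import ASCENDING

# Sketch: partial index that only covers documents for premium-plan users.
db.users.create_index(
    [("email", ASCENDING)],
    partialFilterExpression={"plan": "premium"},
)

# Sketch: TTL index that expires session documents one hour after last_seen.
db.sessions.create_index([("last_seen", ASCENDING)], expireAfterSeconds=3600)
```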
These advanced indexing capabilities in MongoDB provide developers with powerful tools to optimize query performance for a wide range of scenarios and data structures. Properly leveraging these features can significantly enhance the efficiency and responsiveness of MongoDB databases.
In conclusion, the superior performance of MongoDB over traditional RDBMS databases stems from its adept handling of data modeling, advanced indexing methods, and the efficiency of the WiredTiger storage engine. By tailoring your data model to match application query patterns, leveraging MongoDB's optimized document structure, and harnessing advanced indexing capabilities, you can achieve enhanced throughput and more effective workload distribution.
Remember, while MongoDB offers flexibility in schema design, it's crucial not to overlook the importance of schema design best practices from the outset of your project. This proactive approach can save you from potential refactoring efforts down the line.
For further exploration and discussion on MongoDB and database optimization strategies, consider joining our Developer Community. There, you can engage with fellow developers, share insights, and stay updated on the latest developments in database technology.
Keep optimizing and innovating with MongoDB to unlock the full potential of your applications.
| md | {
"tags": [
"MongoDB"
],
"pageDescription": "Guest Author Srinivas Mutyala discusses the reasons for MongoDB's improved performance over traditional RDMBS.",
"contentType": "Article"
} | MongoDB's Performance over RDBMS | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/cpp/adventures-iot-project-intro | created | # Plans and Hardware Selection for a Hands-on Implementation of IoT with MCUs and MongoDB
Do you have a cool idea for a device that you may consider producing and selling? Would you want to have some new functionality implemented for your smart home? Do you want to understand how IoT works with a real example that we can work with from beginning to end? Are you a microcontroller aficionado and want to find some implementation alternatives? If the answer is yes to any of these questions, welcome to this series of articles and videos.
# Table of Contents
1. The idea and the challenge
2. The vision, the mission and the product
3. Rules of engagement
4. The plan
5. Hardware selection
1. Raspberry Pi Pico W
2. Micro:bit
3. Adafruit Circuit Playground Bluefruit
4. Adafruit Feather nRF52840 Sense
5. Espressif ESP32-C6-DevKitC-1
6. Recap and future content
# The idea and the challenge
For a while, I have been implementing many automations at home using third-party hardware and software products. This brings me a lot of joy and, in most cases, improves the environment my family and I live in. In the past, this used to be harder, but nowadays, the tools and their compatibility have improved greatly. You can start with something as trivial but useful as turning on the garden lights right after sunset and off at the time that you usually go to bed. But you can easily go much further.
For example, I have a door sensor that is installed in my garage door that triggers a timer when the door is opened and turns a light red after six minutes. This simple application of domotics has helped me to avoid leaving the door open countless times.
All the fun, and sometimes even frustration, that I have experienced implementing these functionalities, together with the crazy ideas that I sometimes have for creating and building things, have made me take a step forward and accept a new challenge in this area. So I did some thinking and came up with a project that combined different requirements that made it suitable to be used as a proof of concept and something that I could share with you.
Let me describe the main characteristics of this project:
- It should be something that a startup could do (or, at least, close enough.) So, I will share the vision and the mission of that wannabe startup. But most importantly, I will introduce the concept for our first product. You don't have to *buy* the idea, nor will I spend time trying to demonstrate that there is a suitable business need for that, in other words, this is a BPNI (business plan not included) project.
- The idea should involve something beyond just a plain microcontroller (MCU). I would like to have some of those, maybe even in different rooms, and have their data collected in some way.
- The data will be collected wirelessly. Having several sensors in different places, the wired option isn't very appealing. I will opt for communications implemented over radio frequencies: Bluetooth and WiFi. I might consider using ZigBee, Thread, or something similar in the future if there is enough interest. Please be vocal in your comments on this article.
- I will use a computer to collect all the sensor measurements locally and send them to the cloud.
- The data is going to be ingested into MongoDB Atlas and we will use some of its IoT capabilities, such as time series collections and real-time analytics.
- Finally, I'm going to use some programming languages that are on the edge or even out of my comfort zone, just to prove that they shouldn't be the limitation.
# The vision, the mission and the product
**Vision**: we need work environments that enhance our productivity.
Consider that technology, and IoT in particular, can be helpful for that.
**Mission**: We are going to create, sell, and support IoT products that will help our users to be more productive and feel more comfortable in their work environments.
The first product in the pipeline is going to help our customers to measure and control noise levels in the workspace.
Hopefully, by now you are relieved that this isn't going to be another temperature sensor tutorial. Yippee-ki-yay!
Let's use an implementation diagram that we will refine in the future. In the diagram, I have included an *undetermined* number of sensors (actually, 5) to measure the noise levels in different places, hence the ear shape used for them. In my initial implementation, I will only use a few (two or three) with the sole purpose of verifying that the collecting station can work with more than one at any given time. My first choice for the collecting station, which is represented by the inbox icon, is to use a Raspberry Pi (RPi) that has built-in support for Bluetooth and WiFi. Finally, on the top of the diagram, we have a MongoDB Atlas cluster that we will use to store and use the sensor data.
videos in the past. Please forget my mistakes when using it.
Finally, there are some things that I won't be covering in this project, both for the sake of brevity and for my lack of knowledge of them. The most obvious ones are creating custom PCBs with the components and 3D printing a case for the resulting device. But most importantly, I won't be implementing firmware for all of the devkits that I will select and even less in different languages. Just some of the boards in some of the languages. As we lazy people like to say, this is left as an exercise to the reader.
# The plan
Coming back to the goal of this project, it is to mimic what one would do when one wants to create a new device from scratch. I will start, then, by selecting some microcontroller devkits that are available on the market. That is the first step and it is included in this article.
One of the main features of the hardware that I plan to use is to have some way of working wirelessly. I plan to have some sensors, and if they require a wired connection to the collecting station, it would be a very strong limitation. Thus, my next step is to implement this communication. I have considered two alternatives for the communication. The first one is Bluetooth Low Energy (BLE) and the second one is MQTT over WiFi. I will give a more detailed explanation when we get to them. From the perspective of power consumption, the first option seems to be better, and consuming less power means batteries that last longer and happier users.
But, there seems to be less (complete) documentation on how to implement it. For example, I could find neither good documentation for the BLE library that comes with MicroPython nor anything on how to use BLE with Bluez and DBus. Also, if I successfully implement both sides of the BLE communication, I need to confirm that I can make it work concurrently with more than one sensor.
My second and third steps will be to implement the peripheral role of the BLE communication on the microcontroller devkits and then the central role on the RPi.
I will continue with the implementation of the WiFi counterparts. Step 4 is going to be making the sensors publish their measurements via MQTT over WiFi, and Step 5 will be to have the Raspberry Pi subscribe to the MQTT service and receive the data.
Eventually, in Step 6, I will use the MongoDB C++ driver to upload the data to a MongoDB Atlas cluster. Once the data is ingested by the MongoDB Atlas cluster, we will be able to enjoy the advantages it offers in terms of storing and archiving the data, querying it, and using real-time analytics and visualization.
So, this is the list of steps of the plan:
1. Project intro (you are here)
2. BLE peripheral firmware
3. BLE central for Raspberry Pi OS
4. MQTT publisher firmware
5. MQTT subscriber for Raspberry Pi OS
6. Upload data from Raspberry Pi OS to MongoDB Atlas clusters
7. Work with the data using MongoDB
I have a couple of ideas that I may add at the end of this series, but for now, this matches my goals and what I wanted to share with you. Keep in mind that it is also possible that I will need to include intermediate steps to refine some code or include some required functionality. I am open to suggestions for topics that can be added and enhancements to this content. Send them my way while the project is still in progress.
# Hardware selection
I will start this hands-on part by defining the features that I will be using and then come up with some popular and affordable devkit boards that implement those features or, at least, can be made to do so. I will end up with a list of devkit boards. It will be nothing like the "top devkit boards" of this year, but rather a list of suggested boards that can be used for a project like this one.
Let's start with the features:
- They have to implement at least one of the two radio-frequency communication standards: WiFi and/or Bluetooth.
- They have to have a microphone or some pins that allow me to connect one.
- Having another sensor on board is appreciated but not required. Reading the temperature is extremely simple, so I will start by using that instead of getting audio. I will focus on the audio part later when the communications are implemented and working.
- I plan to have independent sensors, so it would be nice if I could plug a battery instead of using the USB power. Again, a nice feature, but not a must-have.
- Last, but not least, having documentation available, examples, and a vibrant community will make our lives easier.
## Raspberry Pi Pico W
![Raspberry Pi Pico W][3]

The Raspberry Pi Pico W is produced by the same company that sells the well-known Raspberry Pi single-board computers, but it is a microcontroller board with its own RP2040 chip. The RP2040 is a dual-core Arm Cortex-M0+ processor. The W model includes a fully certified module that provides 802.11n WiFi and Bluetooth 5.2. It doesn't have a microphone on the devkit board, but there are examples and code available for connecting an electret microphone. It does have a temperature sensor, though. It also doesn't have a battery socket, so we will have to use our spare USB chargers.
Finally, in terms of creating code for this board, we can use:
- MicroPython, which is an implementation of Python3 for microcontrollers. It is efficient and offers the niceties of the Python language: easy to learn, mature ecosystem with many libraries, and even REPL.
- C/C++ that provide a lower-level interface to extract every bit of juice from the board.
- JavaScript as I have learned very recently. The concept is similar to the one in the MicroPython environment but less mature (for now).
- There are some Rust crates for this processor and the board, but it may require extra effort to use BLE or WiFi using the embassy crate.
## Micro:bit
![Micro:bit][4]

The micro:bit is a board created for learning purposes. It comes with several built-in sensors, including a microphone, and LEDs that we can use to get feedback on the noise levels. It uses a Nordic nRF52833 that features an Arm Cortex-M4 processor with a full Bluetooth Low Energy stack, but no WiFi. It has a battery socket, and it can be bought with a case for AA batteries.
The educational goal is also present when we search for options to write code. These are the main options:
- Microsoft MakeCode which is a free platform to learn programming online using a graphical interface to operate with different blocks of code.
- Python using MicroPython or its own web interface.
- C/C++ with the Arduino IDE.
- Rust, because the introductory guide for embedded Rust uses the microbit as the reference board. So, no better board to learn how to use Rust with embedded devices. BLE is not in the guide, but we could also use the embassy nrf-softdevice crate to implement it.
## Adafruit Circuit Playground Bluefruit
![Adafruit Circuit Playground Bluefruit][5]

The Adafruit Circuit Playground Bluefruit is also aimed at people who want to have their first contact with electronics. It comes with a bunch of sensors, including a temperature sensor and a microphone, and it also has some very nice RGB LEDs. Its main chip is a Nordic nRF52840 Cortex-M4 processor with Bluetooth Low Energy support. As was the case with the micro:bit board, there's no WiFi support on this board. It has a JST PH connector for a LiPo battery or an AAA battery pack.
It can be used with Microsoft MakeCode, but its preferred programming environment is CircuitPython. CircuitPython is a fork of MicroPython with some specific and slightly more refined libraries for Adafruit products, such as this board. If you want to use Rust, there is a crate for an older version of this board, without BLE support. But then again, we could use the embassy crates for that purpose.
## Adafruit Feather nRF52840 Sense
![Adafruit Feather nRF52840 Sense][6]

The Adafruit Feather nRF52840 Sense is also based on the Nordic nRF52840 Cortex-M4 and offers Bluetooth Low Energy but no WiFi. It comes with many on-board sensors, including a microphone and a temperature sensor. It also features an RGB LED and a JST PH connector for a battery that can be charged using the USB connector.
While this board can also be used to learn, I would argue that it's more aimed at prototyping and the programming options are:
- CircuitPython as with all the Adafruit boards.
- C/C++ with the Arduino IDE.
- Rust, using the previously mentioned crates.
## Espressif ESP32-C6-DevKitC-1
![Espressif ESP32-C6-DevKitC-1][7]

The Espressif ESP32-C6-DevKitC-1 features a RISC-V single-core processor and a WROOM module that provides not only WiFi and Bluetooth connectivity but also Zigbee and Thread (both are network protocols specifically designed for IoT). It has no sensors on board, but it does have an LED and two USB-C ports, one for UART communications and the other for USB Type-C serial communications.
Espressif boards have traditionally been programmed in C/C++, but during the last year, they have been promoting Rust as a supported environment. It even has an introductory book that explains the basics for their boards.
# Recap and future content
:youtube[]{vid=FW8n8IcEwTM}
In this article, we have introduced the project that I will be developing. It will be a series of sensors that gather noise data that will be collected by a bespoke implementation of a collecting station. I will explore two mechanisms for the communication between the sensors and the collecting station: BLE and MQTT over WiFi. Once the data is in the collecting station, I will send it to a MongoDB Atlas cluster on the Cloud using the C++ driver and we will finish the project by showing some potential uses of the data in the Cloud.
I have presented you with a list of requirements for the development boards and some alternatives that match those requirements, and you can use it for this or similar projects. In our next episode, I will try to implement the BLE peripheral role in one or more of the boards.
If you have any questions or feedback, head to the MongoDB Developer Community forum.
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt28e47a9dd6c27329/65533c1b9f2b99ec15bc9579/Adventures_in_IoT.png
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltf77df78f4a2fdad2/65536b9e647c28790d4e8033/devices.jpeg
[3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt38747f873267cbc5/655365a46053f868fac92221/rp2.jpeg
[4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt0d38d6409869d9a1/655365b64d285956b1afabf2/microbit.jpeg
[5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltfa0bf312ed01d222/655365cc2e0ea10531178104/circuit-playground.jpeg
[6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt5ed758c5d26c0382/655365da9984b880675a9ace/feather.jpeg
[7]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt1ad337c149332b61/655365e9e9c23ce2e927441b/esp32-c6.jpeg | md | {
"tags": [
"C++",
"Python"
],
"pageDescription": "In the first article of this series, you can learn about the hands-on IoT project that we will be delivering. It discusses the architecture that will be implemented and the step-by-step approach that will be followed to implement it. There is a discussion about the rules of engagement for the project and the tools that will be used. The last section covers a a selection of MCU devkit boards that would be suitable for the project.",
"contentType": "Tutorial"
} | Plans and Hardware Selection for a Hands-on Implementation of IoT with MCUs and MongoDB | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/java/quarkus-rest-crud | created | # Creating a REST API for CRUD Operations With Quarkus and MongoDB
## What is Quarkus?
When we write a traditional Java application, our Java source code is compiled and transformed into Java bytecode.
This bytecode can then be executed by a Java virtual machine (JVM) specific to the operating system you are
running. This is why we can say that Java is a portable language. You compile once, and you can run it everywhere,
as long as you have the right JVM on the right machine.
This is a great mechanism, but it comes at a cost. Starting a program is slow because the JVM and the entire context
need to be loaded first before running anything. It's not memory-efficient because we need to load hundreds of classes that might never be used, since classpath scanning only occurs at runtime.
This was perfectly fine in the old monolithic realm, but this is totally unacceptable in the new world made of lambda
functions, cloud, containers, and Kubernetes. In this context, a low memory footprint and a lightning-fast startup time
are absolutely mandatory.
This is where Quarkus comes in. Quarkus is a Kubernetes-native Java framework tailored
for GraalVM and HotSpot.
With Quarkus, you can build native binaries that can boot and send their first response in 0.042 seconds versus 9.5
seconds for a traditional Java application.
In this tutorial, we are going to build a Quarkus application that can manage a `persons` collection in MongoDB. The
goal is to perform four simple CRUD operations with a REST API using a native application.
## Prerequisites
For this tutorial, you'll need:
- cURL.
- Docker.
- GraalVM.
- A MongoDB Atlas cluster or a local instance. I'll use a Docker container in
this tutorial.
If you don't want to code along and prefer to check out directly the final code:
```bash
git clone git@github.com:mongodb-developer/quarkus-mongodb-crud.git
```
## How to set up Quarkus with MongoDB
**TL;DR**:
Use this link
and click on `generate your application` or clone
the GitHub repository.
The easiest way to get your project up and running with Quarkus and all the dependencies you need is to
use https://code.quarkus.io/.
Similar to Spring initializr, the Quarkus project starter website will help you
select your dependencies and build your Maven or Gradle configuration file. Some dependencies will also include a
starter code to assist you in your first steps.
For our project, we are going to need:
- MongoDB client [quarkus-mongodb-client].
- SmallRye OpenAPI [quarkus-smallrye-openapi].
- REST [quarkus-rest].
- REST Jackson [quarkus-rest-jackson].
Feel free to use the `group` and `artifact` of your choice. Make sure the Java version matches your
GraalVM version, and we are ready to go.
Download the zip file and unzip it in your favorite project folder. Once it's done, take some time to read the README.md
file provided.
Finally, we need a MongoDB cluster. Two solutions:
- Create a new cluster on MongoDB Atlas and retrieve the connection string, or
- Create an ephemeral single-node replica set with Docker.
```bash
docker run --rm -d -p 27017:27017 -h $(hostname) --name mongo mongo:latest --replSet=RS && sleep 5 && docker exec mongo mongosh --quiet --eval "rs.initiate();"
```
Either way, the next step is to set up your connection string in the `application.properties` file.
```properties
quarkus.mongodb.connection-string=mongodb://localhost:27017
```
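If you went with MongoDB Atlas instead of the local Docker container, the same property takes the standard SRV connection string; the cluster hostname below is just a placeholder to replace with your own:
```properties
quarkus.mongodb.connection-string=mongodb+srv://myDatabaseUser:myPassword@mycluster.example.mongodb.net/
```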
## CRUD operations in Quarkus with MongoDB
Now that our Quarkus project is ready, we can start developing.
First, we can start the developer mode which includes live coding (automatic refresh) without the need to restart the
program.
```bash
./mvnw compile quarkus:dev
```
The developer mode comes with two handy features:
- Swagger UI
- Quarkus Dev UI
Feel free to take some time to explore both these UIs and see the capabilities they offer.
Also, as your service is now running, you should be able to receive your first HTTP communication. Open a new terminal and execute the following query:
```bash
curl http://localhost:8080/hello
```
> Note: If you cloned the repo, then it’s `/api/hello`. We are changing this below in a minute.
Result:
```
Hello from Quarkus REST
```
This works because your project currently contains a single class `GreetingResource.java` with the following code.
```java
package com.mongodb;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;
@Path("/hello")
public class GreetingResource {
@GET
@Produces(MediaType.TEXT_PLAIN)
public String hello() {
return "Hello from Quarkus REST";
}
}
```
### PersonEntity
"Hello from Quarkus REST" is nice, but it's not our goal! We want to manipulate data from a `persons` collection in
MongoDB.
Let's create a classic `PersonEntity.java` POJO class. I created
it in the default `com.mongodb` package which is my `group` from earlier. Feel free to change it.
```java
package com.mongodb;
import com.fasterxml.jackson.databind.annotation.JsonSerialize;
import com.fasterxml.jackson.databind.ser.std.ToStringSerializer;
import org.bson.types.ObjectId;
import java.util.Objects;
public class PersonEntity {
@JsonSerialize(using = ToStringSerializer.class)
public ObjectId id;
public String name;
public Integer age;
public PersonEntity() {
}
public PersonEntity(ObjectId id, String name, Integer age) {
this.id = id;
this.name = name;
this.age = age;
}
@Override
public int hashCode() {
int result = id != null ? id.hashCode() : 0;
result = 31 * result + (name != null ? name.hashCode() : 0);
result = 31 * result + (age != null ? age.hashCode() : 0);
return result;
}
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
PersonEntity that = (PersonEntity) o;
if (!Objects.equals(id, that.id)) return false;
if (!Objects.equals(name, that.name)) return false;
return Objects.equals(age, that.age);
}
public ObjectId getId() {
return id;
}
public void setId(ObjectId id) {
this.id = id;
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
public Integer getAge() {
return age;
}
public void setAge(Integer age) {
this.age = age;
}
}
```
We now have a class to map our MongoDB documents to using Jackson.
### PersonRepository
Now that we have a `PersonEntity`, we can create a `PersonRepository` template, ready to welcome our CRUD queries.
Create a `PersonRepository.java` class next to the `PersonEntity.java` one.
```java
package com.mongodb;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoCollection;
import jakarta.enterprise.context.ApplicationScoped;
@ApplicationScoped
public class PersonRepository {
private final MongoClient mongoClient;
private final MongoCollection<PersonEntity> coll;
public PersonRepository(MongoClient mongoClient) {
this.mongoClient = mongoClient;
this.coll = mongoClient.getDatabase("test").getCollection("persons", PersonEntity.class);
}
// CRUD methods will go here
}
```
### PersonResource
We are now almost ready to create our first CRUD method. Let's update the default `GreetingResource.java` class to match
our goal.
1. Rename the file `GreetingResource.java` to `PersonResource.java`.
2. In the `test` folder, also rename the default test files to `PersonResourceIT.java` and `PersonResourceTest.java`.
3. Update `PersonResource.java` like this:
```java
package com.mongodb;
import jakarta.inject.Inject;
import jakarta.ws.rs.*;
import jakarta.ws.rs.core.MediaType;
@Path("/api")
@Consumes(MediaType.APPLICATION_JSON)
@Produces(MediaType.APPLICATION_JSON)
public class PersonResource {
@Inject
PersonRepository personRepository;
@GET
@Path("/hello")
public String hello() {
return "Hello from Quarkus REST";
}
// CRUD routes will go here
}
```
> Note that with the `@Path("/api")` annotation, the URL of our `/hello` service is now `/api/hello`.
As a consequence, update `PersonResourceTest.java` so our test keeps working.
```java
package com.mongodb;
import io.quarkus.test.junit.QuarkusTest;
import org.junit.jupiter.api.Test;
import static io.restassured.RestAssured.given;
import static org.hamcrest.CoreMatchers.is;
@QuarkusTest
class PersonResourceTest {
@Test
void testHelloEndpoint() {
given().when().get("/api/hello").then().statusCode(200).body(is("Hello from Quarkus REST"));
}
}
```
### Create a person
All the code blocks are now in place. We can create our first route to be able to create a new person.
In
the repository,
add the following method that inserts a `PersonEntity` and returns the inserted document's `ObjectId` in `String`
format.
```java
public String add(PersonEntity person) {
return coll.insertOne(person).getInsertedId().asObjectId().getValue().toHexString();
}
```
In
the resource
file, we can create the corresponding route:
```java
@POST
@Path("/person")
public String createPerson(PersonEntity person) {
return personRepository.add(person);
}
```
Without restarting the project (remember the dev mode?), you should be able to test this route.
```bash
curl -X POST http://localhost:8080/api/person \
-H 'Content-Type: application/json' \
-d '{"name": "John Doe", "age": 30}'
```
This should return the `ObjectId` of the new `person` document.
```
661dccf785cd323349ca42f7
```
If you connect to the MongoDB instance with mongosh, you can confirm that
the document made it:
```
[RS direct: primary] test> db.persons.find()
[
{
_id: ObjectId('661dccf785cd323349ca42f7'),
age: 30,
name: 'John Doe'
}
]
```
### Read persons
Now, we can read all the persons in the database, for example.
In
the repository,
add:
```java
public List<PersonEntity> getPersons() {
return coll.find().into(new ArrayList<>());
}
```
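If your IDE doesn't add them automatically, note that this method relies on two extra imports at the top of `PersonRepository.java`:
```java
import java.util.ArrayList;
import java.util.List;
```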
In
the resource,
add:
```java
@GET
@Path("/persons")
public List<PersonEntity> getPersons() {
return personRepository.getPersons();
}
```
Now, we can retrieve all the persons in our database:
```bash
curl http://localhost:8080/api/persons
```
This returns a list of persons:
```json
[
{
"id": "661dccf785cd323349ca42f7",
"name": "John Doe",
"age": 30
}
]
```
### Update person
It's John Doe's anniversary! Let's increment his age by one.
In
the repository,
add:
```java
public long anniversaryPerson(String id) {
Bson filter = eq("_id", new ObjectId(id));
Bson update = inc("age", 1);
return coll.updateOne(filter, update).getModifiedCount();
}
```
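The `eq` and `inc` helpers come from the driver's filter and update builders, so `PersonRepository.java` also needs the following imports (plus `Bson` and `ObjectId`):
```java
import org.bson.conversions.Bson;
import org.bson.types.ObjectId;

import static com.mongodb.client.model.Filters.eq;
import static com.mongodb.client.model.Updates.inc;
```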
In
the resource,
add:
```java
@PUT
@Path("/person/{id}")
public long anniversaryPerson(@PathParam("id") String id) {
return personRepository.anniversaryPerson(id);
}
```
Time to test this party:
```bash
curl -X PUT http://localhost:8080/api/person/661dccf785cd323349ca42f7
```
This returns `1` which is the number of modified document(s). If the provided `ObjectId` doesn't match a person's id,
then it returns `0` and MongoDB doesn't perform any update.
### Delete person
Finally, it's time to delete John Doe...
In
the repository,
add:
```java
public long deletePerson(String id) {
Bson filter = eq("_id", new ObjectId(id));
return coll.deleteOne(filter).getDeletedCount();
}
```
In
the resource,
add:
```java
@DELETE
@Path("/person/{id}")
public long deletePerson(@PathParam("id") String id) {
return personRepository.deletePerson(id);
}
```
Let's test:
```bash
curl -X DELETE http://localhost:8080/api/person/661dccf785cd323349ca42f7
```
Again, it returns `1` which is the number of deleted document(s).
Now that we have a working Quarkus application with a MongoDB CRUD service, it's time to experience the full
power of Quarkus.
## Quarkus native build
Quit the developer mode by simply hitting the `q` key in the relevant terminal.
It's time to build
the native executable
that we can use in production with GraalVM and experience the *insanely* fast start-up time.
Use this command line to build directly with your local GraalVM and other dependencies.
```bash
./mvnw package -Dnative
```
Or use the Docker image that contains everything you need:
```bash
./mvnw package -Dnative -Dquarkus.native.container-build=true
```
The final result is a native application, ready to be launched, in your `target` folder.
```bash
./target/quarkus-mongodb-crud-1.0.0-SNAPSHOT-runner
```
On my laptop, it starts in **just 0.019s**! Remember how much time Spring Boot needs to start an application and respond
to queries for the first time?!
You can read more about how Quarkus makes this miracle a reality in
the container first documentation.
## Conclusion
In this tutorial, we've explored how Quarkus and MongoDB can team up to create a lightning-fast RESTful API with CRUD
capabilities.
Now equipped with these insights, you're ready to build blazing-fast APIs with Quarkus, GraalVM, and MongoDB. Dive into
the
provided GitHub repository for more details.
> If you have questions, please head to our Developer Community website where the
> MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.
| md | {
"tags": [
"Java",
"MongoDB",
"Quarkus",
"Docker"
],
"pageDescription": "Explore the seamless integration of Quarkus, GraalVM, and MongoDB for lightning-fast CRUD RESTful APIs. Harness Quarkus' rapid startup time and Kubernetes compatibility for streamlined deployment.",
"contentType": "Quickstart"
} | Creating a REST API for CRUD Operations With Quarkus and MongoDB | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/products/atlas/unlock-value-data-mongodb-atlas-intelligent-analytics-microsoft-fabric | created | # Unlock the Value of Data in MongoDB Atlas with the Intelligent Analytics of Microsoft Fabric
To win in this competitive digital economy, enterprises are striving to create smarter intelligent apps. These apps provide a superior customer experience and can derive insights and predictions in real-time.
Smarter apps use data — in fact, lots of data, AI and analytics together. MongoDB Atlas stores valuable operational data and has capabilities to support operational analytics and AI based applications. This blog details MongoDB Atlas’ seamless integration with Microsoft Fabric to run large scale AI/ML and varied analytics and BI reports across the enterprise data estate, reshaping how teams work with data by bringing everyone together on a single, AI-powered platform built for the era of AI. Customers can leverage MongoDB Atlas with Microsoft Fabric as the foundation to build smart and intelligent applications.
## Better together
MongoDB was showcased as a key partner at Microsoft Ignite, highlighting the collaboration to build seamless integrations and joint solutions complementing capabilities to address diverse use cases.
In his keynote, Satya Nadella, Chairman and Chief Executive Officer of Microsoft, announced that Microsoft Fabric is now generally available for purchase. Satya addressed the strategic plan to enable MongoDB Atlas mirroring in Microsoft Fabric so that our customers can use mirroring to access their data in OneLake.
MongoDB Atlas’ flexible data model, versatile query engine, integration with LLM frameworks, and inbuilt Vector Search, analytical nodes, aggregation framework, Atlas Data Lake, Atlas Data Federation, Charts, etc. enable operational analytics and application-driven intelligence from the source of the data itself. However, the analytics and AI needs of an enterprise span across their data estate and require them to combine multiple data sources and run multiple types of analytics like big data, Spark, SQL, or KQL-based ones at a large scale. They bring data from sources like MongoDB Atlas to one uniform format in OneLake in Microsoft Fabric to enable them to run batch Spark analytics and AI/ML of petabyte scale and use data warehousing abilities, big data analytics, and real-time analytics across the delta tables populated from disparate sources.
#### MongoDB Atlas SQL connector
The MongoDB Atlas SQL connector is a Microsoft-certified connector which can be accessed from the “Dataflow Gen2” feature from “Data Factory” in Microsoft Fabric.
Dataflow Gen2 selection takes us to the familiar Power Query interface of Microsoft Power BI. To bring data from MongoDB Atlas collections, search the MongoDB Atlas SQL connector from the “Get Data” option on the menu.
You can use the Atlas SQL quick start to get a connection string, or set up an Atlas federated database and get a connection string for the same. Also, note that the connector needs a Gateway set up to communicate from Fabric and schedule refreshes. Get more details on Gateway setup.
Once data is retrieved from MongoDB Atlas into Power Query, the magic of Power Query can be used to transform the data, including flattening object data into separate columns, unwinding array data into separate rows, or changing data types. These are typically required when converting MongoDB data in JSON format to the relational format in Power BI. Additionally, the blank query option can be used for a quick query execution. Below is a sample query to start with:
```
let
Source = MongoDBAtlasODBC.Query("", "", "select * from ", null)
in
Source
```
#### MongoDB Data Pipeline connector (preview)
The announcement at Microsoft Ignite of the Data Pipeline connector being released for MongoDB Atlas in Microsoft Fabric is definitely good news for MongoDB customers. The connector provides a quick and similar experience as the MongoDB connector in Data Factory and Synapse Pipelines.
The connector is accessed from the “Data Pipelines” feature from “Data Factory” in Fabric. Choose the “Copy data” activity to use the MongoDB connector to get data from MongoDB or to push data to MongoDB. To get data from MongoDB, add MongoDB in Source. Select the MongoDB connector and create a linked service by providing the **connection string** and the **database** to connect to in MongoDB Atlas.
Another option is to use an Atlas database trigger to capture the change events in a MongoDB collection and an Atlas function to trigger an Azure function. The Azure function can directly write to the Lakehouse in Microsoft Fabric or to ADLS Gen2 storage using ADLS Gen2 APIs. ADLS Gen2 storage accounts can be referenced in Microsoft Fabric using shortcuts, eliminating the need for an ETL process to move data from ADLS Gen2 to OneLake. Data in Microsoft Fabric can be accessed using the existing ADLS Gen2 APIs, but there are some changes and constraints, which can be referred to in the Microsoft Fabric documentation.
The MongoDB Connector for Spark provides streaming capabilities which allow structured streaming of changes from MongoDB or to MongoDB in both continuous and micro-batch modes. Using the connector, we just need simple code that reads a stream of changes from the MongoDB collection and writes the stream to the Lakehouse in Microsoft Fabric or to ADLS Gen2 storage, which can be referenced in Microsoft Fabric using shortcuts. MongoDB Atlas can be set up as a source for structured streaming by referring to the MongoDB documentation. Refer to the Microsoft Fabric documentation on setting up Lakehouse as a sink for structured streaming.
Get started with MongoDB Atlas for free today on Azure Marketplace.
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltc783c7ca51ffc321/655678560e64b945e26edeb7/Fabric_Keynote.png
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt11079cade4dbe467/6553ef5253e8ec0e05c46baa/image2.png
[3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt133326ec100a6ccd/6553ef7a9984b8c9045a9fc6/image5.png
[4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltd2c2a8c4741c1849/6553efa09984b8a1685a9fca/image6.png
[5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltc1faa0c6b3e2a93d/6553f00253e8ecacacc46bb4/image3.png
[6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt4789acc89ff7d1ef/6553f021647c28121d4e84f6/image7.png
[7]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt58b3b258cfeb5614/6553f0410e64b9dbad6ece06/image1.png
[8]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltba161d80a4442dd0/6553f06ac2479d218b7822e0/image4.png | md | {
"tags": [
"Atlas"
],
"pageDescription": "Learn how you can use Microsoft Fabric with MongoDB Atlas for intelligent analytics for your data. ",
"contentType": "News & Announcements"
} | Unlock the Value of Data in MongoDB Atlas with the Intelligent Analytics of Microsoft Fabric | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/python/semantic-search-made-easy-langchain-mongodb | created | # Semantic Search Made Easy With LangChain and MongoDB
Enabling semantic search on user-specific data is a multi-step process that includes loading, transforming, embedding, and storing data before it can be queried.
Enter LangChain, whose goal is to provide a set of utilities to greatly simplify this process.
In this tutorial, we'll walk through each of these steps, using MongoDB Atlas as our Store. Specifically, we'll use the AT&T Wikipedia page as our data source. We'll then use libraries from LangChain to load, transform, embed, and store:
To follow along, you'll need:
* A MongoDB Atlas cluster (free tier is fine)
* Open AI API key
## Quick start steps
1. Get the code:
```zsh
git clone https://github.com/mongodb-developer/atlas-langchain.git
```
2. Update params.py with your MongoDB connection string and Open AI API key.
3. Create a new Python environment
```zsh
python3 -m venv env
```
4. Activate the new Python environment
```zsh
source env/bin/activate
```
5. Install the requirements
```zsh
pip3 install -r requirements.txt
```
6. Load, transform, embed, and store
```zsh
python3 vectorize.py
```
7. Retrieve
```zsh
python3 query.py -q "Who started AT&T?"
```
## The details
### Load -> Transform -> Embed -> Store
#### Step 1: Load
There's no lack of sources of data — Slack, YouTube, Git, Excel, Reddit, Twitter, etc. — and LangChain provides a growing list of integrations that includes this list and many more.
For this exercise, we're going to use the WebBaseLoader to load the Wikipedia page for AT&T.
```python
from langchain.document_loaders import WebBaseLoader
loader = WebBaseLoader("https://en.wikipedia.org/wiki/AT%26T")
data = loader.load()
```
#### Step 2: Transform (Split)
Now that we have a bunch of text loaded, it needs to be split into smaller chunks so we can tease out the relevant portion based on our search query. For this example, we'll use the recommended RecursiveCharacterTextSplitter. As I have it configured, it attempts to split on paragraphs (`"\n\n"`), then sentences(`"(?<=\. )"`), and then words (`" "`) using a chunk size of 1,000 characters. So if a paragraph doesn't fit into 1,000 characters, it will truncate at the next word it can fit to keep the chunk size under 1,000 characters. You can tune the `chunk_size` to your liking. Smaller numbers will lead to more documents, and vice-versa.
```python
from langchain.text_splitter import RecursiveCharacterTextSplitter
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=0, separators=[
    "\n\n", "\n", "(?<=\. )", " "], length_function=len)
docs = text_splitter.split_documents(data)
```
#### Step 3: Embed
Embedding is where you associate your text with an LLM to create a vector representation of that text. There are many options to choose from, such as OpenAI and Hugging Face, and LangChain provides a standard interface for interacting with all of them.
For this exercise, we're going to use the popular OpenAI embedding. Before proceeding, you'll need an API key for the OpenAI platform, which you will set in params.py.
We're simply going to load the embedder in this step. The real power comes when we store the embeddings in Step 4.
```python
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings(openai_api_key=params.openai_api_key)
```
#### Step 4: Store
You'll need a vector database to store the embeddings, and lucky for you MongoDB fits that bill. Even luckier for you, the folks at LangChain have a MongoDB Atlas module that will do all the heavy lifting for you! Don't forget to add your MongoDB Atlas connection string to params.py.
```python
from pymongo import MongoClient
from langchain.vectorstores import MongoDBAtlasVectorSearch
client = MongoClient(params.mongodb_conn_string)
collection = client[params.db_name][params.collection_name]
# Insert the documents in MongoDB Atlas with their embedding
docsearch = MongoDBAtlasVectorSearch.from_documents(
docs, embeddings, collection=collection, index_name=params.index_name
)
```
You'll find the complete script in vectorize.py, which needs to be run once per data source (and you could easily modify the code to iterate over multiple data sources).
```zsh
python3 vectorize.py
```
#### Step 5: Index the vector embeddings
The final step before we can query the data is to create a search index on the stored embeddings.
In the Atlas console and using the JSON editor, create a Search Index named `vsearch_index` with the following definition:
```JSON
{
"mappings": {
"dynamic": true,
"fields": {
"embedding": {
"dimensions": 1536,
"similarity": "cosine",
"type": "knnVector"
}
}
}
}
```
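If you would rather create the index from code than from the Atlas UI, recent PyMongo versions (4.5+) expose a helper for it. This is a sketch under the assumption that your cluster tier allows programmatic search index management; the index it creates is identical to the JSON definition above:
```python
from pymongo import MongoClient
from pymongo.operations import SearchIndexModel

import params

client = MongoClient(params.mongodb_conn_string)
collection = client[params.db_name][params.collection_name]

# Same definition as the JSON editor version above, created programmatically
index_model = SearchIndexModel(
    definition={
        "mappings": {
            "dynamic": True,
            "fields": {
                "embedding": {"dimensions": 1536, "similarity": "cosine", "type": "knnVector"}
            }
        }
    },
    name="vsearch_index",
)
collection.create_search_index(index_model)
```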
### Retrieve
At this point, we could query the vector store directly using similarity_search or max_marginal_relevance_search. That would return the relevant slice of data, which in our case would be an entire paragraph. However, we can continue to harness the power of the LLM to contextually compress the response so that it more directly tries to answer our question.
```python
from pymongo import MongoClient
from langchain.vectorstores import MongoDBAtlasVectorSearch
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import LLMChainExtractor
client = MongoClient(params.mongodb_conn_string)
collection = client[params.db_name][params.collection_name]
vectorStore = MongoDBAtlasVectorSearch(
collection, OpenAIEmbeddings(openai_api_key=params.openai_api_key), index_name=params.index_name
)
llm = OpenAI(openai_api_key=params.openai_api_key, temperature=0)
compressor = LLMChainExtractor.from_llm(llm)
compression_retriever = ContextualCompressionRetriever(
base_compressor=compressor,
base_retriever=vectorStore.as_retriever()
)
```
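To see the difference between the two approaches, you can query the vector store directly and then through the compression retriever. Method names shift a little between LangChain releases, but with the versions used in this project, something along these lines works:
```python
question = "Who started AT&T?"

# Plain vector search: returns the raw chunks stored in Atlas
raw_docs = vectorStore.similarity_search(question, k=3)
print(raw_docs[0].page_content)

# Contextually compressed: the LLM trims each chunk down to what answers the question
compressed_docs = compression_retriever.get_relevant_documents(question)
print(compressed_docs[0].page_content)
```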
```zsh
python3 query.py -q "Who started AT&T?"
Your question:
-------------
Who started AT&T?
AI Response:
-----------
AT&T - Wikipedia
"AT&T was founded as Bell Telephone Company by Alexander Graham Bell, Thomas Watson and Gardiner Greene Hubbard after Bell's patenting of the telephone in 1875."[25] "On December 30, 1899, AT&T acquired the assets of its parent American Bell Telephone, becoming the new parent company."[28]
```
## Resources
* [MongoDB Atlas
* Open AI API key
* LangChain
* WebBaseLoader
* RecursiveCharacterTextSplitter
* MongoDB Atlas module
* Contextual Compression
* MongoDBAtlasVectorSearch API
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt60cb0020b79c0f26/6568d2ba867c0b46e538aff4/semantic-search-made-easy-langchain-mongodb-1.jpg
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt7d06677422184347/6568d2edf415044ec2127397/semantic-search-made-easy-langchain-mongodb-2.png
[3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltd8e8fb5c5fdfbed8/6568d30e81b93e1e25a1bf8e/semantic-search-made-easy-langchain-mongodb-3.png
[4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt7b65b6cb87008f2a/6568d337867c0b1e0238b000/semantic-search-made-easy-langchain-mongodb-4.png
[5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt567ad5934c0f7a34/6568d34a7e63e37d4e110d3d/semantic-search-made-easy-langchain-mongodb-5.png | md | {
"tags": [
"Python",
"Atlas",
"AI"
],
"pageDescription": "Discover the power of semantic search with our comprehensive tutorial on integrating LangChain and MongoDB. This step-by-step guide simplifies the complex process of loading, transforming, embedding, and storing data for enhanced search capabilities. Using MongoDB Atlas and the AT&T Wikipedia page as a case study, we demonstrate how to effectively utilize LangChain libraries to streamline semantic search in your projects. Ideal for developers with a MongoDB Atlas subscription and OpenAI API key, this tutorial covers everything from setting up your environment to querying embedded data. Dive into the world of semantic search with our easy-to-follow instructions and expert insights.",
"contentType": "Tutorial"
} | Semantic Search Made Easy With LangChain and MongoDB | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/no-connectivity-no-problem-enable-offline-inventory-atlas-edge-server | created | # No Connectivity? No Problem! Enable Offline Inventory with Atlas Edge Server
> If you haven’t yet followed our guide on how to build an inventory management system using MongoDB Atlas, we strongly advise doing so now. This article builds on top of the previous one to bring powerful new capabilities for real-time sync, conflict resolution, and disconnection tolerance!
In the relentless world of retail logistics, where products are always on the move, effective inventory management is crucial. Fast-moving operations can’t afford to pause when technical systems go offline. That's why it's essential for inventory management processes to remain functional, even without connectivity. To address this challenge, supply chains turn to Atlas Edge Server to enable offline inventory management in a reliable and cost-effective way. In this guide, we will demonstrate how you can easily incorporate Edge Server into your existing solution.
In the first part of this series, we explored how MongoDB Atlas enables event-driven architectures to enhance inventory management with real-time data strategies. Now, we are taking that same architecture a step further to ensure our store operations run seamlessly even in the face of connectivity issues. Our multi-store setup remains the same: We’ll have three users — two store managers and one area manager — overviewing the inventory of their stores and areas respectively. We'll deploy identical systems in both individual stores and the public cloud to serve the out-of-store staff. The only distinction will be that the store apps will be linked to Edge Server, whereas the area manager's app will remain connected to MongoDB Atlas. Just like that, our stores will be able to handle client checkouts, issue replenishment orders, and access the product catalog with no interruptions and minimal latency. This is how Atlas Edge Server bridges the gap between connected retail stores and the cloud.
Without further ado, let's dive in and get started!
## Prerequisites
For this next phase, we'll need to ensure we have all the prerequisites from Part 1 in place, as well as some additional requirements related to integrating Edge Server. Here are the extra tools you'll need:
- **Docker** (version 24 or higher): Docker allows us to package our application into containers, making it easy to deploy and manage across different environments. Since Edge Server is a containerized product, Docker is essential to run it. You can choose to install Docker Engine alone if you're using one of the supported platforms or as part of the Docker Desktop package for other platforms.
- **Docker Compose** (version 2.24 or higher): Docker Compose is a tool for defining and running multi-container Docker applications. The Edge Server package deploys a group of containers that need to be orchestrated effectively. If you have installed Docker Desktop in the previous step, Docker Compose will be available by default. For Linux users, you can install Docker Compose manually from this page: Install the Docker Compose plugin.
- **edgectl** (version 0.23.2 or higher): edgectl is the CLI tool for Edge Server, allowing you to manage and interact with Edge Server instances. To install this tool, you can visit the official documentation on how to configure Edge Server or simply run the following command in your terminal: `curl https://services.cloud.mongodb.com/edge/install.sh | bash`.
With these additional tools in place, we'll be ready to take our inventory management system to the next level.
## A quick recap
Alright, let's do a quick recap of what we should have in place already:
- **Sample database**: We created a sample database with a variety of collections, each serving a specific purpose in our inventory management system. From tracking products and transactions to managing user roles, our database laid the groundwork for a single view of inventory.
- **App Services back end**: Leveraging Atlas App Services, we configured our app back end with triggers, functions, HTTPS endpoints, and the Data API. This setup enabled seamless communication between our application and the database, facilitating real-time responses to events.
- **Search Indexes**: We enhanced our system's search capabilities by setting up Search Indexes. This allows for efficient full-text search and filtering, improving the user experience and query performance.
- **Atlas Charts**: We integrated Atlas Charts to visualize product information and analytics through intuitive dashboards. With visually appealing insights, we can make informed decisions and optimize our inventory management strategy.
To learn more about how Edge Server works, check out the Atlas Edge Server documentation.
Follow these instructions to set up and run Edge Server on your own device:
We will configure Edge Server using the command-line tool edgectl. By default, this tool will be installed at `.mongodb-edge` in your home directory. You can reference the entire path to use this tool, `~/.mongodb-edge/bin/edgectl`, or simply add it to your `PATH` by running the command below:
```
export PATH="~/.mongodb-edge/bin/:$PATH"
```
The next command will generate a docker-compose file in your current directory with all the necessary steps to deploy and manage your Edge Server instance. Replace `<APP_ID>` with the App Services App ID obtained in the first part of this tutorial series, and `<REGISTRATION_TOKEN>` with the token generated in the previous section.
```
edgectl init --platform compose --app-id <APP_ID> --registration-token <REGISTRATION_TOKEN> --insecure-disable-auth
```
> Note: To learn more about each of the config flags, visit our documentation on how to install and configure Edge Server.
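With the Compose file generated, the Edge Server containers are managed like any other Docker Compose project. A minimal sketch, assuming you run it from the directory containing the generated docker-compose file:
```
# Start the Edge Server containers in the background
docker compose up -d

# Verify that the containers are up and healthy
docker compose ps
```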
This application is able to simulate offline scenarios by setting the edge server connectivity off. In order to enable this feature in Edge Server, run the command below.
```
edgectl offline-demo setup
```
- Atlas Edge Server
- How Atlas Edge Server Bridges the Gap Between Connected Retail Stores and the Cloud
- Grainger Innovates at the Edge With MongoDB Atlas Device Sync and Machine Learning
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt8ea62e50ad7e7a88/66293b9985518c840a558497/1.png
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt43b8b288aa9fe608/66293bb6cac8480a1228e08b/2.png
[3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltf2995665e6146367/66293bccb054417d969a04b5/3.png
[4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt5fad48387c6d5355/66293bdb33301d293a892dd1/4.png
[5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt480ffa6b4b77dbc0/66293bed33301d8bcb892dd5/5.png
[6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltc5f671f8c40b9e2e/66293c0458ce881776c309ed/6.png
[7]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltd31d0f728d5ec83c/66293c16b8b5ce162edc25d0/7.png
[8]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt3c061edca6899a69/66293c2cb0ec77e21cd6e8e4/8.png
[9]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt2d7fd9a68a6fa499/66293c4281c884eb36380366/9.gif | md | {
"tags": [
"Atlas"
],
"pageDescription": "",
"contentType": "Tutorial"
} | No Connectivity? No Problem! Enable Offline Inventory with Atlas Edge Server | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/products/atlas/migrate-from-rdbms-mongodb-help-ai-introduction-query-converter | created | # Migrate From an RDBMS to MongoDB With the Help of AI: An Introduction to Query Converter
Migrating your applications between databases and programming languages can often feel like a chore. You have to export and import your data, transfer your schemas, and make potential application logic changes to accommodate the new programming language or database syntax. With MongoDB and the Relational Migrator tool, these activities no longer need to feel like a chore and instead can become more automated and streamlined.
You don't necessarily need your own database, because the Relational Migrator tool contains sample schemas that will work for experimentation. However, if you want to play around with your own data, you can connect to one of the popular relational database management systems (RDBMS).
## Generate MongoDB queries with the help of AI
Open Relational Migrator and choose to create a new project. For the sake of this article, we'll click "Use a sample schema" to play around. Running queries and looking at data is not important here. We only want to know our schema, our SQL queries, and what we'll end our adventure with query-wise.
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt4df0d0f28d4b9b30/66294a9458ce883a7ec30a80/query-converter-animated.gif
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blte5216ae48b0c15c1/66294aad210d90a3c53a53dd/relational-migrator-new-project.png
[3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blte7b77b8c7e1ca080/66294ac4fb977c24fa36b921/relational-migrator-erd-model.png
[4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltc0f9b2fa7b1434d7/66294adffb977c441436b929/relational-migrator-query-converter-tab.png | md | {
"tags": [
"Atlas",
"SQL"
],
"pageDescription": "Learn how to quickly and easily migrate your SQL queries from a relational database to MongoDB queries and aggregation pipelines using the AI features of Relational Migrator and Query Converter.",
"contentType": "Tutorial"
} | Migrate From an RDBMS to MongoDB With the Help of AI: An Introduction to Query Converter | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/languages/javascript/scale-up-office-music | created | # Listen Along at Scale Up with Atlas Application Services
Here at Scale Up, we value music a lot. We have a Google Home speaker at our office that gets a lot of use. Music gets us going and helps us express ourselves individually as well as an organization. With how important music is in our lives, an idea came to our minds: We wanted to share what we like to listen to. We made a Scale Up Spotify playlist that we share on our website and listen to quite often, but we wanted to take it one step further. We wanted a way for others to be able to see what we're currently listening to in the office, and to host that, we turned to Atlas Application Services.
Sources of music and ways to connect to the speaker are varied. Some people listen on YouTube, others on Spotify, some like to connect via a Cast feature that Google Home provides, and others just use Bluetooth connection to play the tunes. We sometimes use the voice features of Google Home and politely ask the speaker to put some music on.
All of this means that there's no easily available "one source of truth" for what's currently playing on the speaker. We could try to somehow connect to Spotify or Google Home's APIs to see what's being cast, but that doesn’t cover all the aforementioned cases—connecting via Bluetooth or streaming from YouTube. The only real source of truth is what our ears can actually hear.
That's what we ultimately landed on—trying to figure out what song is playing by actually listening to soundwaves coming out of the speaker. Thankfully, there are a lot of public APIs that can recognize songs based on a short audio sample. We decided to pick one that's pretty accurate when it comes to Polish music. In the end, it’s a big part of what we're listening to.
All of this has to run somewhere. The first thing that came to mind was to build this "listening device" by getting a Raspberry Pi with a microphone, but after going through my "old tech drawer"—let's face it, all of us techies have one—I found an old Nexus 5. After playing with some custom ROMs, I managed to run node.js applications there. If you think about it, it really is a perfect device for this use case. It has more than enough computing power, a built-in microphone, and a screen just in case you need to do a quick debug. I ended up writing a small program that takes a short audio sample every couple of minutes between 7:00 am and 5:00 pm and uses the API mentioned above to recognize the song.
The piece of information about what we're currently listening to is a good starting point, but in order to embed it on our website, we need to store it somewhere first. Here's where MongoDB's and Mongo Atlas' powers come into play. Setting up a cloud database was very easy. It took me less than five minutes. The free tier is more than enough for prototyping and simple use cases like this one, and if you end up needing more, you can always switch to a higher tier. I connected my application to a MongoDB Atlas instance using the MongoDB Node Driver.
Now that we have information about what's currently playing captured and safely stored in the MongoDB Atlas instance, there's only one piece of the puzzle missing: a way to retrieve the latest song from the database. Usually, this would require a separate application that we would have to develop, manage in the cloud, or provide a bare metal to run on, but here's the kicker: MongoDB has a way to do this easily with MongoDB Application Services. Application Services allows writing custom HTTP endpoints to retrieve or manipulate database data.
To create an endpoint like that, log in to your MongoDB Atlas Account. After creating a project, go to App Services at the top and then Create a New App. Name your app, click on Create App Service, and then on the left, you’ll see the HTTP Endpoints entry. After clicking Add Endpoint, select all the relevant settings.
The fetchsong function is a small JavaScript function, connected to an HTTPS endpoint, that returns the latest song, provided it was played within the last 15 minutes. Here it is in full glory:
```Javascript
exports = async function (request, response) {
const filter = {date: {$gt: new Date(new Date().getTime() - 15 * 60000)}};
const projection = {artist: 1, title: 1, _id: 0};
const songsCollection = context.services.get("mongodb-atlas")
.db("scaleup")
.collection("songs");
const docs = await songsCollection
.find(filter, projection)
.sort({date: -1})
.limit(1).toArray();
const [latestSong] = docs;
response.setBody(latestSong);
};
```
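For the curious, the website snippet boils down to a small fetch call against that endpoint, roughly like this (the endpoint URL and element ID are placeholders; use the URL App Services shows for your own HTTPS endpoint):
```Javascript
// Placeholder URL — replace it with the one App Services generates for your endpoint
const ENDPOINT_URL = "https://data.example.mongodb-api.com/app/<app-id>/endpoint/fetchsong";

async function showCurrentSong() {
  const response = await fetch(ENDPOINT_URL);
  const text = await response.text();
  if (!response.ok || !text) return; // nothing playing in the last 15 minutes
  const song = JSON.parse(text);
  document.querySelector("#current-song").textContent = `${song.artist} - ${song.title}`;
}

showCurrentSong();
```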
And voilà! After embedding a JavaScript snippet on our website to read the song data, here’s the final outcome:
*Currently played song on Spotify*
To see the results for yourself, visit https://scaleup.com.pl/en/#music. If you don't see anything, don’t worry—we work in the Central European Time Zone, so the office might be currently empty. :) Also, if you need to hire IT specialists here in Poland, don't hesitate to drop us a message. ;)
Huge thanks to John Page for being an inspiration to play with MongoDB's products and to write this article. The source code for the whole project is available on GitHub. :) | md | {
"tags": [
"JavaScript",
"Atlas",
"Node.js"
],
"pageDescription": "Learn how Scale Up publishes the title of the song currently playing in their office regardless of the musical source using an old cellphone and MongoDB Atlas Application Services.",
"contentType": "Article"
} | Listen Along at Scale Up with Atlas Application Services | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/products/connectors/bigquery-spark-stored-procedure | created | # Spark Up Your MongoDB and BigQuery using BigQuery Spark Stored Procedures
To empower enterprises that strive to transform their data into insights, BigQuery has emerged as a powerful, scalable, cloud-based data warehouse solution offered by Google Cloud Platform (GCP). Its cloud-based approach allows efficient data management and manipulation, making BigQuery a game-changer for businesses seeking advanced data insights. Notably, one of BigQuery’s standout features is its seamless integration with Spark-based data processing that enables users to further enhance their queries. Now, leveraging BigQuery APIs, users can create and execute Spark stored procedures, which are reusable code modules that can encapsulate complex business logic and data transformations. This feature helps data engineers, data scientists, and data analysts take advantage of BigQuery’s advanced capabilities and Spark’s robust data processing capabilities.
MongoDB, a developer data platform, is a popular choice for storing and managing operational data for its scalability, performance, flexible schema, and real-time capabilities (change streams and aggregation). By combining the capabilities of BigQuery with the versatility of Apache Spark and the flexibility of MongoDB, you can unlock a powerful data processing pipeline.
Apache Spark is a powerful open-source distributed computing framework that excels at processing large amounts of data quickly and efficiently. It supports a wide range of data formats, including structured, semi-structured, and unstructured data, making it an ideal choice for integrating data from various sources, such as MongoDB.
BigQuery Spark stored procedures are routines that are executed within the BigQuery environment. These procedures can perform various tasks, such as data manipulation, complex calculations, and even external data integration. They provide a way to modularize and reuse code, making it easier to maintain and optimize data processing workflows. Spark stored procedures use the serverless Spark engine, which enables autoscaling Spark. However, you don’t need to enable Dataproc APIs or be charged for Dataproc when you leverage this new capability.
Let's explore how to extend BigQuery’s data processing to Apache Spark, and integrate MongoDB with BigQuery to effectively facilitate data movement between the two platforms.
## Connecting them together
First, upload the MongoDB Spark connector JAR file to Google Cloud Storage to connect and read from MongoDB Atlas. Copy and save the gsutil URI for the JAR file, which will be used in upcoming steps.
1. Set up a MongoDB Atlas cluster with sample data loaded to it.
2. Navigate to the BigQuery page on the Google Cloud console.
3. Create a **BigQuery dataset** with the name **spark_run**.
4. You will type the PySpark code directly into the query editor. To create a PySpark stored procedure, click on **Create Pyspark Procedure**, and then select **Create PySpark Procedure**.
Grant BigQuery Storage Admin, Secret Manager Secret Accessor, and Storage Object Admin access to this service account from IAM.
Store your MongoDB Atlas username and password into Google Cloud Secret Manager, or you can hardcode them in the MongoDB URI string itself.
8. Copy the below Python script in the PySpark procedure editor and click on **RUN**. The snippet takes around two to three minutes to complete. The below script will create a new table under dataset **spark_run** with the name **sample_mflix_comments**.
```python
from pyspark.sql import SparkSession
from google.cloud import secretmanager
def access_secret_version(secret_id, project_id):
client = secretmanager.SecretManagerServiceClient()
name = f"projects/{project_id}/secrets/{secret_id}/versions/1"
response = client.access_secret_version(request={"name": name})
payload = response.payload.data.decode("UTF-8")
return payload
# Update project_id, username_secret_id, and password_secret_id; comment them out if you did not create the secrets earlier
project_id = ""
username_secret_id = ""
password_secret_id = ""
username = access_secret_version(username_secret_id, project_id)
password = access_secret_version(password_secret_id, project_id)
# Update the mongodb_uri directly with your username and password if you did not create secrets in Step 7, and replace <hostname> with your cluster's hostname
mongodb_uri = "mongodb+srv://"+username+":"+password+"@<hostname>/sample_mflix.comments"
my_spark = SparkSession \
.builder \
.appName("myApp") \
.config("spark.mongodb.read.connection.uri", mongodb_uri) \
.config("spark.mongodb.write.connection.uri", mongodb_uri) \
.getOrCreate()
dataFrame = my_spark.read.format("mongodb").option("database", "sample_mflix").option("collection", "comments").load()
dataFrame.show()
# Saving the data to BigQuery
dataFrame.write.format("bigquery") \
.option("writeMethod", "direct") \
.save("spark_run.sample_mflix_comments")
```
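If you also save the routine as a stored procedure instead of only running it from the editor, you can re-run it (or schedule it) later with a plain CALL statement. The procedure name below is just an example; use whatever name you saved it under:
```
CALL `spark_run.load_mongodb_comments`();
```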
10. Create an external connection using the Google Cloud console or the bq command line with connection type as CLOUD_RESOURCE.
```
!bq mk \
--connection \
--location=US \
--project_id= \
--connection_type=CLOUD_RESOURCE gentext-conn
```
11. To grant IAM permissions to access Vertex AI from BigQuery, navigate to **External connections** > Find the **gentext-conn** connection > Copy the **Service account id**. Grant the **Vertex AI User** access to this service account from **IAM**.
12. Create a model using the CREATE MODEL command.
```
CREATE OR REPLACE MODEL `gcp-pov.spark_run.llm_model`
REMOTE WITH CONNECTION `us.gentext-conn`
OPTIONS (ENDPOINT = 'gemini-pro');
```
13. Run the SQL command against the BigQuery table. This query allows the user to extract the host name from the email leveraging the Gemini Pro model. The resulting output includes the response and safety attributes.
```
SELECT prompt,ml_generate_text_result
FROM
ML.GENERATE_TEXT( MODEL `gcp-pov.spark_run.llm_model`,
(
SELECT CONCAT('Extract the host name from the email: ', email) AS prompt,
* FROM `gcp-pov.spark_run.sample_mflix_comments`
LIMIT 5),
STRUCT(
0.9 AS temperature,
100 AS max_output_tokens
)
);
```
14. Here is the sample output showing the prompt as well as the response. The prompt parameter provides the text for the model to analyze. Prompt design can strongly affect the responses returned by the LLM.
2. BigQuery ML lets you create and run machine learning models by using GoogleSQL queries.
3. BigQuery ML also lets you access LLMs and Cloud AI APIs to perform artificial intelligence (AI) tasks like text generation and machine translation.
## Conclusion
By combining the power of BigQuery, Spark stored procedures, and MongoDB, you can create a robust and scalable data processing pipeline that leverages the strengths of each technology. BigQuery provides a reliable and scalable data warehouse for storing and analyzing structured data, while Spark allows you to process and transform data from various sources, including semi-structured and unstructured data from MongoDB. Spark stored procedures enable you to encapsulate and reuse this logic, making it easier to maintain and optimize your data processing workflows.
### Further reading
1. Get started with MongoDB Atlas on Google Cloud.
2. Work with stored procedures for Apache Spark.
3. Create machine learning models in BigQuery ML.
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt3cc99ff9b6ad9cec/66155da90c478454e8a349f1/1.png
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltdd635dea2d750e73/66155dc254d7c1521e8eea3a/2.png
[3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blta48182269e87f9e7/66155dd2cbc2fbae6d8175ea/3.png
[4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltb399b53e83efffb9/66155de5be36f52825d96ea5/4.png
[5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt6efef490b3d34cf0/66155dfd2b98e91579101401/5.png
[6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt56b91d83e11dea5f/66155e0f7cacdc153bd4a78b/6.png | md | {
"tags": [
"Connectors",
"Python",
"Spark"
],
"pageDescription": "",
"contentType": "Tutorial"
} | Spark Up Your MongoDB and BigQuery using BigQuery Spark Stored Procedures | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/events/developer-day-gdg-philadelphia | created | # Google Developer Day Philadelphia
Welcome to Google Developer Day Philadelphia! Below you can find all the resources you will need for the day.
## Michael's Slide Deck
* Slides
## Search Lab
* Slides
* Intro Lab
* Search lab: hands-on exercises
* Survey
## Resources
* Try Atlas
* Try Compass
* Try Relational Migrator
* Try Vector Search
## Full Developer Day Content
### Data Modeling and Design Patterns
* Slides
* Library application
* System requirements
### MongoDB Atlas Setup: Hands-on exercises setup and troubleshooting
* Intro lab: hands-on exercises
* Data import tool
### Aggregation Pipelines Lab
* Slides
* Aggregations lab: hands-on exercises
### Search Lab
* Slides
* Search lab: hands-on exercises
### Additional resources
* Library management system code
* MongoDB data modeling book
* Data Modeling course on MongoDB University
* MongoDB for SQL Pros on MongoDB University
* Atlas Search Workshop: An in-depth workshop that uses the more advanced features of Atlas Search
## Join the Community
Stay connected, and join our community:
* Join the New York MongoDB User Group!
* Sign up for the MongoDB Community Forums. | md | {
"tags": [
"Atlas",
"Google Cloud"
],
"pageDescription": "Experience the future of technology with GDG Philadelphia at our Build with AI event series & Google I/O Extended! Join us for a half-day event showcasing the latest technologies from Google, including AI, Cloud, and Web development. Connect with experts and enthusiasts for learning and networking. Your ticket gives you access to in-person event venues.",
"contentType": "Event"
} | Google Developer Day Philadelphia | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/languages/java/socks5-proxy | created | # Connection to MongoDB With Java And SOCKS5 Proxy
## Introduction
SOCKS5 is a standardized protocol for communicating with network services through a proxy server. It offers several
advantages like allowing the users to change their virtual location or hide their IP address from the online services.
SOCKS5 also offers an authentication layer that can be used to enhance security.
In our case, the network service is MongoDB. Let's see how we can connect to MongoDB through a SOCKS5 proxy with Java.
## SOCKS5 with vanilla Java
Authentication is optional for SOCKS5 proxies. So to be able to connect to a SOCKS5 proxy, you need:
- **proxyHost**: IPv4, IPv6, or hostname of the proxy
- **proxyPort**: TCP port number (default 1080)
If authentication is activated, then you'll also need a username and password. Both need to be provided, or it won't
work.
- **proxyUsername**: the proxy username (not null or empty)
- **proxyPassword**: the proxy password (not null or empty)
### Using connection string parameters
The first method to connect to MongoDB through a SOCKS5 proxy is to simply provide the above parameters directly in the
MongoDB connection string.
```java
public MongoClient connectToMongoDBSock5WithConnectionString() {
String connectionString = "mongodb+srv://myDatabaseUser:myPassword@example.org/" +
"?proxyHost=" +
"&proxyPort=" +
"&proxyUsername=" +
"&proxyPassword=";
return MongoClients.create(connectionString);
}
```
### Using MongoClientSettings
The second method involves passing these parameters into a MongoClientSettings class, which is then used to create the
connection to the MongoDB cluster.
```java
public MongoClient connectToMongoDBSocks5WithMongoClientSettings() {
String URI = "mongodb+srv://myDatabaseUser:myPassword@example.org/";
ConnectionString connectionString = new ConnectionString(URI);
Block<SocketSettings.Builder> socketSettings = builder -> builder.applyToProxySettings(
proxyBuilder -> proxyBuilder.host("<proxyHost>")
.port(1080)
.username("<proxyUsername>")
.password("<proxyPassword>"));
MongoClientSettings settings = MongoClientSettings.builder()
.applyConnectionString(connectionString)
.applyToSocketSettings(socketSettings)
.build();
return MongoClients.create(settings);
}
```
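Whichever approach you choose, a quick way to verify that the connection works with the proxy settings in place is to send a `ping` command right after building the client. A minimal sketch (it only needs the extra `org.bson.Document` import):
```java
try (MongoClient mongoClient = connectToMongoDBSocks5WithMongoClientSettings()) {
    Document result = mongoClient.getDatabase("admin")
                                 .runCommand(new Document("ping", 1));
    System.out.println("Ping through the SOCKS5 proxy succeeded: " + result.toJson());
}
```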
## Connection with Spring Boot
### Using connection string parameters
If you are using Spring Boot or Spring Data MongoDB, you can connect like so if you are passing the SOCKS5 parameters in
the connection string.
Most of the time, if you are using Spring Boot or Spring Data, you'll need the codec registry to
support the POJO mappings. So I included this as well.
```java
package com.mongodb.starter;
import com.mongodb.ConnectionString;
import com.mongodb.MongoClientSettings;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import org.bson.codecs.configuration.CodecRegistry;
import org.bson.codecs.pojo.PojoCodecProvider;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import static org.bson.codecs.configuration.CodecRegistries.fromProviders;
import static org.bson.codecs.configuration.CodecRegistries.fromRegistries;
@Configuration
public class MongoDBConfiguration {
@Value("${spring.data.mongodb.uri}")
private String connectionString;
@Bean
public MongoClient mongoClient() {
CodecRegistry pojoCodecRegistry = fromProviders(PojoCodecProvider.builder().automatic(true).build());
CodecRegistry codecRegistry = fromRegistries(MongoClientSettings.getDefaultCodecRegistry(), pojoCodecRegistry);
return MongoClients.create(MongoClientSettings.builder()
.applyConnectionString(new ConnectionString(connectionString))
.codecRegistry(codecRegistry)
.build());
}
}
```
In this case, all the SOCKS5 action is actually happening in the `application.properties` file of your Spring Boot
project.
```properties
spring.data.mongodb.uri=${MONGODB_URI:"mongodb+srv://myDatabaseUser:myPassword@example.org/?proxyHost=<proxyHost>&proxyPort=<proxyPort>&proxyUsername=<proxyUsername>&proxyPassword=<proxyPassword>"}
```
### Using MongoClientSettings
If you prefer to use the MongoClientSettings, then you can just pass a classic MongoDB URI and handle the different
SOCKS5 parameters directly in the `SocketSettings.Builder`.
```java
package com.mongodb.starter;
import com.mongodb.Block;
import com.mongodb.ConnectionString;
import com.mongodb.MongoClientSettings;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.connection.SocketSettings;
import org.bson.codecs.configuration.CodecRegistry;
import org.bson.codecs.pojo.PojoCodecProvider;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import static org.bson.codecs.configuration.CodecRegistries.fromProviders;
import static org.bson.codecs.configuration.CodecRegistries.fromRegistries;
@Configuration
public class MongoDBConfigurationSocks5 {
@Value("${spring.data.mongodb.uri}")
private String connectionString;
@Bean
public MongoClient mongoClient() {
CodecRegistry pojoCodecRegistry = fromProviders(PojoCodecProvider.builder().automatic(true).build());
CodecRegistry codecRegistry = fromRegistries(MongoClientSettings.getDefaultCodecRegistry(), pojoCodecRegistry);
Block<SocketSettings.Builder> socketSettings = builder -> builder.applyToProxySettings(
proxyBuilder -> proxyBuilder.host("<proxyHost>")
.port(1080)
.username("<proxyUsername>")
.password("<proxyPassword>"));
return MongoClients.create(MongoClientSettings.builder()
.applyConnectionString(new ConnectionString(connectionString))
.applyToSocketSettings(socketSettings)
.codecRegistry(codecRegistry)
.build());
}
}
```
## Conclusion
Leveraging a SOCKS5 proxy for connecting to MongoDB in Java offers enhanced security and flexibility. Whether through connection string parameters or MongoClientSettings, integrating SOCKS5 functionality is straightforward.
If you want to read more details, you can check out the SOCKS5 documentation online.
If you have questions, please head to our Developer Community website where
the MongoDB engineers and the MongoDB community will help you build your next big idea with MongoDB.
| md | {
"tags": [
"Java",
"MongoDB",
"Spring"
],
"pageDescription": "In this post, we explain the different methods you can use to connect to a MongoDB cluster through a SOCKS5 proxy with vanilla Java and Spring Boot.",
"contentType": "Tutorial"
} | Connection to MongoDB With Java And SOCKS5 Proxy | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/products/atlas/neurelo-series-two-lambda | created | # Building a Restaurant Locator Using Atlas, Neurelo, and AWS Lambda
Ready to build a robust and efficient application that can quickly process real-time data, is capable of adapting to changing environments, and is fully customizable with seamless integration?
The developer dream trifecta of MongoDB Atlas, Neurelo, and AWS Lambda will propel your cloud-based applications in ways you didn’t know were possible! With this lethal combination, you can build a huge variety of applications, like the restaurant locator we will build in this tutorial.
This combination of platforms helps developers build scalable, cost-efficient, and performant serverless functions. One huge benefit is that the Lambda functions remain stateless: with Neurelo in the picture, data operations become stateless API calls, and no stateful database connections are opened on every Lambda invocation. This means higher performance and lower costs, since no execution (and billing) time is spent setting up or tearing down connections. It also enables significantly higher concurrency of Lambda invocations, because Neurelo's built-in connection pooling lets you open fewer connections on your MongoDB instance.
We will be going over how to properly set up the integration infrastructure to ensure you’re set up for success, and then we will dive into actually building our application. At the end, we will have a restaurant locator that we can use to search for restaurants that fit our desired criteria. Let’s get started!
## Pre-reqs
- MongoDB Atlas account
- Neurelo account
- AWS account; Lambda access is necessary
## Setting up our MongoDB cluster
Our first step is to spin up a free MongoDB cluster and download the sample dataset. For help on how to do this, please refer to our tutorial.
For this tutorial, we will be using the `sample_restaurants` collection that is located inside the sample dataset. Please ensure you have added your IP address to the cluster's access list, and keep a secure database username and password handy, as you will need them throughout.
Once your cluster is up and running, we can start setting up our Neurelo project.
## Setting up our Neurelo project
Once we have our MongoDB cluster created, we need to create a project in Neurelo. For help on this step, please refer to our first tutorial in this series, Neurelo and MongoDB: Getting Started and Fun Extras.
Save your API key someplace safe. Otherwise, you will need to create a new key if it gets lost. Additionally, please ensure your Neurelo project is connected to your MongoDB cluster. For help on grabbing a MongoDB connection string, we have directions to guide you through it. Now, we can move on to setting up our AWS Lambda function.
## Creating our AWS Lambda function
Log into your AWS account and access Lambda either through the search bar or in the “Services” section. Click on the orange “Create function” button and make sure to press the “Author from scratch” option on the screen that pops up. Select a name for your function — we are using “ConnectTest” to keep things simple — and then, choose “Python 3.12” for your runtime, since this is a Python tutorial! Your Lambda function should look like this prior to hitting “Create function.”
Once you’re taken to the “Function overview” page, we can start writing our code to perfectly integrate MongoDB Atlas, Neurelo, and AWS Lambda. Let’s dive into it.
## Part 1: The integration
Luckily, we don’t need to install any external dependencies for this Lambda function tutorial, and we can write our code directly into the function we just created.
The first step is to import the packages `urllib3` and `json` with the line:
```
import urllib3, json
```
These two packages, both available in the Lambda Python runtime, are all we need to call Neurelo's REST API and work with its JSON responses.
Once we have our imports in, we can configure the API key for our Neurelo environment. We are using a placeholder, `API_KEY`; for simplicity in this tutorial, you can paste your key directly in. However, hardcoding keys is never good practice and should not be done in a production environment.
```
# Put in your API Key to connect to your Neurelo environment
NEURELO_API_KEY = 'API_KEY'
```
Once you’ve set up your API key connection, we can set up our headers for the REST API call. For this, we can take the auto-generated `lambda_function` function and edit it to better suit our needs:
```
def lambda_handler(event, context):
# Setup the headers
headers = {
'X-API-KEY': NEURELO_API_KEY
}
# Creating a PoolManager instance for sending HTTP requests
http = urllib3.PoolManager()
```
Here, we are creating a dictionary named `headers` to set the value of our API key. This step is necessary so Neurelo can authenticate our API request and we can return our necessary documents. We are then utilizing the `PoolManager` class to manage our server connections. This is an efficient way to ensure we are reusing connections with Lambda instead of creating a new connection with each individual call. For this tutorial, we are only using one connection, but if you have a more complex Lambda or a project with the need for multiple connections, you will be able to see the magic of the `PoolManager` class a bit more.
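One Lambda-specific refinement worth noting: anything created outside the handler is reused across warm invocations of the same execution environment. If you want the `PoolManager` and headers to survive between invocations, the sketch below shows that pattern; it is an illustration of the idea, not part of this tutorial's final code.
```
import urllib3, json

# Created once per execution environment and reused across warm invocations
NEURELO_API_KEY = 'API_KEY'  # placeholder, as above
HEADERS = {'X-API-KEY': NEURELO_API_KEY}
HTTP = urllib3.PoolManager()

def lambda_handler(event, context):
    # Every invocation reuses the same PoolManager instead of building a new one
    response = HTTP.request(
        "GET",
        "https://us-east-2.aws.neurelo.com/rest/restaurants",
        headers=HEADERS,
        fields={'take': '5'},
    )
    return {"statusCode": response.status}
```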
Now, we are ready to set up our first API call! Please recall that in this first step, we are connecting to our “restaurants” collection within our `sample_restaurants` database and we are returning our necessary documents.
We have decided that we want to retrieve a list of restaurants from this collection that fit specific criteria: These restaurants are located in the borough of Brooklyn, New York, and serve American cuisine. Prior to writing the code below, we suggest you take a second to look through the sample database to view the fields inside our documents.
So now that we’ve defined the query parameters we are interested in, let’s translate it into a query request. We are going to be using three parameters for our query: “filter,” “take,” and “select.” These are the same parameter keys from our first article in this series, so please refer back to it if you need help. We are using the “filter” parameter to ensure we are receiving restaurants that fit our criteria of being in Brooklyn and that are American, the “take” parameter is so we only return five documents instead of thousands (our collection has over 25,000 documents!), and the “select” parameter is so that only our specific fields are being returned in our output.
Our query request will look like this:
```
# Define the query parameters
params1 = {
'filter': '{"AND": {"borough": {"equals": "Brooklyn"}, "cuisine": {"equals": "American"}}}',
'take': '5',
'select': '{"id": false, "name": true, "borough": true, "cuisine": true}',
}
```
Don’t forget to send a GET request with our necessary parameters, and set up some print statements so we can see if our request was successful. Once completed, the whole code block for our Part 1 should look something like this:
```
import urllib3, json
# Configure the API Key for our Neurelo environment
NEURELO_API_KEY = 'API_KEY'
def lambda_handler(event, context):
# Setup the headers
headers = {
'X-API-KEY': NEURELO_API_KEY
}
# Creating a PoolManager instance for sending HTTP requests
http = urllib3.PoolManager()
# Choose the "restaurants" collection from our Neurelo environment connected to 'sample_restaurants'
api1 = 'https://us-east-2.aws.neurelo.com/rest/restaurants'
# Define the query parameters
params1 = {
'filter': '{"AND": {"borough": {"equals": "Brooklyn"}, "cuisine": {"equals": "American"}}}',
'take': '5',
'select': '{"id": false, "name": true, "borough": true, "cuisine": true}',
}
# Send a GET request with URL parameters
response = http.request("GET", api1, headers=headers, fields=params1)
# Print results if the request was successful
if response.status == 200:
# Print the JSON content of the response
print ('Restaurants Endpoint: ' + json.dumps(json.loads(response.data), indent=4))
```
And our output will look like this:
Congratulations! As you can see, we have successfully returned five American cuisine restaurants located in Brooklyn, and we have successfully integrated our MongoDB cluster with our Neurelo project and have used AWS Lambda to access our data.
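In a real function, you will probably also want to handle the case where the request does not succeed. Here is a minimal sketch of what that could look like inside `lambda_handler`, reusing the same `response` object from above (the exact error body Neurelo returns is not covered here):
```
# Inside lambda_handler, after sending the GET request (sketch, not the final tutorial code)
if response.status == 200:
    print('Restaurants Endpoint: ' + json.dumps(json.loads(response.data), indent=4))
else:
    # Log the failure so it shows up in CloudWatch and surface the status to the caller
    print(f'Request failed with status {response.status}: {response.data}')
    return {"statusCode": response.status, "body": "Request to Neurelo failed"}
```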
Now that we’ve set everything up, let’s move on to the second part of our tutorial, where we will filter our results with a custom API endpoint for the best restaurants possible.
## Part 2: Filtering our results further with a custom API endpoint
Before we can call our custom endpoint to filter for our desired results, we need to create one. While Neurelo has a large list of auto-generated endpoints available for your project, sometimes we need an endpoint that we can customize with a complex query to return information that is nuanced. From the sample database in our cluster, we can see that there is a `grades` field where the grade and score received by each restaurant exist.
So, what if we want to return documents based on their scores? Let's say we want to take our search a step further and find restaurants that are genuinely good.
Head over to Neurelo and access the “Definitions” tab on the left-hand side of the screen. Go to the “Custom Queries” tab and create a complex query named “getGoodRestaurants.” For more help on this section, please refer to the first article in this series for a more detailed explanation.
We want to filter restaurants where the most recent grades are either “A” or “B,” and the latest grade score is greater than 10. Then, we want to aggregate the restaurants by cuisine and borough and list the restaurant name, so we can know where to go!
Our custom query will look like this:
```
{
"aggregate": "restaurants",
"pipeline":
{
"$match": {
"borough": "Brooklyn",
"cuisine": "American",
"grades.0.grade": {
"$in": [
"A",
"B"
]
},
"grades.1.grade": {
"$in": [
"A",
"B"
]
},
"grades.0.score": {
"$gt": 10
}
}
},
{
"$limit": 5
},
{
"$group": {
"_id": {
"cuisine": "$cuisine",
"borough": "$borough"
},
"restaurants_info": {
"$push": {
"name": "$name"
}
}
}
}
],
"cursor": {}
}
```
Great! Now that we have our custom query in place, hit the “Commit” button at the top of the screen, add a commit message, and make sure that the “Deploy to environment” option is selected. This is a crucial step that will ensure that we are committing our custom query into the definitions repo for the project and deploying the changes to our environment.
Now, we can head back to Lambda and incorporate our second endpoint to return restaurants that have high scores serving our desired food in our desired location.
Add this code to the bottom of the previous code we had written.
```
# Choose the custom-query endpoint from our Neurelo environment connected to 'sample_restaurants'
api2 = 'https://us-east-2.aws.neurelo.com/custom/getGoodRestaurants'
# Send a GET request with URL parameters
response = http.request("GET", api2, headers=headers)
if response.status == 200:
# Print the JSON content of the response
print ('Custom Query Endpoint: ' + json.dumps(json.loads(response.data), indent=4))
```
Here, we are choosing our custom endpoint, `getGoodRestaurants`, and then sending a GET request to acquire the necessary information.
Please deploy the changes in Lambda and hit the “Test” button.
Your output will look like this:
[Fig 4: custom complex query endpoint output in Lambda]
As you can see from the results above, we have received a sample size of five American cuisine, Brooklyn borough restaurants that meet our criteria and are considered good restaurants!
## Conclusion
In this tutorial, we have covered how to properly integrate a MongoDB Atlas cluster with our Neurelo project and return our desired results by using AWS Lambda. We have shown the full process of utilizing our Neurelo project automated API endpoints and even how to use unique and fully customizable endpoints as well!
For more help with using MongoDB Atlas, Neurelo, and AWS Lambda, please visit the hyperlinked documentation.
> This tutorial is the second in our series. Please check out the first tutorial: Neurelo and MongoDB: Getting Started and Fun Extras. | md | {
"tags": [
"Atlas",
"Python",
"Neurelo"
],
"pageDescription": "Follow along with this in-depth tutorial covering the integration of MongoDB Atlas, Neurelo, and AWS Lambda to build a restaurant locator.",
"contentType": "Tutorial"
} | Building a Restaurant Locator Using Atlas, Neurelo, and AWS Lambda | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/products/atlas/getting-started-atlas-stream-processing-security | created | # Getting Started With Atlas Stream Processing Security
Security is paramount in the realm of databases, and the safeguarding of streaming data is no exception. Stream processing services like Atlas Stream Processing handle sensitive data from a variety of sources, making them prime targets for malicious activities. Robust security measures, including encryption, access controls, and authentication mechanisms, are essential to mitigating risks and upholding the trustworthiness of the information flowing through streaming data pipelines.
In addition, regulatory compliance may impose comprehensive security protocols and configurations such as enforcing auditing and separation of duties. In this article, we will cover the security capabilities of Atlas Stream Processing, including access control, and how to configure your environment to support least privilege access. Auditing and activity monitoring will be covered in a future article.
## A primer on Atlas security
Recall that in MongoDB Atlas, organizations, projects, and clusters are hierarchical components that facilitate the organization and management of MongoDB resources. An organization is a top-level entity representing an independent deployment of MongoDB Atlas, and it contains one or more projects.
A project is a logical container within an organization, grouping related resources and serving as a unit for access control and billing. Within a project, MongoDB clusters are deployed. Clusters are instances of MongoDB databases, each with its own configurations, performance characteristics, and data. Clusters can span multiple cloud regions and availability zones for high availability and disaster recovery.
This hierarchy allows for the efficient management of MongoDB deployments, access control, and resource isolation within MongoDB Atlas.
Atlas users authenticate with the Atlas UI, API, or CLI only (a.k.a. the control plane). Authorization includes access to an Atlas organization and the Atlas projects within the organization.
Atlas database users, on the other hand, authenticate with Atlas clusters and other data-plane services through client tools such as mongosh or via a MongoDB driver like the MongoDB Java driver. If you have previously used a self-hosted MongoDB server, Atlas database users are the equivalent of the MongoDB user. MongoDB Atlas supports a variety of authentication methods such as SCRAM (username and password), LDAP Proxy Authentication, OpenID Connect, Kerberos, and x.509 Certificates. While clients use any one of these methods to authenticate, Atlas services, such as Atlas Data Federation, access other Atlas services like Atlas clusters via temporary x.509 certificates. This same concept is used within Atlas Stream Processing and will be discussed later in this post.
**Note:** Unless otherwise specified, a “user” in this article refers to an Atlas database user.
Authentication to Stream Processing Instances (SPIs) operates similarly to Atlas clusters, where only users defined within the Atlas data plane (e.g., Atlas database users) are allowed to connect to and create SPIs. It's crucial to grasp this concept because SPIs and Atlas clusters are distinct entities within an Atlas project, yet they share the same authentication process via Atlas database users.
By default, **only Atlas users who are Project Owners or Project Stream Processing Owners can create Stream Processing Instances.** These users also have the ability to create, update, and delete connection registry connections associated with SPIs.
### Connecting to the Stream Processing Instance
Once the SPI is created, Atlas database users can connect to it just as they would with an Atlas cluster through a client tool such as mongosh. Any Atlas database user with the built-in “readWriteAnyDatabase” or “atlasAdmin” can connect to any SPIs within the project.
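For example, a database user could connect from Python with PyMongo. The connection string below is a placeholder for the one shown in the SPI's "Connect" dialog, and running `listStreamProcessors` as a database command is an assumption based on the action names listed later in this post; check the Atlas Stream Processing documentation for the exact commands your client exposes.
```python
from pymongo import MongoClient

# Placeholder URI: copy the real connection string from the SPI's "Connect" dialog
SPI_URI = "mongodb://atlas-stream-example-a1b2c.mongodb.net/?tls=true&authSource=admin"

client = MongoClient(SPI_URI, username="myDatabaseUser", password="myPassword")

# Assumption: the listStreamProcessors action is also exposed as a database command;
# verify the exact command names in the Atlas Stream Processing documentation.
print(client.admin.command("listStreamProcessors"))
```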
For users without one of these built-in permissions, or for scenarios where administrators want to follow the principle of least privilege, administrators can create a custom database role made up of specific actions.
#### Custom actions
Atlas Stream Processing introduces a number of custom actions that can be assigned to a custom database role. For example, if administrators wanted to create an operations-level role that could only start, stop, and view stream statistics, they could create a database user role, “ASPOps,” and add the startStreamProcessor, stopStreamProcessor, and listStreamProcessors actions. The administrator would then grant this role to the user.
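If you prefer to script this instead of clicking through the UI, the role could also be created through the Atlas Administration API. The sketch below is an assumption-heavy illustration: it targets the standard custom database roles resource, the project (group) ID and API key pair are placeholders, and the exact `resources` shape expected for stream processing actions should be verified against the Atlas Administration API documentation.
```python
import requests
from requests.auth import HTTPDigestAuth

GROUP_ID = "<your-project-id>"     # placeholder: Atlas project (group) ID
PUBLIC_KEY = "<public-api-key>"    # placeholder: programmatic API key pair
PRIVATE_KEY = "<private-api-key>"

# The ASPOps role from the example above: a role name plus the three actions.
# The cluster-level "resources" value is an assumption; check the API reference.
role = {
    "roleName": "ASPOps",
    "actions": [
        {"action": action, "resources": [{"cluster": True}]}
        for action in ("startStreamProcessor", "stopStreamProcessor", "listStreamProcessors")
    ],
}

response = requests.post(
    f"https://cloud.mongodb.com/api/atlas/v2/groups/{GROUP_ID}/customDBRoles/roles",
    json=role,
    auth=HTTPDigestAuth(PUBLIC_KEY, PRIVATE_KEY),
    headers={"Accept": "application/vnd.atlas.2023-01-01+json"},
)
response.raise_for_status()
print(response.json())
```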
The following is a list of Atlas Stream Processing actions:
- createStreamProcessor
- processStreamProcessor
- startStreamProcessor
- stopStreamProcessor
- dropStreamProcessor
- sampleStreamProcessor
- listStreamProcessors
- listConnections
- streamProcessorStats
One issue you might notice: a database user with the built-in "readWriteAnyDatabase" role has all of these actions granted by default, and a custom role containing them grants these permissions for all Stream Processing Instances within the Atlas project! If your organization wants to lock this down and restrict access to specific SPIs, it can do so by navigating to the "Restrict Access" section and selecting the desired SPIs.
Give it a try today, or read more about MongoDB Atlas Stream Processing in our documentation.
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt493531d0261fd667/6629225351b16f1ecac4e6cd/1.png
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt5a6ba7eaf44c67a2/662922674da2a996e6ff2ea8/2.png
[3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltcdd67839c2a52fa2/6629227fb0ec7701ffd6e743/3.png
[4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt5188a9aabfeae08c/66292291b0ec775eb8d6e747/4.png
[5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt586319eaf4e5422b/662922ab45f9893914cf6a93/5.png
[6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt7cfd6608d0aca8b0/662922c3b054410cfd9a038c/6.png
[7]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltb01ee1b6f7f4b89c/662922d9c9de46ee62d4944f/7.png
[8]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltc251c6f17b584861/662922edb0ec77eee0d6e750/8.png | md | {
"tags": [
"Atlas"
],
"pageDescription": "Take a deep dive into Atlas Stream Processing security. Learn how Atlas Stream Processing achieves a principle of least privilege.",
"contentType": "Tutorial"
} | Getting Started With Atlas Stream Processing Security | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/languages/csharp/ef-core-ga-updates | created | # What's New in the MongoDB Provider for EF Core?
Exciting news! As announced at .local NYC, the MongoDB Provider for Entity Framework (EF) has gone into General Availability (GA) with the release of version 8.0 in NuGet. The major version numbers are set to align with the version number of .NET and EF so the release of 8.0 means the provider now officially supports .NET 8 and EF 8! 🎉
In this article, we will take a look at four highlights of what’s new and how you can add the features to your EF projects today.
## Prerequisites
This will be an article with code snippets, so it is assumed you have some knowledge of not just MongoDB but also EF. If you want to see an example application in Blazor that uses the new provider and how to get started implementing the CRUD operations, there is a previous tutorial I wrote on getting started that will take you from a blank application all the way to a working system.
> Should you wish to see the application from which the code snippets in this article are taken, you can find it on GitHub.
## Support for embedded documents
While the provider was in preview, it wasn’t possible to handle lists or embedded documents. But this has changed in GA. It is now just as easy as with the MongoDB C# driver to handle embedded documents and arrays in your application.
In MongoDB’s sample restaurants collection from the sample dataset, the documents have an address field, which is an embedded document, and a grades field which contains an array of embedded documents representing each time the restaurant was graded.
Just like before, you can have a C# model class that represents your restaurant documents and classes for each embedded document, and the provider will take care of mapping to and from the documents from the database to those classes and making those available inside your DbSet.
We can then access the properties on that class to display data retrieved from MongoDB using the provider.
```csharp
var restaurants = dbContext.Restaurants.AsNoTracking().Take(numOfDocsToReturn).AsEnumerable();
foreach (var restaurant in restaurants)
{
Console.WriteLine($"{restaurant.Id.ToLower()}: {restaurant.Name} - {restaurant.Borough}, {restaurant.Address.Zipcode}");
foreach (var grade in restaurant.Grades)
{
Console.WriteLine($"Grade: {grade.GradeLetter}, Score: {grade.Score}");
}
Console.WriteLine("--------------------");
}
```
This code is pretty straightforward. It creates an IEnumerable of restaurants from querying the Dbset, only selecting (using the Take method) the number of requested restaurant documents. It then loops through each returned restaurant and prints out data from it, including the zip code from the embedded address document.
Because grades is an array of grade documents, there is also an additional loop to access data from each document in the array.
Creating documents is also able to support embedded documents. As expected, you can create a new Restaurant object and new versions of both the Address and Grade objects to populate those fields too.
```csharp
var newResturant = new Restaurant()
{
Id = "5678",
Name = "My Awesome Restaurant",
Borough = "Brooklyn",
Cuisine = "American",
Address = new Address()
{
Building = "123",
        Coord = new double[] { 0, 0 },
Street = "Main St",
Zipcode = "11201"
},
    Grades = new List<Grade>()
{
new Grade()
{
Date = DateTime.Now,
GradeLetter = "A",
Score = 100
}
},
IsTestData = true,
    RestaurantId = "123456"
};
```
Then, just like with any EF code, you can call Add on the db context, passing in the object to insert and call save changes to sync the db context with your chosen storage — in this case, MongoDB.
```csharp
dbContext.Add(newResturant);
await dbContext.SaveChangesAsync();
```
## Detailed logging and view of queries
Another exciting new feature available in the GA is the ability to get more detailed information, to your logging provider of choice, about what is going on under the hood.
You can achieve this using the LogTo and EnableSensitiveLogging methods, available from the DbContextOptionsBuilder in EF. For example, you can log to your own logger, logging factory, or even Console.Write.
```csharp
public static RestaurantDbContext Create(IMongoDatabase database) =>
new(new DbContextOptionsBuilder()
.LogTo(Console.WriteLine)
.EnableSensitiveDataLogging()
.UseMongoDB(database.Client, database.DatabaseNamespace.DatabaseName)
.Options);
```
One of the reasons you might choose to do this, and the reason why it is so powerful, is that it will show you what the underlying query was that was used to carry out your requested LINQ.
[Figure: logging showing an aggregation query to match on an object id and limit to 1 result]
This can be helpful for debugging purposes, but also for learning more about MongoDB as well as seeing what fields are used most in queries and might benefit from being indexed, if not already an index.
## BSON attributes
Another feature that has been added that is really useful is support for the BSON attributes. One of the most common use cases for these is to allow for the use of different field names in your document versus the property name in your class.
One of the most often seen differences between MongoDB documents and C# properties is in the capitalization. MongoDB documents, including fields in the restaurant documents, use lowercase. But in C#, it is common to use camel casing. We have a set of naming convention packs you can use in your code to apply class-wide handling of that, so you can specify once that you will be using that convention, such as camel case in your code, and it will automatically handle the conversion. But sometimes, that alone isn’t enough.
For example, in the restaurant data, there is a field called “restaurant_id” and the most common naming convention in C# would be to call the class property “RestaurantId.” As you can see, the difference is more than just the capitalization. In these instances, you can use attributes from the underlying MongoDB driver to specify what the element in the document would be.
```csharp
[BsonElement("restaurant_id")]
public string RestaurantId { get; set; }
```
Other useful attributes include the ```[BsonId]``` attribute, to specify which property is to be used to represent your _id field, and ```[BsonRequired]```, which states that a field is required.
There are other BSON attributes as well, already in the C# driver, that will be available in the provider in future releases, such as ```[BsonDiscriminator]``` and ```[BsonGuidRepresentation]```.
## Value converters
Lastly, we have value converters. These allow you to convert the type of data as it goes to/from storage.
The one I use the most is a string as the type for the Id property instead of the ObjectId data type, as this can be more beneficial when using web frameworks such as Blazor, where the front end will utilize that property. Before GA, you would have to set your Id property to ObjectId, such as:
```csharp
public ObjectId Id { get; set; }
```
However, you might prefer to use string because of the string-related methods available or for other reasons, so now you can use:
```csharp
public string Id { get; set; }
```
To enable the provider to handle mapping an incoming _id value to the string type, you use HasConversion on the entity type.
```csharp
modelBuilder.Entity<Restaurant>()
    .Property(r => r.Id)
    .HasConversion<ObjectId>();
```
It means if you want to, you can then manipulate the value, such as converting it to lowercase more easily.
```csharp
Console.WriteLine(restaurant.Id.ToLower());
```
There is one thing, though, to take note of and that is when creating documents/entities. Although MongoDB can support not specifying an _id — because if it is missing, one will be automatically generated — EF requires that a key not be null. Since the _id field is the primary key in MongoDB documents, EF will error when creating a document if you don’t provide an id.
This can easily be solved by creating a new ObjectId and casting to a string when creating a new document, such as a new restaurant.
```csharp
Id = ObjectId.GenerateNewId().ToString()
```
## Summary and roadmap
Today is a big milestone in the journey for the official MongoDB Provider for EF Core, but it is by no means the end of the journey. Work is only just beginning!
You have read today about some of the highlights of the release, including value converters, support for embedded documents, and detailed logging to see how a query was generated and used under the hood. But there is not only more in this release, thanks to the hard work of engineers in both MongoDB and Microsoft, but more to come.
The code for the provider is all open source so you can see how it works. But even better, the Readme contains the roadmap, showing you what is available now, what is to come, and what is out of scope.
Plus, it has a link to where you can submit issues or more excitingly, feature requests!
So get started today, taking advantage of your existing EF knowledge and application code, while enjoying the benefits of MongoDB! | md | {
"tags": [
"C#"
],
"pageDescription": "Learn more about the new features in the GA release of the MongoDB Provider for EF Core.\n",
"contentType": "Article"
} | What's New in the MongoDB Provider for EF Core? | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/mongodb-php-symfony-rental-workshop | created | # Symfony and MongoDB Workshop: Building a Rental Listing Application
## Introduction
We are pleased to release our MongoDB and Symfony workshop to help PHP developers build better applications with MongoDB.
The workshop guides participants through developing a rental listing application using the Symfony framework and MongoDB. In this article, we will focus on creating a "Rental" main page feature, showcasing the integration between Symfony and MongoDB.
This project uses MongoDB Doctrine ODM, which is an object-document mapper (ODM) for MongoDB and PHP. It provides a way to work with MongoDB in Symfony, using the same principles as Doctrine ORM for SQL databases. Its main features include:
- Mapping of PHP objects to MongoDB documents.
- Querying MongoDB using an expressive API.
- Integration with Symfony's dependency injection and configuration system.
## Prerequisites
- Basic understanding of PHP and Symfony
- Familiarity with MongoDB and its query language
- PHP 7.4 or higher installed
- Symfony 5.2 or higher installed
- MongoDB Atlas cluster
- Composer for managing PHP dependencies
Ensure you have the MongoDB PHP driver installed and configured with Symfony. For installation instructions, visit MongoDB PHP Driver Installation.
## What you will learn
- Setting up a MongoDB database for use with Symfony
- Creating a document schema using Doctrine MongoDB ODM
- Developing a controller in Symfony to fetch data from MongoDB
- Displaying data in a Twig template
- Best practices for integrating Symfony with MongoDB
## Workshop content
### Step 1: Setting up your project
Follow the guide to set the needed prerequisites.
Those steps cover how to install the needed PHP tools and set up your MongoDB Atlas project and cluster.
### Step 2: Configuring the Symfony project and connecting the database to the ODM
Follow the Quick Start section to connect MongoDB Atlas and build the first project files to connect the ODM classes to the database collections.
### Step 3: Building and testing the application
In this section, you will create the controllers, views, and business logic to list, search, and book rentals:
- Building the application
- Testing the application
### Cloud deployment
A very neat and handy addition is a chapter that shows how to seamlessly deploy your MongoDB Atlas and Symfony application to the Platform.sh cloud.
## Conclusion
This workshop provides hands-on experience in integrating MongoDB with Symfony to build a rental listing application. Participants will learn how to set up their MongoDB environment, define document schemas, interact with the database using Symfony's controllers, and display data using Twig templates.
For further exploration, check out the official Symfony documentation, Doctrine MongoDB guide and MongoDB manual.
Start building with Atlas today! If you have questions or want to discuss things further, visit our community.
## Frequently asked questions (FAQ)
**Q: Who should attend the Symfony and MongoDB rental workshop?**
**A**: This workshop is designed for PHP developers who want to enhance their skills in building web applications using Symfony and MongoDB. A basic understanding of PHP, Symfony, and MongoDB is recommended to get the most out of the workshop.
**Q: What are the prerequisites for the workshop?**
**A**: Participants should have a basic understanding of PHP and Symfony, familiarity with MongoDB and its query language, PHP 7.4 or higher, Symfony 5.2 or higher, a MongoDB Atlas cluster, and Composer installed on their machine.
**Q: What will I learn in the workshop?**
**A**: You will learn how to set up a MongoDB database with Symfony, create a document schema using Doctrine MongoDB ODM, develop a Symfony controller to fetch data from MongoDB, display data in a Twig template, and understand best practices for integrating Symfony with MongoDB.
**Q: How long is the workshop?**
**A**: The duration of the workshop can vary based on the pace of the participants. However, it's designed to be comprehensive yet concise enough to be completed in a few sessions.
**Q: Do I need to install anything before the workshop?**
**A**: Yes, you should have PHP, Symfony, MongoDB Atlas, and Composer installed on your computer. Also, ensure the MongoDB PHP driver is installed and configured with Symfony. Detailed installation instructions are provided in the prerequisites section.
**Q: Is there any support available during the workshop?**
**A**: Yes, support will be available through various channels including the workshop forums, direct messaging with instructors, and the MongoDB community forums.
**Q: Can I access the workshop materials after completion?**
**A**: Yes, all participants will have access to the workshop materials, including code samples and documentation, even after the workshop concludes.
**Q: How does this workshop integrate with MongoDB Atlas?**
**A**: The workshop includes a module on setting up and connecting your application with a MongoDB Atlas cluster, allowing you to experience a real-world scenario of deploying a Symfony application backed by a managed MongoDB service.
**Q: What is Doctrine MongoDB ODM?**
**A**: Doctrine MongoDB ODM (Object-Document Mapper) is a library that provides a way to work with MongoDB in Symfony using the same principles as Doctrine ORM for SQL databases. It offers features like the mapping of PHP objects to MongoDB documents and querying MongoDB with an expressive API.
**Q: Can I deploy the application built during the workshop?**
**A**: Yes, the workshop includes a section on cloud deployment, with instructions on deploying your application using MongoDB Atlas and Symfony to a cloud platform, such as Platform.sh.
**Q: Where can I find more resources to learn about the Symfony and MongoDB integration?**
**A**: For further exploration, check out the official Symfony documentation, Doctrine MongoDB ODM guide, and MongoDB manual. Links to these resources are provided in the conclusion section of the workshop.
| md | {
"tags": [
"MongoDB",
"PHP"
],
"pageDescription": "",
"contentType": "Tutorial"
} | Symfony and MongoDB Workshop: Building a Rental Listing Application | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/languages/go/go-concurrency-graceful-shutdown | created | # Concurrency and Gracefully Closing the MDB Client
In the previous article and the corresponding video, we learned to persist the data that was exchanged with our HTTP server using MongoDB. We used the MongoDB driver for Go to access a **free** MongoDB Atlas cluster and use instances of our data directly with it.
In this article, we are going to focus on a more advanced topic that often gets ignored: how to properly shut down our server. This could be done with the `WaitGroup` type provided by the `sync` package, but I decided to do it using goroutines and channels for the sake of covering them in a more realistic but understandable use case.
In the latest version of the code of this program, we had set a way to properly close the connection to the database. However, we had no way of gracefully stopping the web server. Using Control+C closed the server immediately and that code was never executed.
## Use custom multiplexer
1. Before we are able to customize the way our HTTP server shuts down, we need to organize the way it is built. First, the routes we created are added to the `DefaultServeMux`. We can create our own router instead, and add the routes to it (instead of the old ones).
```go
router := http.NewServeMux()
router.HandleFunc("GET /", func(w http.ResponseWriter, r *http.Request) {
    w.Write([]byte("HTTP caracola"))
})
router.HandleFunc("POST /notes", createNote)
```
2. The router that we have just created, together with other configuration parameters, can be used to create an `http.Server`. Other parameters can also be set: Read the documentation for this one.
```go
server := http.Server{
Addr: serverAddr,
Handler: router,
}
```
3. Use this server to listen to connections, instead of the default one. Here, we don't need parameters in the function because they are provided with the `server` instance, and we are invoking one of its methods.
```go
log.Fatal(server.ListenAndServe())
```
4. If you compile and run this version, it should behave exactly the same as before.
5. The `ListenAndServe()` function returns a specific error when the server is closed with a `Shutdown()`. Let's handle it separately.
```go
if err := server.ListenAndServe(); !errors.Is(err, http.ErrServerClosed) {
log.Fatalf("HTTP server error %v\n", err)
}
```
## Use shutdown function on signal interrupt
The idea is to listen for the operating system's interrupt signal (what pressing Control+C sends) in a goroutine and, when it arrives, call the server's `Shutdown()` method so that the deferred cleanup, including closing the MongoDB client, gets a chance to run.
The repository has all the code for this series so you can follow along. The topics covered in it are the foundations that you need to know to produce full-featured REST APIs, back-end servers, or even microservices written in Go. The road is in front of you and we are looking forward to learning what you will create with this knowledge.
Stay curious. Hack your code. See you next time!
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt6825b2a270c9bd36/664b2922428432eba2198f28/signal.jpg | md | {
"tags": [
"Go"
],
"pageDescription": "A practical explanation on how to use goroutines and channels to achieve a graceful shutdown of the server and get the most out of it.",
"contentType": "Tutorial"
} | Concurrency and Gracefully Closing the MDB Client | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/products/atlas/atlas-search-local-unit-testing | created | # How to Enable Local and Automatic Testing of Atlas Search-Based Features
## Introduction
Atlas Search enables you to perform full-text queries on your MongoDB database. In this post, I want to show how you can
use test containers to write integration tests for Atlas Search-based queries, so that you can run them locally and in
your CI/CD pipeline without the need to connect to an actual MongoDB Atlas instance.
TL;DR: All the source code explained in this post is available on GitHub:
```bash
git clone git@github.com:mongodb-developer/atlas-search-local-testing.git
```
MongoDB Atlas Search is a powerful combination of a document-oriented database and full-text search capabilities. This
is not only valuable for use cases where you want to perform full-text queries on your data. With Atlas Search, it is
possible to easily enable use cases that would be hard to implement in standard MongoDB due to certain limitations.
Some of these limitations hit us in a recent project in which we developed a webshop. The rather obvious requirement
for this shop included that customers should be able to filter products and that the filters should show how many items
are available in each category. Over the course of the project, we kept increasing the number of filters in the
application. This led to two problems:
- We wanted customers to be able to arbitrarily choose filters. Since every filter needs an index to run efficiently,
 and since indexes can't be combined (intersected), this leads to a proliferation of indexes that are hard to
 maintain (in addition, MongoDB allows only 64 indexes per collection, adding another complexity level).
- With an increasing number of filters, the calculation of the facets for indicating the number of available items in
each category also gets more complex and more expensive.
As the developer effort to handle this complexity with standard MongoDB tools grew larger over time, we decided to give
Atlas Search a try. We knew that Atlas Search is an embedded full-text search in MongoDB Atlas based on Apache Lucene
and that Lucene is a mighty tool for text search, but we were actually surprised at how well it supports our filtering use
case.
With Atlas Search, you can create one or more so-called search indexes that contain your documents as a whole or just
parts of them. Therefore, you can use just one index for all of your queries without the need to maintain additional
indexes, e.g., for the most used filter combinations. Plus, you can also use the search index to calculate the facets
needed to show item availability without writing complex queries that are not 100% backed up by an index.
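To make this concrete, here is a rough sketch of what such a facet query can look like, written with PyMongo purely for brevity (the example project later in this article uses JavaScript/TypeScript). The database, collection, and index names mirror that example project, but whether its index definition actually declares facet fields is an assumption; facets must be enabled in the search index definition.
```python
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<user>:<password>@<cluster>.mongodb.net")  # placeholder URI
items = client["atlas-local-test"]["items"]

# A single $searchMeta stage returns the facet buckets in one round trip,
# served entirely by one Atlas Search index.
result = items.aggregate([
    {
        "$searchMeta": {
            "index": "items-index",
            "facet": {
                # Match everything with a non-negative price...
                "operator": {"range": {"path": "price", "gte": 0}},
                # ...and bucket the matches by price range at the same time.
                "facets": {
                    "priceFacet": {
                        "type": "number",
                        "path": "price",
                        "boundaries": [0, 1000, 10000, 100000],
                    }
                },
            },
        }
    }
])
print(list(result))
```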
The downside of this approach is that Atlas Search makes it harder to write unit or integration tests. When you're using
standard MongoDB, you'll easily find some plug-ins for your testing framework that provide an in-memory MongoDB to run
your tests against, or you use some kind of test container to set the stage for your tests. Although Atlas Search
queries seamlessly integrate into MongoDB aggregation pipelines on Atlas, standard MongoDB cannot process this type of
aggregation stage.
To solve this problem, the recently released Atlas CLI allows you to start a local instance of a MongoDB cluster that
can actually handle Atlas Search queries. Internally, it starts two containers, and after deploying your search index via
CLI, you can run your tests locally against these containers. While this allows you to run your tests locally, it can be
cumbersome to set up this local cluster and start/stop it every time you want to run your tests. This has to be done
by each developer on their local machine, adds complexity to the onboarding of new people working on the software, and
is rather hard to integrate into a CI/CD pipeline.
Therefore, we asked ourselves if there is a way to provide a solution
that does not need a manual setup for these containers and enables automatic start and shutdown. Turns out there
is a way to do just that, and the solution we found is, in fact, a rather lean and reusable one that can also help with
automated testing in your project.
## Preparing test containers
The key idea of test containers is to provide a disposable environment for testing. As the name suggests, it is based on
containers, so in the first step, we need a Docker image or a Docker Compose script
to start with.
Atlas CLI uses two Docker images to create an environment that enables testing Atlas Search queries locally:
mongodb/mongodb-enterprise-server is responsible for providing database capabilities and mongodb/mongodb-atlas-search is
providing full-text search capabilities. Both containers are part of a MongoDB cluster, so they need to communicate with
each other.
Based on this information, we can create a docker–compose.yml, where we define two containers, create a network, and set
some parameters in order to enable the containers to talk to each other. The example below shows the complete
docker–compose.yml needed for this article. The naming of the containers is based on the naming convention of the
Atlas Search architecture: The `mongod` container provides the database capabilities while the `mongot` container
provides
the full-text search capabilities. As both containers need to know each other, we use environment variables to let each
of them know where to find the other one. Additionally, they need a shared secret in order to connect to each other, so
this is also defined using another environment variable.
```bash
version: "2"
services:
mongod:
container_name: mongod
image: mongodb/mongodb-enterprise-server:7.0-ubi8
entrypoint: "/bin/sh -c \"echo \"$$KEYFILECONTENTS\" > \"$$KEYFILE\"\n\nchmod 400 \"$$KEYFILE\"\n\n\npython3 /usr/local/bin/docker-entrypoint.py mongod --transitionToAuth --keyFile \"$$KEYFILE\" --replSet \"$$REPLSETNAME\" --setParameter \"mongotHost=$$MONGOTHOST\" --setParameter \"searchIndexManagementHostAndPort=$$MONGOTHOST\"\""
environment:
MONGOTHOST: 10.6.0.6:27027
KEYFILE: /data/db/keyfile
KEYFILECONTENTS: sup3rs3cr3tk3y
REPLSETNAME: local
ports:
- 27017:27017
networks:
network:
ipv4_address: 10.6.0.5
mongot:
container_name: mongot
image: mongodb/mongodb-atlas-search:preview
entrypoint: "/bin/sh -c \"echo \"$$KEYFILECONTENTS\" > \"$$KEYFILE\"\n\n/etc/mongot-localdev/mongot --mongodHostAndPort \"$$MONGOD_HOST_AND_PORT\" --keyFile \"$$KEYFILE\"\""
environment:
MONGOD_HOST_AND_PORT: 10.6.0.5:27017
KEYFILE: /var/lib/mongot/keyfile
KEYFILECONTENTS: sup3rs3cr3tk3y
ports:
- 27027:27027
networks:
network:
ipv4_address: 10.6.0.6
networks:
network:
driver: bridge
ipam:
config:
- subnet: 10.6.0.0/16
gateway: 10.6.0.1
```
Before we can use our environment in tests, we still need to create our search index. On top of that, we need to
initialize the replica set which is needed as the two containers form a cluster. There are multiple ways to achieve
this:
- One way is to use the Testcontainers framework to start the Docker Compose file and a test framework
like jest which allows you to define setup and teardown methods for your tests. In the setup
method, you can initialize the replica set and create the search index. An advantage of this approach is that you
don't
need to start your Docker Compose manually before you run your tests.
- Another way is to extend the Docker Compose file by a third container which simply runs a script to accomplish the
initialization of the replica set and the creation of the search index.
As the first solution offers a better developer experience by allowing tests to be run using just one command, without
the need to start the Docker environment manually, we will focus on that one. Additionally, this enables us to easily
run our tests in our CI/CD pipeline.
The following code snippet shows an implementation of a jest setup function. At first, it starts the Docker Compose
environment we defined before. After the containers have been started, the script builds a connection string to
be able to connect to the cluster using a MongoClient (mind the `directConnection=true` parameter!). The MongoClient
connects to the cluster and issues an admin command to initialize the replica set. Since this command takes
some milliseconds to complete, the script waits for some time before creating the search index. After that, we load an
Atlas Search index definition from the file system and use `createSearchIndex` to create the index on the cluster. The
content of the index definition file can be created by simply exporting the definition from the Atlas web UI. The only
information not included in this export is the index name. Therefore, we need to set it explicitly (important: the name
needs to match the index name in your production code!). After that, we close the database connection used by MongoClient
and save a reference to the Docker environment to tear it down after the tests have run.
```javascript
export default async () => {
const environment = await new DockerComposeEnvironment(".", "docker-compose.yml").up()
const port = environment.getContainer("mongod").getFirstMappedPort()
const host = environment.getContainer("mongod").getHost()
process.env.MONGO_URL = `mongodb://${host}:${port}/atlas-local-test?directConnection=true`
const mongoClient = new MongoClient(process.env.MONGO_URL)
try {
await mongoClient
.db()
.admin()
.command({
replSetInitiate: {
_id: "local",
        members: [{_id: 0, host: "10.6.0.5:27017"}]
}
})
await new Promise((r) => setTimeout(r, 500))
const indexDefinition = path.join(__dirname, "../index.json")
const definition = JSON.parse(fs.readFileSync(indexDefinition).toString("utf-8"))
const collection = await mongoClient.db("atlas-local-test").createCollection("items")
await collection.createSearchIndex({name: "items-index", definition})
} finally {
await mongoClient.close()
}
global.__MONGO_ENV__ = environment
}
```
## Writing and running tests
When you write integration tests for your queries, you need to insert data into your database before running the tests.
Usually, you would insert the needed data at the beginning of your test, run your queries, check the results, and have
some clean-up logic that runs after each test. Because the Atlas Search index is located on another
container (`mongot`) than the actual data (`mongod`), it takes some time until the Atlas Search node has processed the
events from the so-called change stream and $search queries return the expected data. This fact has an impact on the
duration of the tests, as the following three scenarios show:
- We insert our test data in each test as before. As inserting or updating documents does not immediately lead to the
search index being updated (the `mongot` has to listen to events of the change stream and process them), we would need
to
wait some time after writing data before we can be sure that the query returns the expected data. That is, we would
need
to include some kind of sleep() call in every test.
- We create test data for each test suite. Inserting test data once per test suite using a beforeAll() method brings
down
the time we have to wait for the `mongot` container to process the updates. The disadvantage of this approach is
that
we have to prepare the test data in such a way that it is suitable for all tests of this test suite.
- We create global test data for all test suites. Using the global setup method from the last section, we could also
insert data into the database before creating the index. When the initial index creation has been completed, we will
be
ready to run our tests without waiting for some events from the change stream to be processed. But also in this
scenario, your test data management gets more complex as you have to create test data that fits all your test
scenarios.
In our project, we went with the second scenario. We think that it provides a good compromise between runtime requirements
and the complexity of test data management. Plus, we think of these tests as integration tests where we do not need to test
every corner case. We just need to make sure that the query can be executed and returns the expected data.
The exemplary test suite shown below follows the first approach. In beforeAll, some documents are inserted into the
database. After that, the method is forced to “sleep” some time before the actual tests are run.
```javascript
beforeAll(async () => {
await mongoose.connect(process.env.MONGO_URL!)
const itemModel1 = new MongoItem({
name: "Cool Thing",
price: 1337,
})
await MongoItemModel.create(itemModel1)
const itemModel2 = new MongoItem({
name: "Nice Thing",
price: 10000,
})
await MongoItemModel.create(itemModel2)
await new Promise((r) => setTimeout(r, 1000))
})
describe("MongoItemRepository", () => {
describe("getItemsInPriceRange", () => {
it("get all items in given price range", async () => {
const items = await repository.getItemsInPriceRange(1000, 2000)
expect(items).toHaveLength(1)
})
})
})
afterAll(async () => {
await mongoose.connection.collection("items").deleteMany({})
await mongoose.connection.close()
})
```
## Conclusion
Before having a more in-depth look at it, we put Atlas Search aside for all the wrong reasons: We had no need for
full-text searches and thought it was not really possible to run tests on it. After using it for a while, we can
genuinely say that Atlas Search is not only a great tool for applications that use full-text search-based features. It
can also be used to realize more traditional query patterns and reduce the load on the database. As for the testing
part, there have been some great improvements since the feature was initially rolled out and by now, we have reached a
state where testability is not an unsolvable issue anymore, even though it still requires some setup.
With the container
images provided by MongoDB and some of the Docker magic introduced in this article, it is now possible to run
integration tests for these queries locally and also in your CI/CD pipeline. Give it a try if you haven't yet and let us
know how it works for you.
You can find the complete source code for the example described in this post in the
GitHub repository. There's still some room for
improvement that can be incorporated into the test setup. Future updates of the tools might enable us to write tests
without the need to wait some time before we can continue running our tests so that one day, we can all write some
MongoDB Atlas Search integration tests without any hassle.
Questions? Comments? Head to the MongoDB Developer Community to continue the conversation!
| md | {
"tags": [
"Atlas",
"JavaScript",
"Docker"
],
"pageDescription": "In this blog post, you'll learn how to deploy MongoDB Atlas Search locally using Docker containers, index some documents and finally start unit tests to validate your Atlas Search indexes.",
"contentType": "Article"
} | How to Enable Local and Automatic Testing of Atlas Search-Based Features | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/products/atlas/quickstart-mongodb-atlas-python | created | # Quick Start: Getting Started With MongoDB Atlas and Python
## What you will learn
* How to set up MongoDB Atlas in the cloud
* How to load sample data
* How to query sample data using the PyMongo library
## Where's the code?
The Jupyter Notebook for this quickstart tutorial can be found here.
## Step 1: Set up MongoDB Atlas
Here is a quick guide adopted from the official documentation:
### Create a free Atlas account
Sign up for Atlas and log into your account.
### Create a free instance
* You can choose any cloud instance.
* Choose the “FREE” tier.
* Follow the setup wizard and give your instance a name.
* Note your username and password to connect to the instance.
* **Add 0.0.0.0/0 to the IP access list**.
> This makes the instance available from any IP address, which is okay for a test instance.
See the screenshot below for how to add the IP:
## Step 2: Get the connection string
In the Atlas UI, click the "Connect" button on your cluster and follow the instructions to get configuration settings, including the connection string (referred to as `ATLAS_URI` below).
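One simple way to make the connection string available to your notebook or script is through an environment variable; the variable name and the placeholder URI below are just illustrative conventions:
```
import os

# Placeholder URI; replace <username>, <password>, and <cluster> with your own values
ATLAS_URI = os.environ.get(
    "ATLAS_URI",
    "mongodb+srv://<username>:<password>@<cluster>.mongodb.net/?retryWrites=true&w=majority",
)
```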
## Step 3: Install the required libraries
To connect to our Atlas cluster using the Pymongo client, we will need to install the following libraries:
```
! pip install pymongo[srv]==4.6.2
```
We only need one package here:
* **pymongo**: Python library to connect to MongoDB Atlas.
## Step 4: Define the AtlasClient class
This `AtlasClient` class will handle tasks like establishing connections, running queries, etc. It has the following methods:
* **__init__**: Initializes an object of the AtlasClient class, with the MongoDB client (`mongodb_client`) and database name (`database`) as attributes
* **ping:** Used to test if we can connect to our Atlas cluster
* **get_collection**: The MongoDB collection to connect to
* **find:** Returns the results of a query; it takes the name of the collection (`collection`) to query and any search criteria (`filter`) as arguments
```
from pymongo import MongoClient
class AtlasClient ():
    def __init__ (self, atlas_uri, dbname):
        self.mongodb_client = MongoClient(atlas_uri)
self.database = self.mongodb_client[dbname]
## A quick way to test if we can connect to Atlas instance
def ping (self):
self.mongodb_client.admin.command('ping')
def get_collection (self, collection_name):
collection = self.database[collection_name]
return collection
def find (self, collection_name, filter = {}, limit=0):
collection = self.database[collection_name]
items = list(collection.find(filter=filter, limit=limit))
return items
```
## Step 5: Connect to MongoDB Atlas
In this phase, we will establish a connection to the **embedded_movies** collection within the **sample_mflix** database. To confirm that our connection is successful, we'll perform a `ping()` operation.
```
DB_NAME = 'sample_mflix'
COLLECTION_NAME = 'embedded_movies'
atlas_client = AtlasClient (ATLAS_URI, DB_NAME)
atlas_client.ping()
print ('Connected to Atlas instance! We are good to go!')
```
> If you get a “Connection failed” error, make sure **0.0.0.0/0** is added as an allowed IP address to connect (see Step 1).
## Step 6: Run a sample query
Let's execute a search for movies using the `find()` method, passing it two arguments. The first argument, `collection_name`, determines the specific collection to be queried (in this case, **embedded_movies**). The second argument, `limit`, restricts the search to return only the specified number of results (in this case, **5**).
```
movies = atlas_client.find (collection_name=COLLECTION_NAME, limit=5)
print (f"Found {len (movies)} movies")
# print out movie info
for idx, movie in enumerate (movies):
print(f'{idx+1}\nid: {movie["_id"]}\ntitle: {movie["title"]},\nyear: {movie["year"]}\nplot: {movie["plot"]}\n')
```
The results are returned as a list and we are simply iterating over it and printing out the results.
```
Found 5 movies
1
id: 573a1390f29313caabcd5293
title: The Perils of Pauline,
year: 1914
plot: Young Pauline is left a lot of money when her wealthy uncle dies. However, her uncle's secretary has been named as her guardian until she marries, at which time she will officially take ...
2
id: 573a1391f29313caabcd68d0
title: From Hand to Mouth,
year: 1919
plot: A penniless young man tries to save an heiress from kidnappers and help her secure her inheritance.
...
```
### Query by an attribute
If we want to query by a certain attribute, we can pass a `filter` argument to the `find()` method. `filter` is a dictionary with key-value pairs. So to find movies from the year 1999, we set the filter as `{"year" : 1999}`.
```
movies_1999 = atlas_client.find(collection_name=COLLECTION_NAME, filter={"year": 1999})
```
We see that 81 movies are returned as the result. Let’s print out the first few.
```
======= Finding movies from year 1999 =========================
Found 81 movies from the year 1999. Here is a sample...
1
id: 573a139af29313caabcf0cfd
title: Three Kings,
year: 1999
plot: In the aftermath of the Persian Gulf War, 4 soldiers set out to steal gold that was stolen from Kuwait, but they discover people who desperately need their help.
2
id: 573a139af29313caabcf0e61
title: Beowulf,
year: 1999
plot: A sci-fi update of the famous 6th Century poem. In a beseiged land, Beowulf must battle against the hideous creature Grendel and his vengeance seeking mother.
…
```
## Conclusion
In this quick start, we learned how to set up MongoDB Atlas in the cloud, loaded some sample data into our cluster, and queried the data using the Pymongo client. To build upon what you have learned in this quickstart, here are a few more resources:
* Atlas getting started guide
* Free course on MongoDB and Python
* PyMongo library documentation
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt97444a9ad37a9bb2/661434881952f0449cfc0b9b/image3.png
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt72e095b20fd4fb81/661434c7add0c9d3e85e3a52/image1.png
[3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt185b9d1d57e14c1f/661434f3ae80e231a5823e13/image5.png
[4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt02b47ac4892e6c9a/6614355eca5a972886555722/image4.png
[5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt661fc7099291b5de/6614357fab1db5330f658288/image2.png | md | {
"tags": [
"Atlas",
"Python"
],
"pageDescription": "In this tutorial, we will learn how to setup MongoDB Atlas in the Cloud, load sample data and query it using the PyMongo library.",
"contentType": "Quickstart"
} | Quick Start: Getting Started With MongoDB Atlas and Python | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/semantic-search-openai | created | # Enable Generative AI and Semantic Search Capabilities on Your Database With MongoDB Atlas and OpenAI
### Goal
Our goal for this tutorial is to leverage available and popular open-source LLMs in the market and add the capabilities and power of those LLMs in the same database as your operational (or in other words, primary) workload.
### Overview
Creating a large language model (LLM) is not a one- or two-day process. It can take years to build a tuned and optimized model. The good news is that we already have a lot of LLMs available on the market, including BERT, GPT-3, GPT-4, and Claude, plus the many open-source models hosted on Hugging Face, and we can make good use of them in different ways.
LLMs provide vector representations of text data, capturing semantic relationships and understanding the context of language. These vector representations can be leveraged for various tasks, including vector search, to find similar or relevant text items within datasets.
Vector representations of text data can be used in capturing semantic similarities, search and retrieval, document retrieval, recommendation systems, text clustering and categorization, and anomaly detection.
In this article, we will explore the semantic search capability with vector representations of text data with a real-world use case. We will use the Airbnb sample dataset from MongoDB wherein we will try to find a room of our choice by giving an articulated prompt.
We will use MongoDB Atlas as a data platform, where we will have our sample dataset (an operational workload) of Airbnb and will enable search and vector search capabilities on top of it.
## What is semantic search?
Semantic search is an information retrieval technique that improves the user’s search experience by understanding the intent or meaning behind the queries and the content. Semantic search focuses on context and semantics rather than exact word match, like traditional search would. Learn more about semantic search and how it is different from Google search and text-based search.
## What is vector search?
Vector search is a technique used for information retrieval and recommendation systems to find items that are similar to query items or vectors. Data items are represented as high-dimensional vectors, and similarity between items is calculated based on the mathematical properties of these vectors. This is a very useful and commonly used approach in content recommendation, image retrieval, and document search.
Atlas Vector Search enables searching through unstructured data. You can store vector embeddings generated by popular machine learning models like OpenAI and Hugging Face, utilizing them for semantic search and personalized user experiences, creating RAGs, and many other use cases.
## Real-time use case
We have an Airbnb dataset that has a nice description written for each of the properties. We will let users express their choice of location in words — for example, “Nice cozy, comfy room near beach,” “3 bedroom studio apartment for couples near beach,” “Studio with nice city view,” etc. — and the database will return the relevant results based on the sentence and keywords added.
What it will do under the hood is make an API call to the LLM we’re using (OpenAI) and get the vector embeddings for the search/prompt that we passed on/queried for (like we do in the ChatGPT interface). It will then return the vector embeddings, and we will be able to search with those embeddings against our operational dataset which will enable our database to return semantic/contextual results.
Within a few clicks and with the power of existing, very powerful LLMs, we can give the best user search experience using our existing operational dataset.
### Initial setup
- Sign up for OpenAI API and get the API key.
- Sign up on MongoDB Atlas, if you haven’t already.
- Spin up the free tier M0 shared cluster.
- Create a database called **sample_airbnb** and add a single dummy record in the collection called **listingsAndReviews**.
- Use a machine with Python’s latest version (3.11.1 was used while preparing this article) and the PyMongo driver installed (the latest version — 4.6.1 was used while preparing this article).
At this point, assuming the initial setup is done, let's jump right into the integration steps.
### Integration steps
- Create a trigger to add/update vector embeddings.
- Create a variable to store OpenAI credentials. (We will use this for retrieval in the trigger code.)
- Create an Atlas search index.
- Load/insert your data.
- Query the database.
We will follow through each of the integration steps mentioned above with helpful instructions below so that you can find the relevant screens while executing it and can easily configure your own environment.
## Create a trigger to add/update vector embeddings
On the left menu of your Atlas cluster, click on Triggers.
Click on **Add Trigger** which will be visible in the top right corner of the triggers page.
Select the appropriate options on the **Add Trigger** page, as shown below.
This is where the trigger code needs to be added; the code itself is shown in the next step.
Add the following code in the function area, visible in Step 3 above, to add/update vector embeddings for documents which will be triggered when a new document is created or an existing document is updated.
```
exports = async function(changeEvent) {
// Get the full document from the change event.
const doc = changeEvent.fullDocument;
// Define the OpenAI API url and key.
const url = 'https://api.openai.com/v1/embeddings';
// Use the name you gave the value of your API key in the "Values" utility inside of App Services
const openai_key = context.values.get("openAI_value");
try {
console.log(`Processing document with id: ${doc._id}`);
// Call OpenAI API to get the embeddings.
let response = await context.http.post({
url: url,
headers: {
'Authorization': [`Bearer ${openai_key}`],
'Content-Type': ['application/json']
},
body: JSON.stringify({
// The field inside your document that contains the data to embed. Here, it is the "description" field of the Airbnb listing document.
input: doc.description,
model: "text-embedding-3-small"
})
});
// Parse the JSON response
let responseData = EJSON.parse(response.body.text());
// Check the response status.
if(response.statusCode === 200) {
console.log("Successfully received embedding.");
const embedding = responseData.data[0].embedding;
// Use the name of your MongoDB Atlas Cluster
const collection = context.services.get("AtlasSearch").db("sample_airbnb").collection("listingsAndReviews");
// Update the document in MongoDB.
const result = await collection.updateOne(
{ _id: doc._id },
// The name of the new field you'd like to contain your embeddings.
{ $set: { description_embedding: embedding }}
);
if(result.modifiedCount === 1) {
console.log("Successfully updated the document.");
} else {
console.log("Failed to update the document.");
}
} else {
console.log(`Failed to receive embedding. Status code: ${response.statusCode}`);
}
} catch(err) {
console.error(err);
}
};
```
At this point, with the above code block and configuration in place, the function will be triggered whenever a document is inserted or updated in the **listingsAndReviews** collection of our **sample_airbnb** database. It will call the OpenAI API, fetch the embeddings of the **description** field, and store the result in the **description_embedding** field of the **listingsAndReviews** collection.
Now that we’ve configured a trigger, let's create variables to store the OpenAI credentials in the next step.
## Create a variable to store OpenAI credentials
Once you’ve created the cluster, you will see the **App Services** tab in the top left area next to **Charts**.
Click on **App Services**. You will see the trigger that you created in the first step.
(Screenshot: the App Services tab, used for configuring environment variables for the trigger.)
Click on the trigger present and it will open up a page where you can click on the **Values** tab present on the left menu, as shown below.
Click on **Create New Value**. Create a secret (for example, **openAI_key**) that holds your OpenAI API key, and then create a value named **openAI_value** that is linked to that secret. The trigger code reads this value with `context.values.get("openAI_value")`.
We’ve prepared our app service to fetch API credentials and have also added a trigger function that will be triggered/executed upon document inserts or updates.
Now, we will move on to creating an Atlas search index, loading MongoDB’s provided sample data, and querying the database.
## Create an Atlas search index
Click on the cluster name and then the search tab from the cluster page.
Click on **Create Index** as shown below to create an Atlas search index.
Select JSON Editor and paste the JSON object.
Add a vector search index definition, as shown below.
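The screenshots are not reproduced here, but a minimal definition along the following lines should work. It assumes the field populated by the trigger (`description_embedding`), the 1,536 dimensions that OpenAI's `text-embedding-3-small` model returns by default, and cosine similarity; name the index `default` so it matches the `$vectorSearch` query used later, and check the Atlas Vector Search documentation for the exact format supported by your cluster version:
```
{
  "fields": [
    {
      "type": "vector",
      "path": "description_embedding",
      "numDimensions": 1536,
      "similarity": "cosine"
    }
  ]
}
```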
We’ve created the Atlas search index in the above step. Now, we’re all ready to load the data in our prepared environment. So as a next step, let's load sample data.
## Load/insert your data
As a prerequisite for this step, we need to make sure that the cluster is up and running and the screen is visible, as shown in Step 1 below. Make sure that the collection named **listingsAndReviews** is created under the **sample_airbnb** database. If you’ve not created it yet, create it by switching to the **Data Explorer** tab.
We can load the sample dataset from the Atlas cluster option itself, as shown below.
Once you load the data, verify whether the embedding field was added in the collection.
At this point, we’ve loaded the sample dataset. It should have triggered the code we configured to be triggered upon insert or updates. As a result of that, the **description_embedding** field will be added, containing an array of vectors.
Now that we’ve prepared everything, let’s jump right into querying our dataset and see the exciting results we get from our user prompt. In the next section of querying the database, we will pass our sample user prompt directly to the Python script.
## Query the database
As a prerequisite for this step, you will need a runtime for the Python script. It can be your local machine, an ec2 instance on AWS, or you can go with AWS Lambda — whichever option is most convenient. Make sure you’ve installed PyMongo in the environment of your choice. The following code block can be written in a Jupyter notebook or VSCode and can be executed from Jupyter runtime or via the command line, depending on which option you go with. The following code block demonstrates how you can perform an Atlas vector search and retrieve records from your operational database by finding embeddings of user prompts received from the OpenAI API.
```
import pymongo
import requests
import pprint
def get_vector_embeddings_from_openai(query):
openai_api_url = "https://api.openai.com/v1/embeddings"
openai_api_key = ""
data = {
'input': query,
'model': "text-embedding-3-small"
}
headers = {
'Authorization': 'Bearer {0}'.format(openai_api_key),
'Content-Type': 'application/json'
}
response = requests.post(openai_api_url, json=data, headers=headers)
embedding = []
if response.status_code == 200:
embedding = response.json()['data'][0]['embedding']
return embedding
def find_similar_documents(embedding):
mongo_url = 'mongodb+srv://<username>:<password>@<cluster-url>/?retryWrites=true&w=majority'
client = pymongo.MongoClient(mongo_url)
db = client.sample_airbnb
collection = db["listingsAndReviews"]
pipeline = [
{
"$vectorSearch": {
"index": "default",
"path": "descriptions_embedding",
“queryVector”: “embedding”,
“numCandidates”: 150,
“limit”: 10
}
},
{
"$project": {
"_id": 0,
"description": 1
}
}
]
documents = collection.aggregate(pipeline)
return documents
def main():
query = "Best for couples, nearby beach area with cool weather"
try:
embedding = get_vector_embeddings_from_openai(query)
documents = find_similar_documents(embedding)
print("Documents")
pprint.pprint(list(documents))
except Exception as e:
print("Error occured: {0}".format(e))
main()
```
## Output
(Screenshot: Python script output showing the vector search results.)
We did a search for “best for couples, nearby beach area with cool weather” from the code block. Check out the interesting results we got which are contextually and semantically matched and closely match with user expectations.
To summarize, we used Atlas App Services to configure the trigger and the OpenAI API key. In the trigger code, we wrote the logic to fetch embeddings from OpenAI and store them in newly inserted or updated documents. With these steps, we have enabled semantic search capabilities on our primary workload dataset which, in this case, is the Airbnb sample data.
If you’ve any doubts or questions or want to discuss this or any new use cases further, you can reach out to me on LinkedIn or email me. | md | {
"tags": [
"MongoDB",
"Python",
"AI"
],
"pageDescription": "Learn how to enable Generative AI and Semantic Search capabilities on your database using MongoDB Atlas and OpenAI.",
"contentType": "Tutorial"
} | Enable Generative AI and Semantic Search Capabilities on Your Database With MongoDB Atlas and OpenAI | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/products/atlas/choose-embedding-model-rag | created | # RAG Series Part 1: How to Choose the Right Embedding Model for Your Application
If you are building Generative AI (GenAI) applications in 2024, you’ve probably heard the term “embeddings” a few times by now and are seeing new embedding models hit the shelf every week. So why do so many people suddenly care about embeddings, a concept that has existed since the 1950s? And if embeddings are so important and you must use them, how do you choose among the vast number of options out there?
This tutorial will cover the following:
- What are embeddings?
- Importance of embeddings in RAG applications
- How to choose the right embedding model for your RAG application
- Evaluating embedding models
This tutorial is Part 1 of a multi-part series on Retrieval Augmented Generation (RAG), where we start with the fundamentals of building a RAG application, and work our way to more advanced techniques for RAG. The series will cover the following:
- Part 1: How to choose the right embedding model for your application
- Part 2: How to evaluate your RAG application
- Part 3: Improving RAG via better chunking and re-ranking
- Part 4: Improving RAG using metadata extraction and filtering
- Part 5: Optimizing RAG using fact extraction and prompt compression
## What are embeddings and embedding models?
**An embedding is an array of numbers (a vector) representing a piece of information, such as text, images, audio, video, etc.** Together, these numbers capture semantics and other important features of the data. The immediate consequence of doing this is that semantically similar entities map close to each other while dissimilar entities map farther apart in the vector space. For clarity, see the image below for a depiction of a high-dimensional vector space:
## How to choose the right embedding model for your RAG application
A good place to start is the Massive Text Embedding Benchmark (MTEB) leaderboard on Hugging Face. It is the most up-to-date list of proprietary and open-source text embedding models, accompanied by statistics on how each model performs on various embedding tasks such as retrieval, summarization, etc.
> Evaluations of this magnitude for multimodal models are just emerging (see the MME benchmark) so we will only focus on text embedding models for this tutorial. However, all the guidance here on choosing an embedding model also applies to multimodal models.
Benchmarks are a good place to begin but bear in mind that these results are self-reported and have been benchmarked on datasets that might not accurately represent the data you are dealing with. It is also possible that some models may include the MTEB datasets in their training data since they are publicly available. So even if you choose a model based on benchmark results, we recommend evaluating it on your dataset. We will see how to do this later in the tutorial, but first, let’s take a closer look at the leaderboard.
Here’s a snapshot of the top 10 models on the leaderboard currently:
The key columns to note are as follows:
- **Retrieval Average**: Average Normalized Discounted Cumulative Gain (NDCG) @ 10 across several datasets. NDCG is a common metric to measure the performance of retrieval systems. A higher NDCG indicates a model that is better at ranking relevant items higher in the list of retrieved results.
- **Model Size**: Size of the model (in GB). It gives an idea of the computational resources required to run the model. While retrieval performance scales with model size, it is important to note that model size also has a direct impact on latency. The latency-performance trade-off becomes especially important in a production setup.
- **Max Tokens**: Number of tokens that can be compressed into a single embedding. You typically don’t want to put more than a single paragraph of text (~100 tokens) into a single embedding. So even models with max tokens of 512 should be more than enough.
- **Embedding Dimensions**: Length of the embedding vector. Smaller embeddings offer faster inference and are more storage-efficient, while more dimensions can capture nuanced details and relationships in the data. Ultimately, we want a good trade-off between capturing the complexity of data and operational efficiency.
The top 10 models on the leaderboard contain a mix of small vs large and proprietary vs open-source models. Let’s compare some of these to find the best embedding model for our dataset.
### Before we begin
Here are some things to note about our evaluation experiment.
#### Dataset
We use MongoDB’s cosmopedia-wikihow-chunked dataset, available on Hugging Face, which consists of pre-chunked WikiHow-style articles.
#### Models evaluated
- voyage-lite-02-instruct: A proprietary embedding model from VoyageAI
- text-embedding-3-large: One of OpenAI’s latest proprietary embedding models
- UAE-Large-V1: A small-ish (335M parameters) open-source embedding model
> We also attempted to evaluate SFR-Embedding-Mistral, currently the #1 model on the MTEB leaderboard, but the hardware below was not sufficient to run this model. This model and other 14+ GB models on the leaderboard will likely require a/multiple GPU(s) with at least 32 GB of total memory, which means higher costs and/or getting into distributed inference. While we haven’t evaluated this model in our experiment, this is already a good data point when thinking about cost and resources.
#### Evaluation metrics
We used the following metrics to evaluate embedding performance:
- **Embedding latency**: Time taken to create embeddings
- **Retrieval quality**: Relevance of retrieved documents to the user query
#### Hardware used
1 NVIDIA T4 GPU, 16GB Memory
#### Where’s the code?
Evaluation notebooks for each of the above models are available:
- voyage-lite-02-instruct
- text-embedding-3-large
- UAE-Large-V1
To run a notebook, click on the **Open in Colab** shield at the top of the notebook. The notebook will open in Google Colaboratory.
## Step 3: Load the dataset
We will work with the cosmopedia-wikihow-chunked dataset described above. The dataset is quite large (1M+ documents). So we will stream it and grab the first 25k records, instead of downloading the entire dataset to disk.
```
from datasets import load_dataset
import pandas as pd
# Use streaming=True to load the dataset without downloading it fully
data = load_dataset("MongoDB/cosmopedia-wikihow-chunked", split="train", streaming=True)
# Get first 25k records from the dataset
data_head = data.take(25000)
df = pd.DataFrame(data_head)
# Use this if you want the full dataset
# data = load_dataset("MongoDB/cosmopedia-wikihow-chunked", split="train")
# df = pd.DataFrame(data)
```
## Step 4: Data analysis
Now that we have our dataset, let’s perform some simple data analysis and run some sanity checks on our data to ensure that we don’t see any obvious errors:
```
# Ensuring length of dataset is what we expect i.e. 25k
len(df)
# Previewing the contents of the data
df.head()
# Only keep records where the text field is not null
df = df[df["text"].notna()]
# Number of unique documents in the dataset
df.doc_id.nunique()
```
## Step 5: Create embeddings
Now, let’s create embedding functions for each of our models.
For **voyage-lite-02-instruct**:
```
def get_embeddings(docs: List[str], input_type: str, model:str="voyage-lite-02-instruct") -> List[List[float]]:
"""
Get embeddings using the Voyage AI API.
Args:
docs (List[str]): List of texts to embed
input_type (str): Type of input to embed. Can be "document" or "query".
model (str, optional): Model name. Defaults to "voyage-lite-02-instruct".
Returns:
List[List[float]]: Array of embeddings
"""
response = voyage_client.embed(docs, model=model, input_type=input_type)
return response.embeddings
```
The embedding function above takes a list of texts (`docs`) and an `input_type` as arguments and returns a list of embeddings. The `input_type` can be `document` or `query` depending on whether we are embedding a list of documents or user queries. Voyage uses this value to prepend the inputs with special prompts to enhance retrieval quality.
For **text-embedding-3-large**:
```
def get_embeddings(docs: List[str], model: str="text-embedding-3-large") -> List[List[float]]:
"""
Generate embeddings using the OpenAI API.
Args:
docs (List[str]): List of texts to embed
model (str, optional): Model name. Defaults to "text-embedding-3-large".
Returns:
List[List[float]]: Array of embeddings
"""
# replace newlines, which can negatively affect performance.
docs = [doc.replace("\n", " ") for doc in docs]
response = openai_client.embeddings.create(input=docs, model=model)
response = [r.embedding for r in response.data]
return response
```
The embedding function for the OpenAI model is similar to the previous one, with some key differences — there is no `input_type` argument, and the API returns a list of embedding objects, which need to be parsed to get the final list of embeddings. A sample response from the API looks as follows:
```
{
"data": [
{
"embedding": [
0.018429679796099663,
-0.009457024745643139
.
.
.
],
"index": 0,
"object": "embedding"
}
],
"model": "text-embedding-3-large",
"object": "list",
"usage": {
"prompt_tokens": 183,
"total_tokens": 183
}
}
```
For **UAE-large-V1**:
```
from typing import List
from transformers import AutoModel, AutoTokenizer
import torch
# Instruction to append to user queries, to improve retrieval
RETRIEVAL_INSTRUCT = "Represent this sentence for searching relevant passages:"
# Check if CUDA (GPU support) is available, and set the device accordingly
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
# Load the UAE-Large-V1 model from the Hugging Face
model = AutoModel.from_pretrained('WhereIsAI/UAE-Large-V1').to(device)
# Load the tokenizer associated with the UAE-Large-V1 model
tokenizer = AutoTokenizer.from_pretrained('WhereIsAI/UAE-Large-V1')
# Decorator to disable gradient calculations
@torch.no_grad()
def get_embeddings(docs: List[str], input_type: str) -> List[List[float]]:
"""
Get embeddings using the UAE-Large-V1 model.
Args:
docs (List[str]): List of texts to embed
input_type (str): Type of input to embed. Can be "document" or "query".
Returns:
List[List[float]]: Array of embeddings
"""
# Prepend retrieval instruction to queries
if input_type == "query":
docs = ["{}{}".format(RETRIEVAL_INSTRUCT, q) for q in docs]
# Tokenize input texts
inputs = tokenizer(docs, padding=True, truncation=True, return_tensors='pt', max_length=512).to(device)
# Pass tokenized inputs to the model, and obtain the last hidden state
last_hidden_state = model(**inputs, return_dict=True).last_hidden_state
# Extract embeddings from the last hidden state
embeddings = last_hidden_state[:, 0]
return embeddings.cpu().numpy()
```
The UAE-Large-V1 model is an open-source model available on the Hugging Face Model Hub. First, we will need to download the model and its tokenizer from Hugging Face. We do this using the Auto classes — namely, `AutoModel` and `AutoTokenizer` from the Transformers library — which automatically infer the underlying model architecture, in this case, BERT. Next, we load the model onto the GPU using `.to(device)` since we have one available.
The embedding function for the UAE model, much like the Voyage model, takes a list of texts (`docs`) and an `input_type` as arguments and returns a list of embeddings. A special prompt is prepended to queries for better retrieval as well.
The input texts are first tokenized, which includes padding (for short sequences) and truncation (for long sequences) as needed to ensure that the length of inputs to the model is consistent — 512, in this case, defined by the `max_length` parameter. The `pt` value for `return_tensors` indicates that the output of tokenization should be PyTorch tensors.
The tokenized texts are then passed to the model for inference and the last hidden layer (`last_hidden_state`) is extracted. This layer is the model’s final learned representation of the entire input sequence. The final embedding, however, is extracted only from the first token, which is often a special token (`[CLS]` in BERT) in transformer-based models. This token serves as an aggregate representation of the entire sequence due to the self-attention mechanism in transformers, where the representation of each token in a sequence is influenced by all other tokens. Finally, we move the embeddings back to the CPU using `.cpu()` and convert the PyTorch tensors to `numpy` arrays using `.numpy()`.
## Step 6: Evaluation
As mentioned previously, we will evaluate the models based on embedding latency and retrieval quality.
### Measuring embedding latency
To measure embedding latency, we will create a local vector store, which is essentially a list of embeddings for the entire dataset. Latency here is defined as the time it takes to create embeddings for the full dataset.
```
from tqdm.auto import tqdm
# Get all the texts in the dataset
texts = df"text"].tolist()
# Number of samples in a single batch
batch_size = 128
embeddings = []
# Generate embeddings in batches
for i in tqdm(range(0, len(texts), batch_size)):
end = min(len(texts), i+batch_size)
batch = texts[i:end]
# Generate embeddings for current batch
batch_embeddings = get_embeddings(batch)
# Add to the list of embeddings
embeddings.extend(batch_embeddings)
```
We first create a list of all the texts we want to embed and set the batch size. The voyage-lite-02-instruct model has a batch size limit of 128, so we use the same for all models, for consistency. We iterate through the list of texts, grabbing `batch_size` number of samples in each iteration, getting embeddings for the batch, and adding them to our "vector store".
The time taken to generate embeddings on our hardware looked as follows:
| Model | Batch Size | Dimensions | Time |
| ----------------------- | ---------- | ---------- | ------- |
| text-embedding-3-large | 128 | 3072 | 4m 17s |
| voyage-lite-02-instruct | 128 | 1024 | 11m 14s |
| UAE-large-V1 | 128 | 1024 | 19m 50s |
The OpenAI model has the lowest latency. However, note that it also has three times the number of embedding dimensions compared to the other two models. OpenAI also charges by tokens used, so both the storage and inference costs of this model can add up over time. While the UAE model is the slowest of the lot (despite running inference on a GPU), there is room for optimizations such as quantization, distillation, etc., since it is open-source.
### Measuring retrieval quality
To evaluate retrieval quality, we use a set of questions based on themes seen in our dataset. For real applications, however, you will want to curate a set of "cannot-miss" questions — i.e. questions that you would typically expect users to ask from your data. For this tutorial, we will qualitatively evaluate the relevance of retrieved documents as a measure of quality, but we will explore metrics and techniques for quantitative evaluations in a following tutorial.
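The linked notebooks contain the exact retrieval code. As a rough sketch of what that step looks like, assuming the `embeddings` list and `texts` built earlier and the `get_embeddings` function defined above (drop the `input_type` argument for the OpenAI version), cosine similarity against the local vector store is enough for this qualitative check:
```
import numpy as np

def retrieve_top_k(query: str, k: int = 3):
    # Embed the query with the same model used for the documents
    query_embedding = np.array(get_embeddings([query], input_type="query")[0])
    doc_embeddings = np.array(embeddings)
    # Cosine similarity between the query and every chunk in the local vector store
    scores = doc_embeddings @ query_embedding / (
        np.linalg.norm(doc_embeddings, axis=1) * np.linalg.norm(query_embedding)
    )
    # Return the k highest-scoring chunks along with their scores
    top_idx = np.argsort(scores)[::-1][:k]
    return [(float(scores[i]), texts[i]) for i in top_idx]

for score, text in retrieve_top_k("Give me some tips to improve my mental health."):
    print(f"{score:.3f} | {text[:120]}...")
```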
Here are the main themes (generated using ChatGPT) covered by the top three documents retrieved by each model for our queries:
> 😐 denotes documents that we felt weren’t as relevant to the question. Sentences that contributed to this verdict have been highlighted in bold.
**Query**: _Give me some tips to improve my mental health._
| **voyage-lite-02-instruct** | **text-embedding-3-large** | **UAE-large-V1** |
| :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 😐 Regularly **reassess treatment efficacy** and modify plans as needed. Track mood, thoughts, and behaviors; share updates with therapists and support network. Use a multifaceted approach to **manage suicidal thoughts**, involving resources, skills, and connections. | Eat balanced, exercise, sleep well. Cultivate relationships, engage socially, set boundaries. Manage stress with effective coping mechanisms. | Prioritizing mental health is essential, not selfish. Practice mindfulness through meditation, journaling, and activities like yoga. Adopt healthy habits for better mood, less anxiety, and improved cognition. |
| Recognize early signs of stress, share concerns, and develop coping mechanisms. Combat isolation by nurturing relationships and engaging in social activities. Set boundaries, communicate openly, and seek professional help for social anxiety. | Prioritizing mental health is essential, not selfish. Practice mindfulness through meditation, journaling, and activities like yoga. Adopt healthy habits for better mood, less anxiety, and improved cognition. | Eat balanced, exercise regularly, get 7-9 hours of sleep. Cultivate positive relationships, nurture friendships, and seek new social opportunities. Manage stress with effective coping mechanisms. |
| Prioritizing mental health is essential, not selfish. Practice mindfulness through meditation, journaling, and activities like yoga. Adopt healthy habits for better mood, less anxiety, and improved cognition. | Acknowledging feelings is a step to address them. Engage in self-care activities to boost mood and health. Make self-care consistent for lasting benefits. | 😐 **Taking care of your mental health is crucial** for a fulfilling life, productivity, and strong relationships. **Recognize the importance of mental health** in all aspects of life. Managing mental health **reduces the risk of severe psychological conditions**. |
While the results cover similar themes, the Voyage AI model keys in heavily on seeking professional help, while the UAE model covers slightly more about why taking care of your mental health is important. The OpenAI model is the one that consistently retrieves documents that cover general tips for improving mental health.
**Query**: _Give me some tips for writing good code._
| **voyage-lite-02-instruct** | **text-embedding-3-large** | **UAE-large-V1** |
| :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Strive for clean, maintainable code with consistent conventions and version control. Utilize linters, static analyzers, and document work for quality and collaboration. Embrace best practices like SOLID and TDD to enhance design, scalability, and extensibility. | Strive for clean, maintainable code with consistent conventions and version control. Utilize linters, static analyzers, and document work for quality and collaboration. Embrace best practices like SOLID and TDD to enhance design, scalability, and extensibility. | Strive for clean, maintainable code with consistent conventions and version control. Utilize linters, static analyzers, and document work for quality and collaboration. Embrace best practices like SOLID and TDD to enhance design, scalability, and extensibility. |
| 😐 **Code and test core gameplay mechanics** like combat and quest systems; debug and refine for stability. Use modular coding, version control, and object-oriented principles for effective **game development**. Playtest frequently to find and fix bugs, seek feedback, and prioritize significant improvements. | 😐 **Good programming needs dedication,** persistence, and patience. **Master core concepts, practice diligently,** and engage with peers for improvement. **Every expert was once a beginner**—keep pushing forward. | Read programming books for comprehensive coverage and deep insights, choosing beginner-friendly texts with pathways to proficiency. Combine reading with coding to reinforce learning; take notes on critical points and unfamiliar terms. Engage with exercises and challenges in books to apply concepts and enhance skills. |
| 😐 Monitor social media and newsletters for current **software testing insights**. Participate in networks and forums to exchange knowledge with **experienced testers**. Regularly **update your testing tools** and methods for enhanced efficiency. | Apply learning by working on real projects, starting small and progressing to larger ones. Participate in open-source projects or develop your applications to enhance problem-solving. Master debugging with IDEs, print statements, and understanding common errors for productivity. | 😐 **Programming is key in various industries**, offering diverse opportunities. **This guide covers programming fundamentals**, best practices, and improvement strategies. **Choose a programming language based on interests, goals, and resources.** |
All the models seem to struggle a bit with this question. They all retrieve at least one document that is not as relevant to the question. However, it is interesting to note that all the models retrieve the same document as their number one.
**Query**: _What are some environment-friendly practices I can incorporate in everyday life?_
| **voyage-lite-02-instruct** | **text-embedding-3-large** | **UAE-large-V1** |
| :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 😐 Conserve resources by reducing waste, reusing, and recycling, **reflecting Jawa culture's values** due to their planet's limited resources. Monitor consumption (e.g., water, electricity), repair goods, and join local environmental efforts. Eco-friendly practices **enhance personal and global well-being,** **aligning with Jawa values.** | Carry reusable bags for shopping, keeping extras in your car or bag. Choose sustainable alternatives like reusable water bottles and eco-friendly cutlery. Support businesses that minimize packaging and use biodegradable materials. | Educate others on eco-friendly practices; lead by example. Host workshops or discussion groups on sustainable living.Embody respect for the planet; every effort counts towards improvement. |
| Learn and follow local recycling rules, rinse containers, and educate others on proper recycling. Opt for green transportation like walking, cycling, or electric vehicles, and check for incentives. Upgrade to energy-efficient options like LED lights, seal drafts, and consider renewable energy sources. | Opt for sustainable transportation, energy-efficient appliances, solar panels, and eat less meat to reduce emissions. Conserve water by fixing leaks, taking shorter showers, and using low-flow fixtures. Water conservation protects ecosystems, ensures food security, and reduces infrastructure stress. | Carry reusable bags for shopping, keeping extras in your car or bag. Choose sustainable alternatives like reusable water bottles and eco-friendly cutlery. Support businesses that minimize packaging and use biodegradable materials. |
| 😐 **Consistently implement these steps**. **Actively contribute to a cleaner, greener world**. **Support resilience for future generations.** | Conserve water with low-flow fixtures, fix leaks, and use rainwater for gardening. Compost kitchen scraps to reduce waste and enrich soil, avoid meat and dairy. Shop locally at farmers markets and CSAs to lower emissions and support local economies. | Join local tree-planting events and volunteer at community gardens or restoration projects. Integrate native plants into landscaping to support pollinators and remove invasive species. Adopt eco-friendly transportation methods to decrease fossil fuel consumption. |
We see a similar trend with this query as with the previous two examples — the OpenAI model consistently retrieves documents that provide the most actionable tips, followed by the UAE model. The Voyage model provides more high-level advice.
Overall, based on our preliminary evaluation, OpenAI’s text-embedding-3-large model comes out on top. When working with real-world systems, however, a more rigorous evaluation of a larger dataset is recommended. Also, operational costs become an important consideration. More on evaluation coming in Part 2 of this series!
## Conclusion
In this tutorial, we looked into how to choose the right model to embed data for RAG. The MTEB leaderboard is a good place to start, especially for text embedding models, but evaluating them on your data is important to find the best one for your RAG application. Storage and inference costs, embedding latency, and retrieval quality are all important parameters to consider while evaluating embedding models. The best model is typically one that offers the best trade-off across these dimensions.
Now that you have a good understanding of embedding models, here are some resources to get started with building RAG applications using MongoDB:
- Using Latest OpenAI Embeddings in a RAG System With MongoDB
- Building a RAG System With Google’s Gemma, Hugging Face, and MongoDB
- How to Build a RAG System With LlamaIndex, OpenAI, and MongoDB
Follow along with these by creating a free MongoDB Atlas cluster and reach out to us in our Generative AI community forums if you have any questions.
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt43ad2104f781d7fa/65eb303db5a879179e81a129/embeddings.png
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltf5d51d2ee907cbc2/65eb329c2d59d4804e828e21/rag.png
[3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt2f97b4a5ed1afa1a/65eb340799cd92ca89c0c0b5/top-10-mteb.png
[4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt46d3deb05ed920f8/65eb360e56de68aa49aa1f54/open-in-colab-github.png
[5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt8049cc17064bda0b/65eb364e3eefeabfd3a5c969/connect-to-runtime-colab.png | md | {
"tags": [
"Atlas",
"Python",
"AI"
],
"pageDescription": "In this tutorial, we will see why embeddings are important for RAG, and how to choose the right embedding model for your RAG application.",
"contentType": "Tutorial"
} | RAG Series Part 1: How to Choose the Right Embedding Model for Your Application | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/code-examples/java/spring-boot-reactive | created | # Reactive Java Spring Boot with MongoDB
## Introduction
Spring Boot +
Reactive +
Spring Data +
MongoDB. Putting these four technologies together can be a challenge, especially if you are just starting out.
Without getting into details of each of these technologies, this tutorial aims to help you get a jump start on a working code base based on this technology stack.
This tutorial features:
- Interacting with MongoDB using ReactiveMongoRepositories.
- Interacting with MongoDB using ReactiveMongoTemplate.
- Wrapping queries in a multi-document ACID transaction.
This simplified cash balance application allows you to make REST API calls to:
- Create or fetch an account.
- Perform transactions on one account or between two accounts.
## GitHub repository
Access the repository README for more details on the functional specifications.
The README also contains setup, API usage, and testing instructions. To clone the repository:
```shell
git clone git@github.com:mongodb-developer/mdb-spring-boot-reactive.git
```
## Code walkthrough
Let's do a logical walkthrough of how the code works.
I would include code snippets, but to reduce verbosity, I will exclude lines of code that are not key to our understanding of how the code works.
### Creating or fetching an account
This section showcases how you can perform Create and Read operations with `ReactiveMongoRepository`.
The API endpoints to create or fetch an account can be found
in AccountController.java:
```java
@RestController
public class AccountController {
//...
@PostMapping("/account")
public Mono<Account> createAccount(@RequestBody Account account) {
return accountRepository.save(account);
}
@GetMapping("/account/{accountNum}")
public Mono<Account> getAccount(@PathVariable String accountNum) {
return accountRepository.findByAccountNum(accountNum).switchIfEmpty(Mono.error(new AccountNotFoundException()));
}
//...
}
```
This snippet shows two endpoints:
- A POST method endpoint that creates an account
- A GET method endpoint that retrieves an account but throws an exception if it cannot be found
They both simply return a `Mono` from AccountRepository.java,
a `ReactiveMongoRepository` interface which acts as an abstraction over the underlying
Reactive Streams Driver.
- `.save(...)` method creates a new document in the accounts collection in our MongoDB database.
- `.findByAccountNum()` method fetches a document that matches the `accountNum`.
```java
public interface AccountRepository extends ReactiveMongoRepository<Account, String> {
@Query("{accountNum:'?0'}")
Mono<Account> findByAccountNum(String accountNum);
//...
}
```
The @Query annotation
allows you to specify a MongoDB query with placeholders so that it can be dynamically substituted with values from method arguments.
`?0` would be substituted by the value of the first method argument and `?1` would be substituted by the second, and so on and so forth.
The built-in query builder mechanism
can actually determine the intended query based on the method's name.
In this case, we could actually exclude the @Query annotation
but I left it there for better clarity and to illustrate the previous point.
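For illustration, the equivalent derived-query declaration (a sketch, not how the example repository is written) would simply be:
```java
// Spring Data derives the query {accountNum: ?0} purely from the method name, so no @Query is needed
Mono<Account> findByAccountNum(String accountNum);
```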
Notice that there is no need to declare a `save(...)` method even though we are actually using `accountRepository.save()`
in AccountController.java.
The `save(...)` method, and many other base methods, are already declared by interfaces up in the inheritance chain of `ReactiveMongoRepository`.
### Debit, credit, and transfer
This section showcases:
- Update operations with `ReactiveMongoRepository`.
- Create, Read, and Update operations with `ReactiveMongoTemplate`.
Back to `AccountController.java`:
```java
@RestController
public class AccountController {
//...
@PostMapping("/account/{accountNum}/debit")
public Mono<Txn> debitAccount(@PathVariable String accountNum, @RequestBody Map<String, Object> requestBody) {
//...
txn.addEntry(new TxnEntry(accountNum, amount));
return txnService.saveTransaction(txn).flatMap(txnService::executeTxn);
}
@PostMapping("/account/{accountNum}/credit")
public Mono<Txn> creditAccount(@PathVariable String accountNum, @RequestBody Map<String, Object> requestBody) {
//...
txn.addEntry(new TxnEntry(accountNum, -amount));
return txnService.saveTransaction(txn).flatMap(txnService::executeTxn);
}
@PostMapping("/account/{from}/transfer")
public Mono<Txn> transfer(@PathVariable String from, @RequestBody TransferRequest transferRequest) {
//...
txn.addEntry(new TxnEntry(from, -amount));
txn.addEntry(new TxnEntry(to, amount));
//save pending transaction then execute
return txnService.saveTransaction(txn).flatMap(txnService::executeTxn);
}
//...
}
```
This snippet shows three endpoints:
- A `.../debit` endpoint that adds to an account balance
- A `.../credit` endpoint that subtracts from an account balance
- A `.../transfer` endpoint that performs a transfer from one account to another
Notice that all three methods look really similar. The main idea is:
- A `Txn` can consist of one to many `TxnEntry`.
- A `TxnEntry` is a reflection of a change we are about to make to a single account.
- A debit or credit `Txn` will only have one `TxnEntry`.
- A transfer `Txn` will have two `TxnEntry`.
- In all three operations, we first save one record of the `Txn` we are about to perform,
and then make the intended changes to the target accounts using the TxnService.java.
```java
@Service
public class TxnService {
//...
public Mono<Txn> saveTransaction(Txn txn) {
return txnTemplate.save(txn);
}
public Mono<Txn> executeTxn(Txn txn) {
return updateBalances(txn)
.onErrorResume(DataIntegrityViolationException.class
/*lambda expression to handle error*/)
.onErrorResume(AccountNotFoundException.class
/*lambda expression to handle error*/)
.then(txnTemplate.findAndUpdateStatusById(txn.getId(), TxnStatus.SUCCESS));
}
public Flux<Long> updateBalances(Txn txn) {
//read entries to update balances, concatMap maintains the sequence
Flux<Long> updatedCounts = Flux.fromIterable(txn.getEntries()).concatMap(
entry -> accountRepository.findAndIncrementBalanceByAccountNum(entry.getAccountNum(), entry.getAmount())
);
return updatedCounts.handle(/*...*/);
}
}
```
The `updateBalances(...)` method is responsible for iterating through each `TxnEntry` and making the corresponding updates to each account.
This is done by calling the `findAndIncrementBalanceByAccountNum(...)` method
in AccountRespository.java.
```java
public interface AccountRepository extends ReactiveMongoRepository<Account, String> {
//...
@Update("{'$inc':{'balance': ?1}}")
Mono<Long> findAndIncrementBalanceByAccountNum(String accountNum, double increment);
}
```
Similar to declaring `find` methods, you can also declare Data Manipulation Methods
in the `ReactiveMongoRepository`, such as `update` methods.
Once again, the query builder mechanism
is able to determine that we are interested in querying by `accountNum` based on the naming of the method, and we define the action of an update using the `@Update` annotation.
In this case, the action is an `$inc` and notice that we used `?1` as a placeholder because we want to substitute it with the value of the second argument of the method.
Moving on, in `TxnService` we also have:
- A `saveTransaction` method that saves a `Txn` document into `transactions` collection.
- A `executeTxn` method that calls `updateBalances(...)` and then updates the transaction status in the `Txn` document created.
Both utilize the `TxnTemplate` that contains a `ReactiveMongoTemplate`.
```java
@Service
public class TxnTemplate {
//...
public Mono<Txn> save(Txn txn) {
return template.save(txn);
}
public Mono<Txn> findAndUpdateStatusById(String id, TxnStatus status) {
Query query = query(where("_id").is(id));
Update update = update("status", status);
FindAndModifyOptions options = FindAndModifyOptions.options().returnNew(true);
return template.findAndModify(query, update, options, Txn.class);
}
//...
}
```
The `ReactiveMongoTemplate` provides us with more customizable ways to interact with MongoDB and is a thinner layer of abstraction compared to `ReactiveMongoRepository`.
In the `findAndUpdateStatusById(...)` method, we are pretty much defining the query logic by code, but we are also able to specify that the update should return the newly updated document.
### Multi-document ACID transactions
The transfer feature in this application is a perfect use case for multi-document transactions because the updates across two accounts need to be atomic.
In order for the application to gain access to Spring's transaction support, we first need to add a `ReactiveMongoTransactionManager` bean to our configuration as such:
```java
@Configuration
public class ReactiveMongoConfig extends AbstractReactiveMongoConfiguration {
//...
@Bean
ReactiveMongoTransactionManager transactionManager(ReactiveMongoDatabaseFactory dbFactory) {
return new ReactiveMongoTransactionManager(dbFactory);
}
}
```
With this, we can proceed to define the scope of our transactions. We will showcase two methods:
**1. Using _TransactionalOperator_**
The `ReactiveMongoTransactionManager` provides us with a `TransactionalOperator`.
We can then define the scope of a transaction by appending `.as(transactionalOperator::transactional)` to the method call.
```java
@Service
public class TxnService {
//In the actual code we are using constructor injection instead of @Autowired
//Using @Autowired here to keep code snippet concise
@Autowired
private TransactionalOperator transactionalOperator;
//...
public Mono<Txn> executeTxn(Txn txn) {
return updateBalances(txn)
.onErrorResume(DataIntegrityViolationException.class
/*lambda expression to handle error*/)
.onErrorResume(AccountNotFoundException.class
/*lambda expression to handle error*/)
.then(txnTemplate.findAndUpdateStatusById(txn.getId(), TxnStatus.SUCCESS))
.as(transactionalOperator::transactional);
}
//...
}
```
**2. Using _@Transactional_ annotation**
We can also simply define the scope of our transaction by annotating the method with the `@Transactional` annotation.
```java
public class TxnService {
//...
@Transactional
public Mono<Txn> executeTxn(Txn txn) {
return updateBalances(txn)
.onErrorResume(DataIntegrityViolationException.class
/*lambda expression to handle error*/)
.onErrorResume(AccountNotFoundException.class
/*lambda expression to handle error*/)
.then(txnTemplate.findAndUpdateStatusById(txn.getId(), TxnStatus.SUCCESS));
}
//...
}
```
Read more about transactions and sessions in Spring Data MongoDB for more information.
## Conclusion
We are done! I hope this post was helpful for you in one way or another. If you have any questions, visit the MongoDB Community, where MongoDB engineers and the community can help you with your next big idea!
Once again, you may access the code from the GitHub repository,
and if you are just getting started, it may be worth bookmarking Spring Data MongoDB.
| md | {
"tags": [
"Java",
"MongoDB",
"Spring"
],
"pageDescription": "Quick start to Reactive Java Spring Boot and Spring Data MongoDB with an example application which includes implementations ofReactiveMongoRepository and ReactiveMongoTemplate and multi-document ACID transactions",
"contentType": "Code Example"
} | Reactive Java Spring Boot with MongoDB | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/languages/java/aggregation-framework-springboot-jdk-coretto | created | # MongoDB Advanced Aggregations With Spring Boot and Amazon Corretto
# Introduction
In this tutorial, we'll build an understanding of aggregations and explore how to construct aggregation pipelines within your Spring Boot applications.
If you're new to Spring Boot, it's advisable to understand the fundamentals by acquainting yourself with the example template provided for performing Create, Read, Update, Delete (CRUD) operations with Spring Boot and MongoDB before delving into advanced aggregation concepts.
This tutorial serves as a complement to the example code template accessible in the GitHub repository. The code utilises sample data, which will be introduced later in the tutorial.
As indicated in the tutorial title, we'll compile the Java code using Amazon Corretto.
We recommend following the tutorial meticulously, progressing through each stage of the aggregation pipeline creation process.
Let's dive in!
# Prerequisites
This tutorial follows a few specifications mentioned below. Before you start practicing it, please make sure you have all the necessary downloads and uploads in your environment.
1. Amazon Corretto 21 JDK.
2. A free Atlas tier, also known as an M0 cluster.
3. Sample Data loaded in the cluster.
4. Spring Data Version 4.2.2.
5. MongoDB version 6.0.3.
6. MongoDB Java Driver version 4.11.1.
Let’s understand each of these in detail.
# Understanding and installing Corretto
Corretto is a no-cost, multiplatform, production-ready distribution of OpenJDK. It runs across multiple Linux distributions, as well as Windows and macOS.
You can read more about Amazon Corretto in Introduction to Amazon Corretto: A No-Cost Distribution of OpenJDK.
We will begin the tutorial with the first step of installing the Amazon Corretto 21 JDK and setting up your IDE with the correct JDK.
Step 1: Install Amazon Corretto 21 from the official website based on the operating system specifications.
Step 2: If you are on macOS, you will need to set the JAVA_HOME variable to the path of the Corretto installation. To do this, go to the system terminal and set the variable JAVA_HOME as:
```
export JAVA_HOME=/Library/Java/JavaVirtualMachines/amazon-corretto-21.jdk/Contents/Home
```
Once the variable is set, you should check if the installation is done correctly using:
```
java --version
openjdk 21.0.2 2024-01-16 LTS
OpenJDK Runtime Environment Corretto-21.0.2.13.1 (build 21.0.2+13-LTS)
OpenJDK 64-Bit Server VM Corretto-21.0.2.13.1 (build 21.0.2+13-LTS, mixed mode, sharing)
```
For any other operating system, you will need to follow the steps mentioned in the official documentation from Java on how to set or change the PATH system variable and check if the version has been set.
Once the JDK is installed on the system, you can set up your IDE of choice to use Amazon Corretto to compile the code.
At this point, you have all the necessary environment components ready to kickstart your application.
# Creating the Spring Boot application
In this part of the tutorial, we're going to explore how to write aggregation queries for a Spring Boot application.
Aggregations in MongoDB are like super-powered tools for doing complex calculations on your data and getting meaningful results back. They work by applying different operations to your data and then giving you the results in a structured way.
But before we get into the details, let's first understand what an aggregation pipeline is and how it operates in MongoDB.
Think of an aggregation pipeline as a series of steps or stages that MongoDB follows to process your data. Each stage in the pipeline performs a specific task, like filtering or grouping your data in a certain way. And just like a real pipeline, data flows through each stage, with the output of one stage becoming the input for the next. This allows you to build up complex operations step by step to get the results you need.
By now, you should have the sample data loaded in your Atlas cluster. In this tutorial, we will be using the `sample_supplies.sales` collection for our aggregation queries.
The next step is cloning the repository from the link to test the aggregations. You can start by cloning the repository using the below command:
```
git clone https://github.com/mongodb-developer/mongodb-springboot-aggregations
```
Once you have forked and cloned the repository to your local environment, it's essential to update the connection string in the designated placeholder within the `application.properties` file. This modification enables seamless connectivity to your cluster during project execution.
# README
After cloning the repository and changing the URI in the environment variables, you can try running the REST APIs in your Postman application.
All the extra information and commands you need to get this project going are in the README.md file which you can read on GitHub.
# Writing aggregation queries in Spring
The Aggregation Framework support in Spring Data MongoDB is based on the following key abstractions:
- Aggregation
- AggregationDefinition
- AggregationResults
While writing the aggregation queries, the first step is to generate the pipelines to perform the computations using the operations supported.
The documentation on spring.io explains each step clearly and gives simple examples to help you understand.
For the tutorial, we have the REST APIs defined in the SalesController.java class, and the corresponding methods are implemented in the SalesRepository.java class.
The first aggregation makes use of a simple $match operation to find all the documents where the `storeLocation` has been specified as the match value.
```
db.sales.aggregate([{ $match: { "storeLocation": "London" } }])
```
And now, when we convert the aggregation to a Spring Boot method, it looks like this:
```
@Override
public List<SalesDTO> matchOp(String matchValue) {
    MatchOperation matchStage = match(new Criteria("storeLocation").is(matchValue));
    Aggregation aggregation = newAggregation(matchStage);
    AggregationResults<SalesDTO> results = mongoTemplate.aggregate(aggregation, "sales", SalesDTO.class);
    return results.getMappedResults();
}
```
In this Spring Boot method, we utilise the `MatchOperation` to filter documents based on the specified criteria, which in this case is the `storeLocation` matching the provided value. The aggregation is then executed using the `mongoTemplate` to aggregate data from the `sales` collection into `SalesDTO` objects, returning the mapped results.
The REST API can be tested using a curl command in the terminal, which returns all documents where `storeLocation` is `London`.
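The exact path is defined by the mapping in SalesController.java; assuming it follows the same pattern as the other endpoints shown later in this tutorial, the call would look something like this:
```bash
curl http://localhost:8080/api/sales/aggregation/matchStage/London | jq
```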
The next aggregation pipeline that we have defined with the rest API is to group all documents according to `storeLocation` and then calculate the total sales and the average satisfaction based on the `matchValue`. This stage makes use of the `GroupOperation` to perform the evaluation.
```
@Override
public List<GroupDTO> groupOp(String matchValue) {
    MatchOperation matchStage = match(new Criteria("storeLocation").is(matchValue));
    GroupOperation groupStage = group("storeLocation").count()
            .as("totalSales")
            .avg("customer.satisfaction")
            .as("averageSatisfaction");
    ProjectionOperation projectStage = project("storeLocation", "totalSales", "averageSatisfaction");
    Aggregation aggregation = newAggregation(matchStage, groupStage, projectStage);
    AggregationResults<GroupDTO> results = mongoTemplate.aggregate(aggregation, "sales", GroupDTO.class);
    return results.getMappedResults();
}
```
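For reference, a roughly equivalent pipeline run in the shell (using `{ $sum: 1 }` in place of the builder's `.count()`) would look like this:
```
db.sales.aggregate([
  { $match: { "storeLocation": "Denver" } },
  { $group: {
      _id: "$storeLocation",
      totalSales: { $sum: 1 },
      averageSatisfaction: { $avg: "$customer.satisfaction" }
  } },
  { $project: { _id: 0, storeLocation: "$_id", totalSales: 1, averageSatisfaction: 1 } }
])
```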
The REST API call would look like below:
```bash
curl http://localhost:8080/api/sales/aggregation/groupStage/Denver | jq
```
(Image: total sales and average satisfaction for `storeLocation` "Denver".)
The next REST API is an extension that will streamline the above aggregation. In this case, we will be calculating the total sales for each store location. Therefore, you do not need to specify the store location and directly get the value for all the locations.
```
@Override
public List<TotalSalesDTO> TotalSales() {
    GroupOperation groupStage = group("storeLocation").count().as("totalSales");
    SkipOperation skipStage = skip(0);
    LimitOperation limitStage = limit(10);
    Aggregation aggregation = newAggregation(groupStage, skipStage, limitStage);
    AggregationResults<TotalSalesDTO> results = mongoTemplate.aggregate(aggregation, "sales", TotalSalesDTO.class);
    return results.getMappedResults();
}
```
And the REST API calls look like below:
```bash
curl http://localhost:8080/api/sales/aggregation/TotalSales | jq
```
The next API makes use of $unwind, $group, $sort, and $limit operations to calculate the five most popular items by total quantity sold.
```
@Override
public List<PopularDTO> findPopularItems() {
    UnwindOperation unwindStage = unwind("items");
    GroupOperation groupStage = group("$items.name").sum("items.quantity").as("totalQuantity");
    SortOperation sortStage = sort(Sort.Direction.DESC, "totalQuantity");
    LimitOperation limitStage = limit(5);
    Aggregation aggregation = newAggregation(unwindStage, groupStage, sortStage, limitStage);
    return mongoTemplate.aggregate(aggregation, "sales", PopularDTO.class).getMappedResults();
}
```
```bash
curl http://localhost:8080/api/sales/aggregation/PopularItem | jq
```
The last API makes use of the $bucket stage to group sales into buckets by the number of items purchased and then calculates the count and total amount spent within each bucket.
```
@Override
public List<BucketsDTO> findTotalSpend() {
    ProjectionOperation projectStage = project()
            .and(ArrayOperators.Size.lengthOfArray("items")).as("numItems")
            .and(ArithmeticOperators.Multiply.valueOf("price")
                    .multiplyBy("quantity")).as("totalAmount");
    BucketOperation bucketStage = bucket("numItems")
            .withBoundaries(0, 3, 6, 9)
            .withDefaultBucket("Other")
            .andOutputCount().as("count")
            .andOutput("totalAmount").sum().as("totalAmount");
    Aggregation aggregation = newAggregation(projectStage, bucketStage);
    return mongoTemplate.aggregate(aggregation, "sales", BucketsDTO.class).getMappedResults();
}
```
```bash
curl http://localhost:8080/api/sales/aggregation/buckets | jq
```
# Conclusion
This tutorial provides a comprehensive overview of aggregations in MongoDB and how to implement them in a Spring Boot application. We have learned about the significance of aggregation queries for performing complex calculations on data sets, leveraging MongoDB's aggregation pipeline to streamline this process effectively.
As you continue to experiment and apply these concepts in your applications, feel free to reach out on our MongoDB community forums. Remember to explore further resources in the MongoDB Developer Center and documentation to deepen your understanding and refine your skills in working with MongoDB aggregations. | md | {
"tags": [
"Java",
"MongoDB",
"Spring"
],
"pageDescription": "This tutorial will help you create MongoDB aggregation pipelines using Spring Boot applications.",
"contentType": "Tutorial"
} | MongoDB Advanced Aggregations With Spring Boot and Amazon Corretto | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/languages/java/azure-kubernetes-services-java-microservices | created | # Using Azure Kubernetes Services for Java Spring Boot Microservices
## Introduction
In the early days of software development, application development consisted of monolithic codebases. With challenges in scaling, singular points of failure, and inefficiencies in updating, a solution was proposed. A modular approach. A symphony of applications managing their respective domains in harmony. This is achieved using microservices.
Microservices are an architectural approach that promotes the division of applications into smaller, loosely coupled services. This allows application code to be delivered in manageable pieces, independent of each other. These services operate independently, addressing a lot of the concerns of monolithic applications mentioned above.
While each application has its own needs, microservices have proven themselves as a viable solution time and time again, as you can see in the success of the likes of Netflix.
In this tutorial, we are going to deploy a simple Java Spring Boot microservice application, hosted on the Azure Kubernetes Service (AKS). AKS simplifies deploying a managed Kubernetes cluster in Azure by offloading the operational overhead to Azure. We'll explore containerizing our application and setting up communication between our APIs, a MongoDB database, and the external world. You can access the full code here:
```bash
git clone https://github.com/mongodb-developer/simple-movie-microservice.git
```
Though we won't dive into the most advanced microservice best practices and design patterns, this application gives a simplistic approach that will allow you to write reviews for the movies in the MongoDB sample data. A review request goes to the reviews API first, and that service then verifies that both the user and the movie exist before saving the review. The architecture will look like this.
(Architecture diagram.) Kubernetes provides internal DNS for service discovery, so to call the user management service from another service, we simply send a request to `http://user-management-service/users/`. In this demo application, communication is done with RESTful HTTP/S requests, using RestTemplate.
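As a rough sketch (the class and endpoint names here are illustrative rather than the exact code from the repository), such a call from the reviews service could look like this:
```java
import org.springframework.stereotype.Service;
import org.springframework.web.client.HttpClientErrorException;
import org.springframework.web.client.RestTemplate;

@Service
public class UserClient {

    private final RestTemplate restTemplate = new RestTemplate();

    public boolean userExists(String userId) {
        try {
            // "user-management-service" resolves through the cluster's internal DNS
            restTemplate.getForEntity("http://user-management-service/users/" + userId, String.class);
            return true;
        } catch (HttpClientErrorException.NotFound e) {
            return false;
        }
    }
}
```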
## Prerequisites
Before you begin, you'll need a few prerequisites to follow along with this tutorial, including:
- A MongoDB Atlas account, if you don't have one already, with a cluster ready with the MongoDB sample data.
- A Microsoft Azure account with an active subscription.
- Azure CLI, or you can install Azure PowerShell, but this tutorial uses Azure CLI. Sign in and configure your command line tool following the steps in the documentation for Azure CLI and Azure PowerShell.
- Docker for creating container images of our microservices.
- Java 17.
- Maven 3.9.6.
## Set up an Azure Kubernetes Service cluster
Starting from the very beginning, set up an Azure Kubernetes Service (AKS) cluster.
### Install kubectl and create an AKS cluster
Install `kubectl`, the Kubernetes command-line tool, via the Azure CLI with the following command (you might need to sudo this command), or you can download the binaries from the official Kubernetes website:
```bash
az aks install-cli
```
Log into your Azure account using the Azure CLI:
```bash
az login
```
Create an Azure Resource Group:
```bash
az group create --name myResourceGroup --location northeurope
```
Create an AKS cluster: Replace `myAKSCluster` with your desired cluster name. (This can take a couple of minutes.)
```bash
az aks create --resource-group myResourceGroup --name myAKSCluster --node-count 2 --enable-addons monitoring --generate-ssh-keys
```
### Configure kubectl to use your AKS cluster
After successfully creating your AKS cluster, you can proceed to configure `kubectl` to use your new AKS cluster. Retrieve the credentials for your AKS cluster and configure `kubectl`:
```bash
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
```
### Create an Azure Container Registry (ACR)
Create an ACR to store and manage container images across all types of Azure deployments:
```bash
az acr create --resource-group <resource-group> --name <acr-name> --sku Basic
```
> Note: Take note of the registry details here. We'll need them later when we are creating a service principal.
Log into ACR:
```bash
az acr login --name <acr-name>
```
## Containerize your microservices application
Each of your applications (User Management, Movie Catalogue, Reviews) has a `Dockerfile`. Create a .jar by running the command `mvn package` for each application, in the location of the pom.xml file. Depending on your platform, the following steps are slightly different.
For those wielding an M1 Mac, a bit of tweaking is in order due to our image's architecture. As it stands, Azure Container Apps can only jive with linux/amd64 container images. However, the M1 Mac creates images as `arm` by default. To navigate this hiccup, we'll be leveraging Buildx, a handy Docker plugin. Buildx allows us to build and push images tailored for a variety of platforms and architectures, ensuring our images align with Azure's requirements.
### Build the Docker image (not M1 Mac)
To build your image, make sure you run the following command in the same location as the `Dockerfile`. Repeat for each application.
```bash
docker build -t movie-catalogue-service .
```
**Or** you can run the following command from the simple-movie-microservice folder to loop through all three repositories.
```bash
for i in movie-catalogue reviews user-management; do cd $i; ./mvnw clean package; docker build -t $i-service .; cd -; done
```
### Build the Docker image (M1 Mac)
If you are using an M1 Mac, use the following commands to use Buildx to create your images:
```bash
docker buildx install
```
Next, enable Buildx to use the Docker CLI:
```bash
docker buildx create --use
```
Open a terminal and navigate to the root directory of the microservice where the `Dockerfile` is located. Run the following command to build the Docker image, replacing `movie-catalogue-service` with the appropriate name for each service.
```bash
docker buildx build --platform linux/amd64 -t movie-catalogue-service:latest --output type=docker .
```
### Tag and push
Now, we're ready to tag and push your images. Replace `<acr-name>` with your actual ACR name. Repeat these two commands for each microservice.
```bash
docker tag movie-catalogue-service <acr-name>.azurecr.io/movie-catalogue-service:latest
docker push <acr-name>.azurecr.io/movie-catalogue-service:latest
```
**Or** run this script in the terminal, like before:
```bash
ACR_NAME="<acr-name>.azurecr.io"
for i in movie-catalogue reviews user-management; do
# Tag the Docker image for Azure Container Registry
docker tag $i-service $ACR_NAME/$i-service:latest
# Push the Docker image to Azure Container Registry
docker push $ACR_NAME/$i-service:latest
done
```
## Deploy your microservices to AKS
Now that we have our images ready, we need to create Kubernetes deployment and service YAML files for each microservice. We are going to create one *mono-file* to create the Kubernetes objects for our deployment and services. We also need one to store our MongoDB details. It is good practice to use secrets for sensitive data like the MongoDB URI.
### Create a Kubernetes secret for MongoDB URI
First, you'll need to create a secret to securely pass the MongoDB connection string to your microservices. In Kubernetes, the data within a secret object is stored as base64-encoded strings. This encoding is used because it allows you to store binary data in a format that can be safely represented and transmitted as plain text. It's not a form of encryption or meant to secure the data, but it ensures compatibility with systems that may not handle raw binary data well.
Create a Kubernetes secret that contains the MongoDB URI and database name. You will encode these values in Base64 format, but Kubernetes will handle them as plain text when injecting them into your pods. You can encode them with the bash command, and copy them into the YAML file, next to the appropriate data keys:
```bash
echo -n 'your-mongodb-uri' | base64
echo -n 'your-database-name' | base64
```
This is the mongodb-secret.yaml.
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mongodb-secret
type: Opaque
data:
  MONGODB_URI: <base64-encoded-mongodb-uri>
  MONGODB_DATABASE: <base64-encoded-database-name>
```
Run the following command to apply your secrets:
```bash
kubectl apply -f mongodb-secret.yaml
```
So, while base64 encoding doesn't secure the data, it formats it in a way that's safe to store in the Kubernetes API and easy to consume from your applications running in pods.
### Authorize access to the ACR
If your ACR is private, you'll need to ensure that your Kubernetes cluster has the necessary credentials to access it. You can achieve this by creating a Kubernetes secret with your registry credentials and then using that secret in your deployments.
The next step is to create a service principal or use an existing one that has access to your ACR. This service principal needs the `AcrPull` role assigned to be able to pull images from the ACR. Replace `<service-principal-name>`, `<subscription-id>`, `<resource-group>`, and `<registry-name>` with your own values.
- `<service-principal-name>`: This can be any unique identifier you want to give this service principal.
- `<subscription-id>`: You can get the ID for the subscription you’re using with `az account show --query id --output tsv`.
- `<resource-group>`: Use the same resource group you have your AKS set up in.
- `<registry-name>`: This is the Azure Container Registry your images are stored in.
```bash
az ad sp create-for-rbac --name <service-principal-name> --role acrPull --scopes /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ContainerRegistry/registries/<registry-name>
```
This command will output JSON that looks something like this:
```bash
{
"appId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
"displayName": "",
"password": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
"tenant": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
}
```
- `appId` is your `<service-principal-username>`.
- `password` is your `<service-principal-password>`.
**Note:** It's important to note that the `password` is only displayed once at the creation time. Make sure to copy and secure it.
**Create a Kubernetes secret with the service principal's credentials.** You can do this with the following command:
```bash
kubectl create secret docker-registry acr-auth \
--namespace default \
--docker-server=<acr-name>.azurecr.io \
--docker-username=<service-principal-username> \
--docker-password=<service-principal-password> \
--docker-email=<your-email>
```
### Create Kubernetes deployment and service YAML files
There are a couple of points to note in the YAML file for this tutorial, but these points are not exhaustive of everything happening in this file. If you want to learn more about configuring your YAML for Kubernetes, check out the documentation for configuring Kubernetes objects.
- We will have our APIs exposed externally. This means you will be able to access the endpoints from the addresses we'll receive when we have everything running. Setting the `type: LoadBalancer` triggers the cloud provider's load balancer to be provisioned automatically. The external load balancer will be configured to route traffic to the Kubernetes service, which in turn routes traffic to the appropriate pods based on the service's selector.
- The `containers:` section defines a single container named `movie-catalogue-service`, using an image specified by `<acr-name>.azurecr.io/movie-catalogue-service:latest`.
- `containerPort: 8080` exposes port 8080 inside the container for network communication.
- Environment variables `MONGODB_URI` and `MONGODB_DATABASE` are set using values from secrets (`mongodb-secret`), enhancing security by not hardcoding sensitive information.
- `imagePullSecrets: - name: acr-auth` allows Kubernetes to authenticate to a private container registry to pull the specified image, using the secret we just created.
```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: movie-catalogue-service-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: movie-catalogue-service
  template:
    metadata:
      labels:
        app: movie-catalogue-service
    spec:
      containers:
        - name: movie-catalogue-service
          image: <acr-name>.azurecr.io/movie-catalogue-service:latest
          ports:
            - containerPort: 8080
          env:
            - name: MONGODB_URI
              valueFrom:
                secretKeyRef:
                  name: mongodb-secret
                  key: MONGODB_URI
            - name: MONGODB_DATABASE
              valueFrom:
                secretKeyRef:
                  name: mongodb-secret
                  key: MONGODB_DATABASE
      imagePullSecrets:
        - name: acr-auth
---
apiVersion: v1
kind: Service
metadata:
  name: movie-catalogue-service
spec:
  selector:
    app: movie-catalogue-service
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
  type: LoadBalancer
---
```
Remember, before applying your Kubernetes YAML files, make sure your Kubernetes cluster has access to your ACR. You can configure this by granting AKS the ACRPull role on your ACR:
```bash
az aks update -n <aks-cluster-name> -g <resource-group> --attach-acr <acr-name>
```
Replace `<aks-cluster-name>`, `<resource-group>`, and `<acr-name>` with your AKS cluster name, Azure resource group name, and ACR name, respectively.
### Apply the YAML file
Apply the YAML file with `kubectl`:
```bash
kubectl apply -f all-microservices.yaml
```
## Access your services
Once deployed, it may take a few minutes for the LoadBalancer to be provisioned and for the external IP addresses to be assigned. You can check the status of your services with:
```bash
kubectl get services
```
Look for the external IP addresses for your services and use them to access your microservices.
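For example, to print just the external IP of one service (assuming the reviews service is named `reviews-service` in your YAML):
```bash
kubectl get service reviews-service -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```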
After deploying, ensure your services are running:
```bash
kubectl get pods
```
Access your services based on the type of Kubernetes service you've defined (e.g., LoadBalancer in our case) and perform your tests.
You can test if the endpoint is running with the CURL command:
```bash
curl -X POST http://<external-ip>/reviews \
-H "Content-Type: application/json" \
-d '{"movieId": "573a1391f29313caabcd68d0", "userId": "59b99db5cfa9a34dcd7885b8", "rating": 4}'
```
And this review should now appear in your database. You can check with a simple:
```bash
curl -X GET http://<external-ip>/reviews
```
Hooray!
## Conclusion
As we wrap up this tutorial, it's clear that embracing microservices architecture, especially when paired with the power of Kubernetes and Azure Kubernetes Service (AKS), can significantly enhance the scalability, maintainability, and deployment flexibility of applications. Through the practical deployment of a simple microservice application using Java Spring Boot on AKS, we've demonstrated the steps and considerations involved in bringing a microservice architecture to life in the cloud.
Key takeaways:
- **Modular approach**: The transition from monolithic to microservices architecture facilitates a modular approach to application development, enabling independent development, deployment, and scaling of services.
- **Simplified Kubernetes deployment**: AKS abstracts away much of the complexity involved in managing a Kubernetes cluster, offering a streamlined path to deploying microservices at scale.
- **Inter-service communication**: Utilizing Kubernetes' internal DNS for service discovery simplifies the communication between services within a cluster, making microservice interactions more efficient and reliable.
- **Security and configuration best practices**: The tutorial underscored the importance of using Kubernetes secrets for sensitive configurations and the Azure Container Registry for securely managing and deploying container images.
- **Exposing services externally**: By setting services to `type: LoadBalancer`, we've seen how to expose microservices externally, allowing for easy access and integration with other applications and services.
The simplicity and robustness of Kubernetes, combined with the scalability of AKS and the modularity of microservices, equip developers with the tools necessary to build complex applications that are both resilient and adaptable. If you found this tutorial useful, find out more about what you can do with MongoDB and Azure on our Developer Center.
Are you ready to start building with Atlas on Azure? Get started for free today with MongoDB Atlas on Azure Marketplace.
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt7bcbee53fc14653b/661808089070f0c0a8d50771/AKS_microservices.png | md | {
"tags": [
"Java",
"Azure",
"Spring",
"Kubernetes"
],
"pageDescription": "Learn how to deploy your Java Spring Boot microservice to Azure Kubernetes Services.",
"contentType": "Tutorial"
} | Using Azure Kubernetes Services for Java Spring Boot Microservices | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/products/atlas/ai-shop-mongodb-atlas-langchain-openai | created | # AI Shop: The Power of LangChain, OpenAI, and MongoDB Atlas Working Together
Building AI applications over the last few months has taken my mind in many different directions, mostly inspired by new ideas and new ways of interacting with sources of information. After eight years at MongoDB, I can clearly see the potential of MongoDB when it comes to powering AI applications. Surprisingly, it's the same fundamental reason users have chosen MongoDB and MongoDB Atlas up until the generative AI era: the flexibility of the document model.
Using unstructured data is not always easy. The data produced by GenAI models is considered highly unstructured. It can come in different wording formats as well as sound, images, and even videos. Applications are efficient and correct when they can govern and safely predict their data structures and inputs. Therefore, in order to build successful AI applications, we need a method to turn unstructured data into what we call *semi-structured* or flexible documents.
Once we can fit our data stream into a flexible pattern, we can use this data efficiently and provide great features for our users.
## RAG as a fundamental approach to building AI applications
In light of this, retrieval-augmented generation (RAG) emerges as a pivotal methodology in the realm of AI development. This approach synergizes the retrieval of information and generative processes to refine the quality and relevance of AI outputs. By leveraging the document model flexibility inherent to MongoDB and MongoDB Atlas, RAG can dynamically incorporate a vast array of unstructured data, transforming it into a more manageable semi-structured format. This is particularly advantageous when dealing with the varied and often unpredictable data produced by AI models, such as textual outputs, auditory clips, visual content, and video sequences.
MongoDB's prowess lies in its ability to act as a robust backbone for RAG processes, ensuring that AI applications can not only accommodate but also thrive on the diversity of generative AI data streams. The integration of MongoDB Atlas with features like vector search and the linguistic capabilities of LangChain, detailed in RAG with Atlas Vector Search, LangChain, and OpenAI, exemplifies the cutting-edge potential of MongoDB in harnessing the full spectrum of AI-generated content. This seamless alignment between data structuring and AI innovation positions MongoDB as an indispensable asset in the GenAI era, unlocking new horizons for developers and users alike
## Instruct to struct unstructured AI structures
To demonstrate the abilities of GenAI models like OpenAI's chat and image generation, I decided to build a small grocery store app that provides a catalog of products to the user. Online grocery shopping now makes up a major portion of worldwide shopping habits, and I bet almost all readers have used such a store.
However, I wanted to take the user experience to another level by providing a chatbot that anticipates users' grocery requirements, whether from predefined lists, casual text exchanges, or specific recipe inquiries like "I need to cook a lasagne, what should I buy?"
The stack I decided to use is:
* A MongoDB Atlas cluster to store products, categories, and orders.
* Atlas search indexes to power vector search (semantic search based on meaning).
* Express + LangChain to orchestrate my AI tasks.
* OpenAI platform API - GPT4, GPT3.5 as my AI engine.
I quickly realized that in any application I build with AI, I want to control the way my inputs are passed to the AI and the way its outputs are produced, or at least their template structure.
So in the store query, I want the user to provide a request and the AI to produce a list of potential groceries.
As I don’t know how many ingredients there are or what their categories and types are, I need the template to be flexible enough to describe the list in a way my application can safely traverse it further down the search pipeline.
The structure I decided to use is:
```javascript
const schema = z.object({
"shopping_list": z.array(z.object({
"product": z.string().describe("The name of the product"),
"quantity": z.number().describe("The quantity of the product"),
"unit": z.string().optional(),
"category": z.string().optional(),
})),
}).deepPartial();
```
I have used the `zod` package, which is recommended by LangChain, to describe the expected schema. Since `shopping_list` is an array of objects, it can host N entries filled in by the AI; however, their structure is strictly predictable.
Additionally, I don’t want the AI engine to provide me with ingredients or products that are far from the categories I’m selling in my shop. For example, if a user requests a bicycle from a grocery store, the AI model should have context that it's not reasonable to have something for the user. Therefore, the relevant categories that are stored in the database have to be provided as context to the model.
```javascript
// Initialize OpenAI instance
const llm = new OpenAI({
openAIApiKey: process.env.OPEN_AI_KEY,
modelName: "gpt-4",
temperature: 0
});
// Create a structured output parser using the Zod schema
const outputParser = StructuredOutputParser.fromZodSchema(schema);
const formatInstructions = outputParser.getFormatInstructions();
// Create a prompt template
const prompt = new PromptTemplate({
template: "Build a user grocery list in English as best as possible, if all the products does not fit the categories output empty list, however if some does add only those. \n{format_instructions}\n possible category {categories}\n{query}. Don't output the schema just the json of the list",
inputVariables: ["query", "categories"],
partialVariables: { format_instructions: formatInstructions },
});
```
We take advantage of the LangChain library to turn the schema into a set of instructions and to produce an engineered prompt consisting of the category documents we fetched from our database and the extraction instructions.
The user query can be free-form, while the AI's response is required to follow a schema our application understands. The rest of the code only needs to validate and access the well-formatted list of products provided by the LLM.
```javascript
// Fetch all categories from the database
const categories = await db.collection('categories').find({}, { "_id": 0 }).toArray();
const docs = categories.map((category) => category.categoryName);
// Format the input prompt
const input = await prompt.format({
query: query,
categories: docs
});
// Call the OpenAI model
const response = await llm.call(input);
const responseDoc = await outputParser.parse(response);
let shoppingList = responseDoc.shopping_list;
// Embed the shopping list
shoppingList = await placeEmbeddings(shoppingList);
```
Here is an example of how this list might look:
(Image: a shopping-list document where each item carries an `embeddings` array.)
## LLM to embeddings
A structured flexible list like this will allow me to create embeddings for each of those terms found by the LLM as relevant to the user input and the categories my shop has.
For simplicity, I am only going to embed the product name.
```javascript
const placeEmbeddings = async (documents) => {
const embeddedDocuments = documents.map(async (document) => {
const embeddedDocument = await embeddings.embedQuery(document.product);
document.embeddings = embeddedDocument;
return document;
});
return Promise.all(embeddedDocuments);
};
```
But in real-life applications, we could also use attributes such as quantity or unit for inventory search filtering, as shown in the sketch below.
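As a sketch, a pre-filter could be added to each `$vectorSearch` stage. This assumes the filtered field (for example, `quantity`) exists on the product documents and is indexed as a `filter` field in the Atlas Vector Search index definition:
```javascript
{
  "$vectorSearch": {
    "index": "default",
    "queryVector": item.embeddings,
    "path": "embeddings",
    "numCandidates": 20,
    "limit": 3,
    // Only consider products that are in stock
    "filter": { "quantity": { "$gt": 0 } }
  }
}
```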
From this point, writing the aggregation that fetches three candidates for each product is straightforward.
It is a vector search for each item, connected by a union with the next item's search, until the end of the list.
## Embeddings to aggregation
```javascript
[
  { $vectorSearch: { /* product 1: top 3 alternatives */ } },
  { $unionWith: { coll: "products", pipeline: [ /* $search for product 2 ... */ ] } },
  { $unionWith: { coll: "products", pipeline: [ /* $search for product 3 ... */ ] } }
]
```
Finally, I will reshape the data so each term will have an array of its three candidates to make the frontend coding simpler.
```
[
  { searchTerm: "parmesan",
    products: [ /* parmesan 1 */ /* parmesan 2 */ /* Mascarpone */ ] },
  ...
]
```
Here's my Node.js server-side code for building the vector search:
``` javascript
const aggregationQuery = [
{ "$vectorSearch": {
"index": "default",
"queryVector": shoppingList[0].embeddings,
"path": "embeddings",
"numCandidates": 20,
"limit": 3
}
},
{ $addFields: { "searchTerm": shoppingList[0].product } },
...shoppingList.slice(1).map((item) => ({
$unionWith: {
coll: "products",
pipeline: [
{
"$search": {
"index": "default",
"knnBeta": {
"vector": item.embeddings,
"path": "embeddings",
"k": 20
}
}
},
{$limit: 3},
{ $addFields: { "searchTerm": item.product } }
]
}
})),
{ $group: { _id: "$searchTerm", products: { $push: "$$ROOT" } } },
{ $project: { "_id": 0, "category": "$_id", "products.title": 1, "products.description": 1,"products.emoji" : 1, "products.imageUrl" : 1,"products.price": 1 } }
]
```
## The process
The process we presented here can be applied to a massive number of use cases. Let's reiterate it according to the chart below.
(Diagram: the RAG flow, from the product catalog and categories through prompt engineering and the LLM, and back into the aggregation pipeline.)
In this context, we have enriched our product catalog with embeddings on the title/description of the products. We've also provided the categories and structuring instructions as context to engineer our prompt. Finally, we piped the prompt through the LLM, which creates a manageable list that can be transformed into answers and follow-up questions.
Embedding LLM results can create a chain of semantic searches whose results can be piped back to LLMs or manipulated smartly by the robust aggregation framework.
Eventually, data becomes clay we can shape and morph using powerful LLMs, combined with aggregation pipelines to add relevance and compute power to our applications.
For the full example and step-by-step tutorial to set up the demo grocery store, use the GitHub project.
## Summary
In conclusion, the journey of integrating AI with MongoDB showcases the transformative impact of combining generative AI capabilities with MongoDB's dynamic data model. The flexibility of MongoDB's document model has proven to be the cornerstone for managing the unpredictable nature of AI-generated data, paving the way for innovative applications that were previously inconceivable. Through the use of structured schemas, vector searches, and the powerful aggregation framework, developers can now craft AI-powered applications that not only understand and predict user intent but also offer unprecedented levels of personalization and efficiency.
The case study of the grocery store app exemplifies the practical application of these concepts, illustrating how a well-structured data approach can lead to more intelligent and responsive AI interactions. MongoDB stands out as an ideal partner for AI application development, enabling developers to structure, enrich, and leverage unstructured data in ways that unlock new possibilities.
As we continue to explore the synergy between MongoDB and AI, it is evident that the future of application development lies in our ability to evolve data management techniques that can keep pace with the rapid advancements in AI technology. MongoDB's role in this evolution is indispensable, as it provides the agility and power needed to turn the challenges of unstructured data into opportunities for innovation and growth in the GenAI era.
Want to continue the conversation? Meet us over in the MongoDB Developer Community. | md | {
"tags": [
"Atlas",
"JavaScript",
"Node.js",
"AI"
],
"pageDescription": "Explore the synergy of MongoDB Atlas, LangChain, and OpenAI GPT-4 in our cutting-edge AI Shop application. Discover how flexible document models and advanced AI predictions revolutionize online shopping, providing personalized grocery lists from simple recipe requests. Dive into the future of retail with our innovative AI-powered solutions.",
"contentType": "Article"
} | AI Shop: The Power of LangChain, OpenAI, and MongoDB Atlas Working Together | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/atlas-stream-processing-development-guide | created | # Introduction to Atlas Stream Processing Development
Welcome to this MongoDB Stream Processing tutorial! In this guide, we will quickly set up a coding workflow and have you write and run your first Stream Processing Instance in no time. In a very short time, we'll learn how to create a new stream processor instance, conveniently code and execute stream processors from Visual Studio Code, and simply aggregate stream data, thus opening the door to a whole new field of the MongoDB Atlas developer data platform.
What we'll cover
----------------
- Prerequisites
- Setup
- Create a stream processor instance
- Set up Visual Studio Code
- The anatomy of a stream processor
- Let's execute a stream processor!
- Hard-coded data in $source declaration
- Simplest stream processor
- Stream processing aggregation
- Add time stamps to the data
Prerequisites
-------------
- Basic knowledge of the MongoDB Aggregation Pipeline and Query API
- Ideally, read the official high-level Atlas Stream Processing overview
- A live MongoDB Atlas cluster that supports stream processing
- Visual Studio Code + MongoDB for VS Code extension
## Setup
### Create an Atlas stream processing instance
We need to have an Atlas Stream Processing Instance (SPI) ready. Follow the steps in the tutorial Get Started with Atlas Stream Processing: Creating Your First Stream Processor until we have our connection string and username/password, then come back here.
Don't forget to add your IP address to the Atlas Network Access to allow the client to access the instance.
### Set up Visual Studio Code for MongoDB Atlas Stream Processing
Thanks to the MongoDB for VS Code extension, we can rapidly develop stream processing (SP) aggregation pipelines and run them directly from inside a VS Code MongoDB playground. This provides a much better developer experience. In the rest of this article, we'll be using VS Code.
Such a playground is a NodeJS environment where we can execute JS code interacting with a live stream processor on MongoDB Atlas. To get started, install VS Code and the MongoDB for VS Code extension.
Below is a great tutorial about installing the extension. It also lists some shell commands we'll need later.
- **Tutorial**: Introducing Atlas Stream Processing Support Within the MongoDB for VS Code Extension
- **Goal**: If everything works, we should see our live SP connection in the MongoDB Extension tab.
(Screenshot: the live stream processing connection shown in the MongoDB extension tab in VS Code.)
The anatomy of a stream processor
---------------------------------
A stream processor is similar to a MongoDB aggregation pipeline. It is described by an array of processing stages. However, there are some differences. The most basic SP can be created using only its data source (we'll have executable examples next).
```
// our array of stages
// source is defined earlier
sp_aggregation_pipeline = [source]
sp.createStreamProcessor("SP_NAME", sp_aggregation_pipeline /*, options */)
```
A more realistic stream processor would contain at least one aggregation stage, and there can be a large number of stages performing various operations to the incoming data stream. There's a generous limit of 16MB for the total processor size.
```
sp_aggregation_pipeline = [source, stage_1, stage_2...]
sp.createStreamProcessor("SP_NAME", sp_aggregation_pipeline /*, options */)
```
To increase the development loop velocity, there's an sp.process() function which starts an ephemeral stream processor that won't persist in your stream processing instance.
Let's execute a stream processor!
---------------------------------
Let's create basic stream processors and build our way up. First, we need to have some data! Atlas Stream Processing supports several data sources for incoming streaming events. These sources include:
- Hard-coded data declaration in $source.
- Kafka streams.
- MongoDB Atlas databases.
### Hard-coded data in $source declaration
For quick testing or self-contained examples, having a small set of hard-coded data is a very convenient way to produce events. We can declare an array of events. Here's an extremely simple example, and note that we'll make some tweaks later to cover different use cases.
### Simplest stream processor
In VS Code, we run an ephemeral stream processor with sp.process(). This way, we don't have to call sp.createStreamProcessor() and sp.<processor name>.drop() constantly, as we would for SPs meant to be saved permanently in the instance.
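For completeness, here is roughly what the lifecycle of a persistent stream processor looks like from the playground (the processor name below is just an example):
```
// Persist the processor on the instance, then manage its lifecycle
sp.createStreamProcessor("myFirstSP", sp_aggregation_pipeline);
sp.listStreamProcessors(); // list what is defined on the instance
sp.myFirstSP.start();      // begin processing
sp.myFirstSP.stop();       // pause processing
sp.myFirstSP.drop();       // remove it entirely
```
For the rest of this section, though, we'll stick with ephemeral processors.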
```
src_hard_coded = {
$source: {
// our hard-coded dataset
documents: [
{'id': 'entity_1', 'value': 1},
{'id': 'entity_1', 'value': 3},
{'id': 'entity_2', 'value': 7},
{'id': 'entity_1', 'value': 4},
{'id': 'entity_2', 'value': 1}
]
}
}
sp.process( [src_hard_coded] );
```
Upon running this playground, we should see data coming out in the VS Code "OUTPUT" tab (CTRL+SHIFT+U to make it appear)
**Note**: It can take a few seconds for the SP to be uploaded and executed, so don't expect an immediate output.
```
{
id: 'entity_1',
value: 1,
_ts: 2024-02-14T18:52:33.704Z,
_stream_meta: { timestamp: 2024-02-14T18:52:33.704Z }
}
{
id: 'entity_1',
value: 3,
_ts: 2024-02-14T18:52:33.704Z,
_stream_meta: { timestamp: 2024-02-14T18:52:33.704Z }
}
...
```
This simple SP can be used to ensure that data is coming into the SP and there are no problems upstream with our source. The timestamps were generated at ingestion time.
Stream processing aggregation
-----------------------------
Building on what we have, adding a simple aggregation pipeline to our SP is easy. Below, we're adding a $group stage to aggregate/accumulate incoming messages' "value" field into an array for the requested interval.
Note that the "w" stage (w stands for "Window") of the SP pipeline contains an aggregation pipeline inside. With Stream Processing, we have aggregation pipelines in the stream processing pipeline.
This stage features a $tumblingWindow which defines the time length the aggregation will be running against. Remember that streams are supposed to be continuous, so a window is similar to a buffer.
interval defines the time length of a window. Since the window is a continuous data stream, we can only aggregate on a slice at a time.
idleTimeout defines how long the $source can remain idle before closing the window. This is useful if the stream is not sustained.
```
src_hard_coded = {
$source: {
// our hard-coded dataset
documents: [
{'id': 'entity_1', 'value': 1},
{'id': 'entity_1', 'value': 3},
{'id': 'entity_2', 'value': 7},
{'id': 'entity_1', 'value': 4},
{'id': 'entity_2', 'value': 1}
]
}
}
w = {
$tumblingWindow: {
// This is the slice of time we want to look at every iteration
interval: {size: NumberInt(2), unit: "second"},
// If no additional data is coming in, idleTimeout defines when the window is forced to close
idleTimeout : {size: NumberInt(2), unit: "second"},
"pipeline": [
{
'$group': {
'_id': '$id',
'values': { '$push': "$value" }
}
}
]
}
}
sp_pipeline = [src_hard_coded, w];
sp.process( sp_pipeline );
```
Let it run for a few seconds, and we should get an output similar to the following. $group will create one document per incoming "id" field and aggregate the relevant values into a new array field, "values."
```
{
_id: 'entity_2',
values: [ 7, 1 ],
_stream_meta: {
windowStartTimestamp: 2024-02-14T19:29:46.000Z,
windowEndTimestamp: 2024-02-14T19:29:48.000Z
}
}
{
_id: 'entity_1',
values: [ 1, 3, 4 ],
_stream_meta: {
windowStartTimestamp: 2024-02-14T19:29:46.000Z,
windowEndTimestamp: 2024-02-14T19:29:48.000Z
}
}
```
Depending on the $tumblingWindow settings, the aggregation will output several documents that match the timestamps. For example, these settings...
```
...
$tumblingWindow: {
interval: {size: NumberInt(10), unit: "second"},
idleTimeout : {size: NumberInt(10), unit: "second"},
...
```
...will yield the following aggregation output:
```
{
_id: 'entity_1',
values: [ 1 ],
_stream_meta: {
windowStartTimestamp: 2024-02-13T14:51:30.000Z,
windowEndTimestamp: 2024-02-13T14:51:40.000Z
}
}
{
_id: 'entity_1',
values: [ 3, 4 ],
_stream_meta: {
windowStartTimestamp: 2024-02-13T14:51:40.000Z,
windowEndTimestamp: 2024-02-13T14:51:50.000Z
}
}
{
_id: 'entity_2',
values: [ 7, 1 ],
_stream_meta: {
windowStartTimestamp: 2024-02-13T14:51:40.000Z,
windowEndTimestamp: 2024-02-13T14:51:50.000Z
}
}
```
See how the windowStartTimestamp and windowEndTimestamp fields show the 10-second intervals as requested (14:51:30 to 14:51:40 etc.).
### Additional learning resources: building aggregations
Atlas Stream Processing uses the MongoDB Query API. You can learn more about the MongoDB Query API with the official Query API documentation, the free interactive course, and the tutorial.
Important: Stream Processing aggregation pipelines do not support all database aggregation operations and have additional operators specific to streaming, like $tumblingWindow. Check the official Stream Processing aggregation documentation.
### Add timestamps to the data
Even when we hard-code data, there's an opportunity to provide a timestamp in case we want to perform $sort operations and better mimic a real use case. This would be the equivalent of an event-time timestamp embedded in the message.
There are many other types of timestamps if we use a live Kafka stream (producer-assigned, server-side, ingestion-time, and more). Add a timestamp to our messages and use the document's "timeField" property to make it the authoritative stream timestamp.
```
src_hard_coded = {
$source: {
// define our event "timestamp_msg" as the _ts
timeField: { '$dateFromString': { dateString: '$timestamp_msg' } },
// our hard-coded dataset
documents: [
{'id': 'entity_1', 'value': 1, 'timestamp_msg': '2024-02-13T14:51:39.402336'},
{'id': 'entity_1', 'value': 3, 'timestamp_msg': '2024-02-13T14:51:41.402674'},
{'id': 'entity_2', 'value': 7, 'timestamp_msg': '2024-02-13T14:51:43.402933'},
{'id': 'entity_1', 'value': 4, 'timestamp_msg': '2024-02-13T14:51:45.403352'},
{'id': 'entity_2', 'value': 1, 'timestamp_msg': '2024-02-13T14:51:47.403752'}
]
}
}
```
At this point, we have everything we need to test new pipelines and create proofs of concept in a convenient and self-contained form. In a subsequent article, we will demonstrate how to connect to various streaming sources.
## Tip and tricks
At the time of publishing, Atlas Stream Processing is in public preview and there are a number of known Stream Processing limitations that you should be aware of, such as regional data center availability, connectivity with other Atlas projects, and user privileges.
When running an ephemeral stream processor via sp.process(), many errors (JSON serialization issue, late data, divide by zero, $validate errors) that might have gone to a dead letter queue (DLQ) are sent to the default output to help you debug.
For SPs created with sp.createStreamProcessor(), you'll have to configure your DLQ manually. Consult the documentation for this. On the "Manage Stream Processor" documentation page, search for "Define a DLQ."
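As a sketch, passing a DLQ at creation time looks roughly like this, assuming a connection to your Atlas cluster named "atlasConnection" is already registered in the instance's connection registry:
```
dlq_options = {
  dlq: {
    connectionName: "atlasConnection",
    db: "streamDemo",
    coll: "deadLetterQueue"
  }
}
sp.createStreamProcessor("spWithDlq", sp_pipeline, dlq_options);
```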
After merging data into an Atlas database, it is possible to use existing pipeline aggregation building tools in the Atlas GUI's builder or MongoDB Compass to create and debug pipelines. Since these tools are meant for the core database API, remember that some operators are not supported by stream processors, and streaming features like windowing are not currently available.
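For example, a $merge stage appended to the pipeline from earlier would write the windowed results into a collection (again assuming a registered Atlas connection named "atlasConnection"):
```
m = {
  $merge: {
    into: {
      connectionName: "atlasConnection",
      db: "streamDemo",
      coll: "aggregatedValues"
    }
  }
}
sp.process( [src_hard_coded, w, m] );
```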
## Conclusion
With that, you should have everything you need to get your first stream processor up and running. In a future post, we will dive deeper into connecting to different sources of data for your stream processors.
If you have any questions, share them in our community forum, meet us during local MongoDB User Groups (MUGs), or come check out one of our MongoDB .local events.
## References
- MongoDB Atlas Stream Processing Documentation
- Introducing Atlas Stream Processing - Simplifying the Path to Reactive, Responsive, Event-Driven Apps
- The Challenges and Opportunities of Processing Streaming Data
- Atlas Stream Processing is Now in Public Preview (Feb 13, 2024)
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt9fc619823204a23c/65fcd88eba94f0ad8e7d1460/atlas-stream-processor-connected-visual-studio-code.png | md | {
"tags": [
"Atlas"
],
"pageDescription": "Learn to set up and run your first MongoDB Atlas stream processor with our straightforward tutorial. Discover how to create instances, code in Visual Studio Code, and aggregate stream data effectively.",
"contentType": "Quickstart"
} | Introduction to Atlas Stream Processing Development | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/languages/java/virtual-threads-reactive-programming | created | # Optimizing Java Performance With Virtual Threads, Reactive Programming, and MongoDB
## Introduction
When I first heard about Project Loom and virtual threads, my first thought was that this was a death sentence for
reactive programming. That didn't sound like bad news at first: reactive programming comes with an additional layer of
complexity, and being able to write imperative code without wasting resources was music to my ears.
But I was actually wrong, and a bit more reading and learning helped me understand why thinking this was a mistake.
In this post, we'll explore virtual threads and reactive programming, their differences, and how we can leverage both in
the same project to achieve peak concurrency performance in Java.
Learn more about virtual threads support with MongoDB in my previous post on this topic.
## Virtual threads
### Traditional thread model in Java
In traditional Java concurrency, threads are heavyweight entities managed by the operating system. Each OS
thread is wrapped by a platform thread which is managed by the Java Virtual Machine (JVM) that executes the Java code.
Each thread requires significant system resources, leading to limitations in scalability when dealing with a
large number of concurrent tasks. Context switching between threads is also resource-intensive and can deteriorate the
performance.
### Introducing virtual threads
Virtual threads, introduced by Project Loom in JEP 444, are lightweight by
design and aim to overcome the limitations of traditional threads and create high-throughput concurrent applications.
They implement `java.lang.Thread` and they are managed by the JVM. Several of them can
run on the same platform thread, making them more efficient to work with a large number of small concurrent tasks.
### Benefits of virtual threads
Virtual threads allow the Java developer to use the system resources more efficiently and non-blocking I/O.
But with the closely related JEP 453: Structured Concurrency and JEP 446: Scoped Values,
virtual threads also support structured concurrency to treat a group of related tasks as a single unit of work and
divide a task into smaller independent subtasks to improve response time and throughput.
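As a quick sketch of what structured concurrency looks like (it is still a preview API in Java 21, so it requires `--enable-preview`; the two fetch methods below are placeholders, not part of any real API):
```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.StructuredTaskScope;

public class StructuredConcurrencyExample {

    // Placeholder subtasks standing in for real blocking calls
    static String fetchUser() { return "user-42"; }
    static Integer fetchOrder() { return 7; }

    public static void main(String[] args) throws InterruptedException, ExecutionException {
        try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
            // Both subtasks run concurrently, each on its own virtual thread
            var user = scope.fork(StructuredConcurrencyExample::fetchUser);
            var order = scope.fork(StructuredConcurrencyExample::fetchOrder);

            scope.join()           // wait for both subtasks
                 .throwIfFailed(); // propagate the first failure, if any

            System.out.println(user.get() + " / " + order.get());
        }
    }
}
```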
### Example
Here is a basic Java example.
```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
public class VirtualThreadsExample {
public static void main(String[] args) {
try (ExecutorService virtualExecutor = Executors.newVirtualThreadPerTaskExecutor()) {
for (int i = 0; i < 10; i++) {
int taskNumber = i + 1;
Runnable task = () -> taskRunner(taskNumber);
virtualExecutor.submit(task);
}
}
}
private static void taskRunner(int number) {
System.out.println("Task " + number + " executed by virtual thread: " + Thread.currentThread());
}
}
```
Output of this program:
```
Task 6 executed by virtual thread: VirtualThread[#35]/runnable@ForkJoinPool-1-worker-6
Task 2 executed by virtual thread: VirtualThread[#31]/runnable@ForkJoinPool-1-worker-2
Task 10 executed by virtual thread: VirtualThread[#39]/runnable@ForkJoinPool-1-worker-10
Task 1 executed by virtual thread: VirtualThread[#29]/runnable@ForkJoinPool-1-worker-1
Task 5 executed by virtual thread: VirtualThread[#34]/runnable@ForkJoinPool-1-worker-5
Task 7 executed by virtual thread: VirtualThread[#36]/runnable@ForkJoinPool-1-worker-7
Task 4 executed by virtual thread: VirtualThread[#33]/runnable@ForkJoinPool-1-worker-4
Task 3 executed by virtual thread: VirtualThread[#32]/runnable@ForkJoinPool-1-worker-3
Task 8 executed by virtual thread: VirtualThread[#37]/runnable@ForkJoinPool-1-worker-8
Task 9 executed by virtual thread: VirtualThread[#38]/runnable@ForkJoinPool-1-worker-9
```
We can see that the tasks ran in parallel — each in a different virtual thread, managed by a single `ForkJoinPool` and
its associated workers.
## Reactive programming
First of all, reactive programming is a programming paradigm whereas virtual threads
are "just" a technical solution. Reactive programming revolves around asynchronous and event-driven programming
principles, offering solutions to manage streams of data and asynchronous operations efficiently.
In Java, reactive programming is traditionally implemented with
the observer pattern.
The pillars of reactive programming are:
- Non-blocking I/O.
- Stream-based asynchronous communication.
- Back-pressure handling to prevent overwhelming downstream components with more data than they can handle.
The only common point of interest with virtual threads is the first one: non-blocking I/O.
### Reactive programming frameworks
The main frameworks in Java that follow the reactive programming principles are:
- Reactive Streams: provides a standard for asynchronous stream processing with
non-blocking back pressure.
- RxJava: JVM implementation of Reactive Extensions.
- Project Reactor: foundation of the reactive stack in the Spring ecosystem.
### Example
MongoDB also offers an implementation of the Reactive Streams API:
the MongoDB Reactive Streams Driver.
Here is an example where I insert a document in MongoDB and then retrieve it.
```java
import com.mongodb.client.result.InsertOneResult;
import com.mongodb.quickstart.SubscriberHelpers.OperationSubscriber;
import com.mongodb.quickstart.SubscriberHelpers.PrintDocumentSubscriber;
import com.mongodb.reactivestreams.client.MongoClient;
import com.mongodb.reactivestreams.client.MongoClients;
import com.mongodb.reactivestreams.client.MongoCollection;
import org.bson.Document;
public class MongoDBReactiveExample {
public static void main(String[] args) {
try (MongoClient mongoClient = MongoClients.create("mongodb://localhost")) {
MongoCollection<Document> coll = mongoClient.getDatabase("test").getCollection("testCollection");
Document doc = new Document("reactive", "programming");
var insertOneSubscriber = new OperationSubscriber<InsertOneResult>();
coll.insertOne(doc).subscribe(insertOneSubscriber);
insertOneSubscriber.await();
var printDocumentSubscriber = new PrintDocumentSubscriber();
coll.find().first().subscribe(printDocumentSubscriber);
printDocumentSubscriber.await();
}
}
}
```
> Note: The `SubscriberHelpers.OperationSubscriber` and `SubscriberHelpers.PrintDocumentSubscriber` classes come from
> the [Reactive Streams Quick Start Primer.
> You can find
> the SubscriberHelpers.java
> in the MongoDB Java Driver repository code examples.
## Virtual threads and reactive programming working together
As you might have understood, virtual threads and reactive programming aren't competing against each other, and they
certainly agree on one thing: Blocking I/O operations is evil!
Who said that we had to make a choice? Why not use them both to achieve peak performance and prevent blocking I/Os once
and for all?
Good news: The `reactor-core` library added virtual threads support in 3.6.0. Project Reactor is the library that provides a rich and functional implementation of the Reactive Streams API and underpins the reactive stack in Spring Boot and WebFlux.
This means that we can use virtual threads in a Spring Boot project that is using MongoDB Reactive Streams Driver and
Webflux.
There are a few conditions though:
- Use Tomcat because — as I'm writing this post — Netty (used by default by Webflux)
doesn't support virtual threads. See GitHub issues 12848
and 39425 for more details.
- Activate virtual threads: `spring.threads.virtual.enabled=true` in `application.properties`.
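Independently of that Spring property, Reactor itself can offload blocking calls onto virtual threads by wrapping a virtual-thread executor in a scheduler. Here is a minimal standalone sketch (an illustration, not code from the sample repository):
```java
import reactor.core.publisher.Mono;
import reactor.core.scheduler.Scheduler;
import reactor.core.scheduler.Schedulers;

import java.util.concurrent.Executors;

public class VirtualThreadSchedulerDemo {
    public static void main(String[] args) {
        // Each task submitted to this scheduler runs on its own virtual thread (Java 21+).
        Scheduler virtualScheduler =
                Schedulers.fromExecutorService(Executors.newVirtualThreadPerTaskExecutor());

        String result = Mono.fromCallable(() -> {
                    // A blocking call (JDBC, file I/O, ...) is cheap to park on a virtual thread.
                    Thread.sleep(200);
                    return "done on " + Thread.currentThread();
                })
                .subscribeOn(virtualScheduler)
                .block();

        System.out.println(result);
    }
}
```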
### Let's test
In the repository, my colleague Wen Jie Teo and I
updated the `pom.xml` and `application.properties` so we could use virtual threads in this reactive project.
You can run the following commands to get this project running quickly and test that it's running with virtual threads
correctly. You can get more details in the
README.md file but here is the gist.
Here are the instructions in English:
- Clone the repository and access the folder.
- Update the log level in `application.properties` to `info`.
- Start a local MongoDB single node replica set instance or use MongoDB Atlas.
- Run the `setup.js` script to initialize the `accounts` collection.
- Start the Java application.
- Test one of the APIs available.
Here are the instructions translated into Bash.
First terminal:
```shell
git clone git@github.com:mongodb-developer/mdb-spring-boot-reactive.git
cd mdb-spring-boot-reactive/
sed -i 's/warn/info/g' src/main/resources/application.properties
docker run --rm -d -p 27017:27017 -h $(hostname) --name mongo mongo:latest --replSet=RS && sleep 5 && docker exec mongo mongosh --quiet --eval "rs.initiate();"
mongosh --file setup.js
mvn spring-boot:run
```
> Note: On macOS, you may have to use `sed -i '' 's/warn/info/g' src/main/resources/application.properties` if you are not using `gnu-sed`, or you can just edit the file manually.
Second terminal:
```shell
curl 'localhost:8080/account' -H 'Content-Type: application/json' -d '{"accountNum": "1"}'
```
If everything worked as planned, you should see this line in the first terminal (where you are running Spring).
```
Stack trace's last line: java.base/java.lang.VirtualThread.run(VirtualThread.java:309) from POST /account
```
This is the last line in the stack trace that we are logging. It proves that we are using virtual threads to handle
our query.
If we disable the virtual threads in the `application.properties` file and try again, we'll read instead:
```
Stack trace's last line: java.base/java.lang.Thread.run(Thread.java:1583) from POST /account
```
This time, we are using a classic `java.lang.Thread` instance to handle our query.
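If you want to reproduce this check in isolation, the following sketch (an assumption about the logging approach, not the repository's exact code) prints the bottom stack frame for a virtual and a platform thread; the precise frame text can vary between JDK builds:
```java
public class LastFrameDemo {
    static String lastStackFrame() {
        StackTraceElement[] frames = Thread.currentThread().getStackTrace();
        return frames[frames.length - 1].toString();  // bottom-most frame of the current thread
    }

    public static void main(String[] args) throws InterruptedException {
        Thread.ofVirtual().start(() ->
                System.out.println("Virtual thread : " + lastStackFrame())).join();
        Thread.ofPlatform().start(() ->
                System.out.println("Platform thread: " + lastStackFrame())).join();
    }
}
```
On a virtual thread, the bottom frame points at `java.lang.VirtualThread.run`, while a platform thread bottoms out at `java.lang.Thread.run`, which is exactly the difference shown in the two log lines above.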
## Conclusion
Virtual threads and reactive programming are not mortal enemies. The truth is actually far from that.
The combination of virtual threads’ advantages over standard platform threads with the best practices of reactive
programming opens up new frontiers of scalability, responsiveness, and efficient resource utilization for your
applications. Be gone, blocking I/Os!
MongoDB Reactive Streams Driver is fully equipped to
benefit from both virtual threads optimizations with Java 21, and — as always — benefit from the reactive programming
principles and best practices.
I hope this post motivated you to give it a try. Deploy your cluster on
MongoDB Atlas and give the
repository a spin.
For further guidance and support, and to engage with a vibrant community of developers, head over to the
MongoDB Forum where you can find help, share insights, and ask those
burning questions. Let's continue pushing the boundaries of Java development together!
| md | {
"tags": [
"Java",
"MongoDB",
"Spring"
],
"pageDescription": "Join us as we delve into the dynamic world of Java concurrency with Virtual Threads and Reactive Programming, complemented by MongoDB's seamless integration. Elevate your app's performance with practical tips and real-world examples in this comprehensive guide.",
"contentType": "Article"
} | Optimizing Java Performance With Virtual Threads, Reactive Programming, and MongoDB | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/languages/cpp/me-and-the-devil-bluez-1 | created | # Me and the Devil BlueZ: Implementing a BLE Central in Linux - Part 1
In my last article, I covered the basic Bluetooth Low Energy concepts required to implement a BLE peripheral in an MCU board. We used a Raspberry Pi Pico board and MicroPython for our implementation. We ended up with a prototype firmware that used the on-board LED, read from the on-board temperature sensor, and implemented a BLE peripheral with two services and several characteristics – one that depended on measured data and could push notifications to its client.
In this article, we will be focusing on the other side of the BLE communication: the BLE central, rather than the BLE peripheral. Our collecting station, which is going to gather the data from the sensors, is a Raspberry Pi 3A+ running a Linux distribution: Raspberry Pi OS Bookworm, a Debian derivative commonly used on this platform.
BlueZ has been the official Linux Bluetooth stack since it was merged into the kernel, replacing the previously available OpenBT.
Initially, all the tools were command-line based and the libraries used raw sockets to access the Host Controller Interface offered by the hardware. But since the early days of its adoption, there was interest in integrating it into the different desktop environments, mainly GNOME and KDE. Sharing the Bluetooth interface across the different desktop applications required a different approach: a daemon that took care of all the Bluetooth tasks that take place outside of the Linux kernel, and an interface that would allow sharing access to that daemon. D-Bus had been designed as a common initiative for interoperability among free-software desktop environments, managed by FreeDesktop, and had already been adopted by the major Linux desktops, so it became the preferred option for that interface.
### D-Bus
D-Bus, short for desktop bus, is an interprocess communication mechanism that uses a message bus. The bus is responsible for taking the messages sent by any process connected to it and delivering them to other processes in the same bus.
For a long time, `hciconfig` and `hcitool` were the blessed tools to work with Bluetooth, but they used raw sockets and were deprecated around 2017. Nowadays, the recommended tools are `bluetoothctl` and `btmgmt`, although I believe that the old tools have been reworked under the hood and are available without using raw sockets.
Enabling the Bluetooth radio was usually done with `sudo hciconfig hci0 up`. Nowadays, we can use `bluetoothctl` instead:
```sh
bluetoothctl
[bluetooth]# show
Controller XX:XX:XX:XX:XX:XX (public)
Name: ...
Alias: ...
Powered: no
...
[bluetooth]# power on
Changing power on succeeded
[CHG] Controller XX:XX:XX:XX:XX:XX Powered: yes
[bluetooth]# show
Controller XX:XX:XX:XX:XX:XX (public)
Name: ...
Alias: ...
Powered: yes
...
```
With the radio on, we can start scanning for BLE devices:
```sh
bluetoothctl
[bluetooth]# menu scan
[bluetooth]# transport le
[bluetooth]# back
[bluetooth]# scan on
[bluetooth]# devices
```
This shows several devices and my RP2 here:
> Device XX:XX:XX:XX:XX:XX RP2-SENSOR
Now that we know the MAC address/name pairs, we can use the former piece of data to connect to it:
```sh
[bluetooth]# connect XX:XX:XX:XX:XX:XX
Attempting to connect to XX:XX:XX:XX:XX:XX
Connection successful
[NEW] Primary Service (Handle 0x2224)
/org/bluez/hci0/dev_28_CD_C1_0F_4B_AE/service0004
00001801-0000-1000-8000-00805f9b34fb
Generic Attribute Profile
[NEW] Characteristic (Handle 0x7558)
/org/bluez/hci0/dev_28_CD_C1_0F_4B_AE/service0004/char0005
00002a05-0000-1000-8000-00805f9b34fb
Service Changed
[NEW] Primary Service (Handle 0x78c4)
/org/bluez/hci0/dev_28_CD_C1_0F_4B_AE/service0007
0000180a-0000-1000-8000-00805f9b34fb
Device Information
[NEW] Characteristic (Handle 0x7558)
/org/bluez/hci0/dev_28_CD_C1_0F_4B_AE/service0007/char0008
00002a29-0000-1000-8000-00805f9b34fb
Manufacturer Name String
[NEW] Characteristic (Handle 0x7558)
/org/bluez/hci0/dev_28_CD_C1_0F_4B_AE/service0007/char000a
00002a24-0000-1000-8000-00805f9b34fb
Model Number String
[NEW] Characteristic (Handle 0x7558)
/org/bluez/hci0/dev_28_CD_C1_0F_4B_AE/service0007/char000c
00002a25-0000-1000-8000-00805f9b34fb
Serial Number String
[NEW] Characteristic (Handle 0x7558)
/org/bluez/hci0/dev_28_CD_C1_0F_4B_AE/service0007/char000e
00002a26-0000-1000-8000-00805f9b34fb
Firmware Revision String
[NEW] Characteristic (Handle 0x7558)
/org/bluez/hci0/dev_28_CD_C1_0F_4B_AE/service0007/char0010
00002a27-0000-1000-8000-00805f9b34fb
Hardware Revision String
[NEW] Primary Service (Handle 0xb324)
/org/bluez/hci0/dev_28_CD_C1_0F_4B_AE/service0012
0000181a-0000-1000-8000-00805f9b34fb
Environmental Sensing
[NEW] Characteristic (Handle 0x7558)
/org/bluez/hci0/dev_28_CD_C1_0F_4B_AE/service0012/char0013
00002a1c-0000-1000-8000-00805f9b34fb
Temperature Measurement
[NEW] Descriptor (Handle 0x75a0)
/org/bluez/hci0/dev_28_CD_C1_0F_4B_AE/service0012/char0013/desc0015
00002902-0000-1000-8000-00805f9b34fb
Client Characteristic Configuration
[NEW] Descriptor (Handle 0x75a0)
/org/bluez/hci0/dev_28_CD_C1_0F_4B_AE/service0012/char0013/desc0016
0000290d-0000-1000-8000-00805f9b34fb
Environmental Sensing Trigger Setting
[RP2-SENSOR]# scan off
```
Now we can use the General Attribute Profile (GATT) to send commands to the device, including listing the attributes, reading a characteristic, and receiving notifications.
```sh
[RP2-SENSOR]# menu gatt
[RP2-SENSOR]# list-attributes
...
Characteristic (Handle 0x0001)
/org/bluez/hci0/dev_28_CD_C1_0F_4B_AE/service0012/char0013
00002a1c-0000-1000-8000-00805f9b34fb
Temperature Measurement
...
[RP2-SENSOR]# select-attribute /org/bluez/hci0/dev_28_CD_C1_0F_4B_AE/service0012/char0013
[MPY BTSTACK:/service0012/char0013]# read
Attempting to read /org/bluez/hci0/dev_28_CD_C1_0F_4B_AE/service0012/char0013
[CHG] Attribute /org/bluez/hci0/dev_28_CD_C1_0F_4B_AE/service0012/char0013 Value:
00 0c 10 00 fe .....
00 0c 10 00 fe .....
[MPY BTSTACK:/service0012/char0013]# notify on
[CHG] Attribute /org/bluez/hci0/dev_28_CD_C1_0F_4B_AE/service0012/char0013 Notifying: yes
Notify started
[CHG] Attribute /org/bluez/hci0/dev_28_CD_C1_0F_4B_AE/service0012/char0013 Value:
00 3b 10 00 fe .;...
[CHG] Attribute /org/bluez/hci0/dev_28_CD_C1_0F_4B_AE/service0012/char0013 Value:
00 6a 10 00 fe .j...
[MPY BTSTACK:/service0012/char0013]# notify off
```
And we leave it in its original state:
```sh
[MPY BTSTACK:/service0012/char0013]# back
[MPY BTSTACK:/service0012/char0013]# disconnect
Attempting to disconnect from 28:CD:C1:0F:4B:AE
[CHG] Device 28:CD:C1:0F:4B:AE ServicesResolved: no
Successful disconnected
[CHG] Device 28:CD:C1:0F:4B:AE Connected: no
[bluetooth]# power off
Changing power off succeeded
[CHG] Controller B8:27:EB:4D:70:A6 Powered: no
[CHG] Controller B8:27:EB:4D:70:A6 Discovering: no
[bluetooth]# exit
```
### Query the services in the system bus
`dbus-send` comes with D-Bus.
We are going to send a message to the system bus. The message is addressed to "org.freedesktop.DBus", which is the service implemented by D-Bus itself. We use the single D-Bus instance, "/org/freedesktop/DBus". And we use the "Introspect" method of the "org.freedesktop.DBus.Introspectable" interface. Hence, it is a method call. Finally, it is important to highlight that we must request that the reply gets printed, with `--print-reply`, if we want to be able to watch it.
```sh
dbus-send --system --print-reply --dest=org.freedesktop.DBus /org/freedesktop/DBus org.freedesktop.DBus.Introspectable.Introspect | less
```
This method call has a long reply, but let me highlight some interesting parts. Right after the header, we get the description of the interface "org.freedesktop.DBus":
```xml
...
...
```
These are the methods, properties and signals related to handling connections to the bus and information about it. Methods may have parameters (args with direction "in") and results (args with direction "out") and both define the type of the expected data. Signals also declare the arguments, but they are broadcasted and no response is expected, so there is no need to use "direction."
Then we have an interface to expose the D-Bus properties:
```xml
...
```
And a description of the "org.freedesktop.DBus.Introspectable" interface that we have already used to obtain all the interfaces. Inception? Maybe.
```xml
```
Finally, we find three other interfaces:
```xml
...
...
...
```
Let's use the method of the first interface that tells us what is connected to the bus. In my case, I get:
```sh
dbus-send --system --print-reply --dest=org.freedesktop.DBus /org/freedesktop/DBus org.freedesktop.DBus.ListNames
method return time=1698320750.822056 sender=org.freedesktop.DBus -> destination=:1.50 serial=3 reply_serial=2
array [
string "org.freedesktop.DBus"
string ":1.7"
string "org.freedesktop.login1"
string "org.freedesktop.timesync1"
string ":1.50"
string "org.freedesktop.systemd1"
string "org.freedesktop.Avahi"
string "org.freedesktop.PolicyKit1"
string ":1.43"
string "org.bluez"
string "org.freedesktop.ModemManager1"
string ":1.0"
string ":1.1"
string ":1.2"
string ":1.3"
string ":1.4"
string "fi.w1.wpa_supplicant1"
string ":1.5"
string ":1.6"
]
```
The "org.bluez" is the service that we want to use. We can use introspect with it:
```sh
dbus-send --system --print-reply=literal --dest=org.bluez /org/bluez org.freedesktop.DBus.Introspectable.Introspect |
xmllint --format - | less
```
> xmllint can be installed with `sudo apt-get install libxml2-utils`.
After the header, I get the following interfaces:
```xml
```
Have you noticed the node that represents the child object for the HCI0? We could also have learned about it using `busctl tree org.bluez`. And we can query that child object too. We will now obtain the information about HCI0 using introspection but send the message to BlueZ and refer to the HCI0 instance.
```sh
dbus-send --system --print-reply=literal --dest=org.bluez /org/bluez/hci0 org.freedesktop.DBus.Introspectable.Introspect | xmllint --format - | less
```
```xml
```
Let's check the status of the Bluetooth radio using D-Bus messages to query the corresponding property:
```sh
dbus-send --system --type=method_call --print-reply --dest=org.bluez /org/bluez/hci0 org.freedesktop.DBus.Properties.Get string:org.bluez.Adapter1 string:Powered
```
We can then switch the radio on, setting the same property:
```sh
dbus-send --system --type=method_call --print-reply --dest=org.bluez /org/bluez/hci0 org.freedesktop.DBus.Properties.Set string:org.bluez.Adapter1 string:Powered variant:boolean:true
```
And check the status of the radio again to verify the change:
```sh
dbus-send --system --type=method_call --print-reply --dest=org.bluez /org/bluez/hci0 org.freedesktop.DBus.Properties.Get string:org.bluez.Adapter1 string:Powered
```
The next step is to start scanning, and it seems that we should use this command:
```sh
dbus-send --system --type=method_call --print-reply --dest=org.bluez /org/bluez/hci0 org.bluez.Adapter1.StartDiscovery
```
But this doesn't work because `dbus-send` exits almost immediately and BlueZ keeps track of the D-Bus clients that request the discovery.
### Capture the messages produced by `bluetoothctl`
Instead, we are going to use the command line utility `bluetoothctl` and monitor the messages that go through the system bus.
We start `dbus-monitor` for the system bus and redirect the output to a file. We launch `bluetoothctl` and inspect the log. This connects to the D-Bus with a "Hello" method. It invokes AddMatch to show interest in BlueZ. It does `GetManagedObjects` to find the objects that are managed by BlueZ.
We then select Low Energy (`menu scan`, `transport le`, `back`). This doesn't produce messages because it just configures the tool.
We start scanning (`scan on`), connect to the device (`connect XX:XX:XX:XX:XX:XX`), and stop scanning (`scan off`). In the log, the second message is a method call to start scanning (`StartDiscovery`), preceded by a call (to `SetDiscoveryFilter`) with LE as a parameter. Then, we find signals –one per device that is discoverable– with all the metadata of the device, including its MAC address, its name (if available), and the transmission power that is normally used to estimate how close a device is, among other properties. The app shows its interest in the devices it has found with an `AddMatch` method call, and we can see signals with properties updates.
Then, a call to the method `Connect` of the `org.bluez.Device1` interface is invoked with the path pointing to the desired device. Finally, when we stop scanning, we can find an immediate call to `StopDiscovery`, and the app declares that it is no longer interested in updates of the previously discovered devices with calls to the `RemoveMatch` method. A little later, an announcement signal tells us that the "connected" property of that device has changed, and then there's a signal letting us know that `InterfacesAdded` implemented `org.bluez.GattService1`, `org.bluez.GattCharacteristic1` for each of the services and characteristics. We get a signal with a "ServicesResolved" property stating that the present services are Generic Access Service, Generic Attribute Service, Device Information Service, and Environmental Sensing Service (0x1800, 0x1801, 0x180A, and 0x181A). In the process, the app uses `AddMatch` to show interest in the different services and characteristics.
We select the attribute for the temperature characteristic (`select-attribute /org/bluez/hci0/dev_28_CD_C1_0F_4B_AE/service0012/char0013`), which doesn't produce any D-Bus messages. Then, we `read` the characteristic that generates a method call to `ReadValue` of the `org.bluez.GattCharacteristic1` interface with the path that we have previously selected. Right after, we receive a method return message with the five bytes of that characteristic.
As for notifications, when we enable them (`notify on`), a method call to `StartNotify` is issued with the same parameters as the `ReadValue` one. The notification comes as a `PropertiesChanged` signal that contains the new value and then we send the `StopNotify` command. Both changes to the notification state produce signals that share the new state.
## Recap and future content
In this article, I have explained all the steps required to interact with the BLE peripheral from the command line. Then, I did some reverse engineering to understand how those steps translated into D-Bus messages. Find the resources for this article and links to others.
In the next article, I will try to use the information that we have gathered about the D-Bus messages to interact with the Bluetooth stack using C++.
If you have questions or feedback, join me in the MongoDB Developer Community!
| md | {
"tags": [
"C++",
"RaspberryPi"
],
"pageDescription": "In this new article, we will be focusing on the client side of the Bluetooth Low Energy communication: the BLE central.",
"contentType": "Tutorial"
} | Me and the Devil BlueZ: Implementing a BLE Central in Linux - Part 1 | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/ecommerce-search-openai | created | # Build an E-commerce Search Using MongoDB Vector Search and OpenAI
## Introduction
In this article, we will build a product search system using MongoDB Vector Search and OpenAI APIs. We will build a search API endpoint that receives natural language queries and delivers relevant products as results in JSON format. In this article, we will see how to generate vector embeddings using the OpenAI embedding model, store them in MongoDB, and query the same using Vector Search. We will also see how to use the OpenAI text generation model to classify user search inputs and build our DB query.
The API server is built using Node.js and Express. We will be building API endpoints for creating, updating, and searching. Also note that this guide focuses only on the back end and to facilitate testing, we will be using Postman. Relevant screenshots will be provided in the respective sections for clarity. The below GIF shows a glimpse of what we will be building.
1. Clone the repository.
```
git clone https://github.com/ashiqsultan/mongodb-vector-openai.git
```
2. Create a `.env` file in the root directory of the project.
```
touch .env
```
3. Create two variables in your `.env` file: **MONGODB_URI** and **OPENAI_API_KEY**.
You can follow the steps provided in the OpenAI docs to get the API key.
```
echo "MONGODB_URI=your_mongodb_uri" >> .env
echo "OPENAI_API_KEY=your_openai_api_key" >> .env
```
4. Install node modules.
```
npm install # (or) yarn install
```
5. Run `yarn run dev` or `npm run dev` to start the server.
```
npm run dev # (or) yarn run dev
```
If the `MONGODB_URI` is correct, it should connect without any error and start the server at port 5000. For the OpenAI API key, you need to create a new account.
Once you have the connection string, just paste it in the `.env` file as `MONGODB_URI`. In our codebase, we have created a separate `dbclient.ts` file which exports a singleton function to connect with MongoDB. Now, we can call this function at the entry point file of our application like below.
```
// server.ts
import dbClient from './dbClient';
server.listen(app.get('port'), async () => {
try {
await dbClient();
} catch (error) {
console.error(error);
}
});
```
## Collection schema overview
You can refer to the schema model file in the codebase. We will keep the collection schema simple. Each product item will maintain the interface shown below.
```
interface IProducts {
name: string;
category: string;
description: string;
price: number;
embedding: number[];
}
```
This interface is self-explanatory, with properties such as name, category, description, and price, representing typical attributes of a product. The unique addition is the embedding property, which will be explained in subsequent sections. This straightforward schema provides a foundation for organizing and storing product data efficiently.
## Setting up vector index for collection
To enable semantic search in our MongoDB collection, we need to set up vector indexes. If that sounds fancy, in simpler terms, this allows us to query the collection using natural language.
Follow the step-by-step procedure outlined in the documentation to create a vector index from the Atlas UI.
Below is the config we need to provide in the JSON editor when creating the vector index.
```
{
"mappings": {
"dynamic": true,
"fields": {
"embedding": {
"dimensions": 1536,
"similarity": "euclidean",
"type": "knnVector"
}
}
}
}
```
For those who prefer visual guides, watch our video explaining the process.
The key variables in the index configuration are the field name in the collection to be indexed (here, it's called **embedding**) and the dimensions value (here, set to **1536**). The significance of this value will be discussed in the next section.
## Generating embedding using OpenAI
We have created a reusable util function in our codebase which will take a string as an input and return a vector embedding as output. This function can be used in places where we need to call the OpenAI embedding model.
```
async function generateEmbedding(inputText: string): Promise<number[] | null> {
try {
const vectorEmbedding = await openai.embeddings.create({
input: inputText,
model: 'text-embedding-ada-002',
});
const embedding = vectorEmbedding.data[0].embedding;
return embedding;
} catch (error) {
console.error('Error generating embedding:', error);
return null;
}
}
```
The function is fairly straightforward. The specific model employed in our example is `text-embedding-ada-002`. However, you have the flexibility to choose other embedding models but it's crucial to ensure that the output dimensions of the selected model match the dimensions we have set when initially creating the vector index.
## What should we embed for Vector Search?
Now that we know what an embedding is, let's discuss what to embed. For semantic search, you should embed all the fields that you intend to query. This includes any relevant information or features that you want to use as search criteria. In our product example, we will be embedding **the name of the product, its category, and its description**.
## Embed on create
To create a new product item, we need to make a POST call to “localhost:5000/product/” with the required properties **{name, category, description, price}**. This will call the createOne service which handles the creation of a new product item.
```
// Example Product item
// product = {
// name: 'foo phone',
// category: Electronics,
// description: 'This phone has good camera',
// price: 150,
// };
const toEmbed = {
name: product.name,
category: product.category,
description: product.description,
};
// Generate Embedding
const embedding = await generateEmbedding(JSON.stringify(toEmbed));
const documentToInsert = {
...product,
embedding,
}
await productCollection.insertOne(documentToInsert);
```
In the code snippet above, we first create an object named `toEmbed` containing the fields intended for embedding. This object is then converted to a stringified JSON and passed to the `generateEmbedding` function. As discussed in the previous section, generateEmbedding will call the OpenAPI embedding model and return us the required embedding array. Once we have the embedding, the new product document is created using the `insertOne` function. The below screenshot shows the create request and its response.
## Embed on update
To update an existing product item, we need to make a PATCH call to “localhost:5000/product/{id}”, where id is the MongoDB document id. This will call the updateOne.ts service.
Let's make a PATCH request to update the name of the phone from “foo phone” to “Super Phone.”
```
// updateObj contains the extracted request body with updated data
const updateObj = {
name: “Super Phone"
};
const product = await collection.findOne({ _id });
const objToEmbed = {
name: updateObj.name || product.name,
category: updateObj.category || product.category,
description: updateObj.description || product.description,
};
const embedding = await generateEmbedding(JSON.stringify(objToEmbed));
updateObj.embedding = embedding;
const updatedDoc = await collection.findOneAndUpdate(
{ _id },
{ $set: updateObj },
{
returnDocument: 'after',
projection: { embedding: 0 },
}
);
```
In the above code, the variable `updateObj` contains the PATCH request body data. Here, we are only updating the name. Then, we use `findOne` to get the existing product item. The `objToEmbed` object is constructed to determine which fields to embed in the document. It incorporates both the new values from `updateObj` and the existing values from the `product` document, ensuring that any unchanged fields are retained.
In simple terms, we are re-generating the embedding array with the updated data with the same set of fields we used on the creation of the document. This is important to ensure that our search function works correctly and that the updated document stays relevant to its context.
## Search products
Now comes the search functionality. When the API receives a GET request with a `search` query parameter, it calls the searchProducts service. Let’s look at the search product function step by step.
```
const searchProducts = async (searchText: string): Promise<IProductDocument[]> => {
try {
const embedding = await generateEmbedding(searchText); // Generate Embedding
const gptResponse = (await searchAssistant(searchText)) as IGptResponse;
…
```
In the first line, we are creating embedding using the same `generateEmbedding` function we used for create and update. Let’s park this for now and focus on the second function, `searchAssistant`.
### Search assistant function
This is a reusable function that is responsible for calling the OpenAI completion model. You can find the searchAssistant file on GitHub. It's here that we have described the prompt for the generative model, along with output instructions.
```
async function main(userMessage: string): Promise<any> {
const completion = await openai.chat.completions.create({
messages: [
{
role: 'system',
content: `You are an e-commerce search assistant. Follow the below list of instructions for generating the response.
- You should only output JSON strictly following the Output Format Instructions.
- List of Categories: Books, Clothing, Electronics, Home & Kitchen, Sports & Outdoors.
- Identify whether user message matches any category from the List of Categories else it should be empty string. Do not invent category outside the provided list.
- Identify price range from user message. minPrice and maxPrice must only be number or null.
- Output Format Instructions for JSON: { category: 'Only one category', minPrice: 'Minimum price if applicable else null', maxPrice: 'Maximum Price if applicable else null' }
`,
},
{ role: 'user', content: userMessage },
],
model: 'gpt-3.5-turbo-1106',
response_format: { type: 'json_object' },
});
const outputJson = JSON.parse(completion.choices[0].message.content);
return outputJson;
}
```
### Prompt explanation
You can refer to the OpenAI Chat Completion docs to understand the function definition. Here, we will explain the system prompt. This is the place where we give some context to the model.
* First, we tell the model about its role and instruct it to follow the set of rules we are about to define.
* We explicitly instruct it to output only JSON following the “Output Instruction” we have provided within the prompt.
* Next, we provide a list of categories to classify the user request. This is hardcoded here but in a real-time scenario, we might generate a category list from DB.
* Next, we are instructing it to identify if users have mentioned any price so that we can use that in our aggregation query.
Let’s add some console logs before the return statement and test the function.
```
// … Existing code
const outputJson = JSON.parse(completion.choices[0].message.content);
console.log({ userMessage });
console.log({ outputJson });
return outputJson;
```
With the console logs in place, make a GET request to /products with search query param. Example:
```
// Request
http://localhost:5000/product?search=phones with good camera under 160 dollars
// Console logs from terminal
{ userMessage: 'phones with good camera under 160 dollars' }
{ outputJson: { category: 'Electronics', minPrice: null, maxPrice: 160 } }
```
From the OpenAI response above, we can see that the model has classified the user message under the “Electronics” category and identified the price range. It has followed our output instructions, as well, and returned the JSON we desired. Now, let’s use this output and structure our aggregation pipeline.
### Aggregation pipeline
In our searchProducts file, right after we get the `gptResponse`, we are calling a function called `constructMatch`. The purpose of this function is to construct the $match stage query object using the output we received from the GPT model — i.e., it will extract the category and min and max prices from the GPT response to generate the query.
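The repository implements `constructMatch` in TypeScript. Purely to illustrate the shape of the filter it produces, here is an equivalent sketch written with the MongoDB Java driver's `Filters` builders; the class and method names here are assumptions for the example and are not part of the article's codebase:
```java
import com.mongodb.client.model.Filters;
import org.bson.conversions.Bson;

import java.util.ArrayList;
import java.util.List;

public class MatchStageBuilder {
    // Builds the $match filter from the classification returned by the GPT model.
    public static Bson constructMatch(String category, Double minPrice, Double maxPrice) {
        List<Bson> clauses = new ArrayList<>();
        if (category != null && !category.isBlank()) {
            clauses.add(Filters.eq("category", category));
        }
        if (minPrice != null) {
            clauses.add(Filters.gte("price", minPrice));
        }
        if (maxPrice != null) {
            clauses.add(Filters.lte("price", maxPrice));
        }
        return clauses.isEmpty() ? Filters.empty() : Filters.and(clauses);
    }
}
```
Whatever language you build it in, this filter is what ends up inside the match stage of the aggregation pipeline shown below.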
**Example**
Let’s do a search that includes a price range: **“?search=show me some good programming books between 100 to 150 dollars”**.
With the match stage ready, below is the full aggregation pipeline used in the searchProducts service:
```
const aggCursor = collection.aggregate<IProductDocument>([
{
$vectorSearch: {
index: VECTOR_INDEX_NAME,
path: 'embedding',
queryVector: embedding,
numCandidates: 150,
limit: 10,
},
},
matchStage,
{
$project: {
_id: 1,
name: 1,
category: 1,
description: 1,
price: 1,
score: { $meta: 'vectorSearchScore' },
},
},
]);
```
**The first stage** in our pipeline is the $vectorSearch stage.
* **index:** refers to the vector index name we provided when initially creating the index under the section **Setting up vector index for collection**.
* **path:** the field name in our document that holds the vector values — in our case, the field name itself is **embedding**.
* **queryVector:** the embedded form of the search text. We have generated the embedding for the user’s search text using the same `generateEmbedding` function, and its value is added here.
* **numCandidates:** number of nearest neighbors to use during the search. The value must be less than or equal to (<=) 10000. You can't specify a number less than the number of documents to return (limit).
* **limit:** number of docs to return in the result.
Please refer to the vector search fields docs for more information regarding these fields. You can adjust the numCandidates and limit based on requirements.
**The second stage** is the match stage which just contains the query object we generated using the constructMatch function, as explained previously.
The third stage is the $project stage which only deals with what to show and how to show it. Here, you can omit the fields you don’t wish to return.
## Demonstration
Let’s see our search functionality in action. To do this, we will create a new product and make a search with related keywords. Later, we will update the same product and do a search with keywords matching the updated document.
### Create and search
We can create a new book using our POST request.
**Book 01**
```
{"name": "JavaScript 101",
"category": "Books",
"description": "This is a good book for learning JavaScript for beginners. It covers fundamental concepts such as variables, data types, operators, control flow, functions, and more.",
"price": 60
}
```
The below GIF shows how we can create a book from Postman and view the created book in MongoDB Atlas UI by filtering the category with Books.
After inserting a couple more books in the same way, we can call the search API with natural language queries such as “I want to learn JavaScript” or “I’m preparing for a coding interview” and inspect the returned results.
If you wonder why we see all the books in our response, this is due to our limited sample data of three books. However, in real-world scenarios, if more relevant items are available in DB, then based on the search term, they will have higher scores and be prioritized.
### Update and search
Let’s update something in our books using our PATCH request. Here, we will update our JavaScript 101 book to a Python book using its document _id.
After the update, a search such as “Python for beginners” returns the updated book, confirming that the embedding was regenerated along with the document. Check out the repository and the MongoDB Vector Search documentation for more details. Thanks for reading.
| md | {
"tags": [
"Atlas",
"AI"
],
"pageDescription": "Create an e-commerce semantic search utilizing MongoDB Vector Search and OpenAI models",
"contentType": "Article"
} | Build an E-commerce Search Using MongoDB Vector Search and OpenAI | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/languages/java/secure-api-spring-microsoft-entraid | created | # Secure your API with Spring Data MongoDB and Microsoft EntraID
## Introduction
Welcome to our hands-on tutorial, where you'll learn how to build a RESTful API with Spring Data MongoDB, fortified by the security of Microsoft Entra ID and OAuth2. On this journey, we'll lead you through the creation of a streamlined to-do list API, showcasing not just how to set it up, but also how to secure it effectively.
This guide is designed to provide you with the tools and knowledge needed to implement a secure, functional API from the ground up. Let's dive in and start building something great together!
## Prerequisites
- A MongoDB account and cluster set up
- An Azure subscription (Get started for free)
- Java Development Kit (JDK) version 17 or higher
- Apache Maven
- A Spring Boot application — you can create a **Maven project** with the Spring Initializr; there are a couple of dependencies you will need:
- Spring Web
- OAuth2 Resource Server
- Azure Active Directory
- Select Java version 17 or higher and generate a **JAR**
You can follow along with this tutorial and build your project as you read or you can clone the repository directly:
```bash
git clone git@github.com:mongodb-developer/java-spring-boot-secure-todo-app.git
```
## Create our API with Spring Data MongoDB
Once these prerequisites are in place, we're ready to start setting up our Spring Boot secure RESTful API. Our first step will be to lay the foundation with `application.properties`.
```properties
spring.application.name=todo
spring.cloud.azure.active-directory.enabled=true
spring.cloud.azure.active-directory.profile.tenant-id=
spring.cloud.azure.active-directory.credential.client-id=
spring.security.oauth2.client.registration.azure.client-authentication-method=none
spring.security.oauth2.resourceserver.jwt.issuer-uri=https://login.microsoftonline.com/<your-tenant-id>/v2.0
# The Swagger UI OAuth2 redirect page is served at <your-app-url>/swagger-ui/oauth2-redirect.html
spring.data.mongodb.uri=
spring.data.mongodb.database=
```
- `spring.application.name=todo`: Defines the name of your Spring Boot application
- `spring.cloud.azure.active-directory...`: Integrates your application with Azure AD for authentication and authorization
- `spring.security.oauth2.client.registration.azure.client-authentication-method=none`: Specifies the authentication method for the OAuth2 client; setting it to `none` is used for public clients, where a client secret is not applicable
- `spring.security.oauth2.resourceserver.jwt.issuer-uri`: Points the resource server at your Microsoft Entra ID tenant so incoming JWT bearer tokens can be validated against the correct issuer
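The `TodoService` and `TodoController` below depend on a `Todo` document class and a `TodoRepository` interface. Here is a minimal sketch of both; the exact fields of `Todo` are assumptions for illustration, and only the package names and the `String` ID type are dictated by the rest of the code.
```java
package com.example.todo.model;
import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.mapping.Document;
@Document(collection = "todos")
public class Todo {
    @Id
    private String id;
    private String description;   // assumed field
    private boolean completed;    // assumed field
    public String getId() { return id; }
    public String getDescription() { return description; }
    public void setDescription(String description) { this.description = description; }
    public boolean isCompleted() { return completed; }
    public void setCompleted(boolean completed) { this.completed = completed; }
}
```
And the repository interface, which can stay empty because `MongoRepository` already provides the `findAll`, `findById`, `save`, and `deleteById` methods used by the service:
```java
package com.example.todo.model.repository;
import com.example.todo.model.Todo;
import org.springframework.data.mongodb.repository.MongoRepository;
public interface TodoRepository extends MongoRepository<Todo, String> {
}
```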
Next, create a `service` package and a class TodoService. This will contain our business logic for our application.
```java
package com.example.todo.service;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import com.example.todo.model.Todo;
import com.example.todo.model.repository.TodoRepository;
import java.util.List;
import java.util.Optional;
@Service
public class TodoService {
private final TodoRepository todoRepository;
public TodoService(TodoRepository todoRepository) {
this.todoRepository = todoRepository;
}
public List<Todo> findAll() {
return todoRepository.findAll();
}
public Optional<Todo> findById(String id) {
return todoRepository.findById(id);
}
public Todo save(Todo todo) {
return todoRepository.save(todo);
}
public void deleteById(String id) {
todoRepository.deleteById(id);
}
}
```
To establish your API endpoints, create a `controller` package and a TodoController class. There are a couple of things going on here. For each of the API endpoints we want to restrict access to, we use `@PreAuthorize("hasAuthority('SCOPE_Todo.<scope>')")`, where `<scope>` corresponds to the scopes we will define in Microsoft Entra ID.
We have also relaxed CORS here by allowing all origins. In a production application, you will want to specify who can access this and probably not just allow all, but this is fine for this tutorial.
```java
package com.example.todo.controller;
import com.example.todo.model.Todo;
import com.example.todo.sevice.TodoService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.security.access.prepost.PreAuthorize;
import org.springframework.security.core.Authentication;
import org.springframework.web.bind.annotation.*;
import java.util.List;
@CrossOrigin(origins = "*")
@RestController
@RequestMapping("/api/todos")
public class TodoController {
private final TodoService todoService;
public TodoController(TodoService todoService) {
this.todoService = todoService;
}
@GetMapping
public List<Todo> getAllTodos() {
return todoService.findAll();
}
@GetMapping("/{id}")
public Todo getTodoById(@PathVariable String id) {
return todoService.findById(id).orElse(null);
}
@PostMapping
@PreAuthorize("hasAuthority('SCOPE_Todo.User')")
public Todo createTodo(@RequestBody Todo todo, Authentication authentication) {
return todoService.save(todo);
}
@PutMapping("/{id}")
@PreAuthorize("hasAuthority('SCOPE_Todo.User')")
public Todo updateTodo(@PathVariable String id, @RequestBody Todo todo) {
return todoService.save(todo);
}
@DeleteMapping("/{id}")
@PreAuthorize("hasAuthority('SCOPE_Todo.Admin')")
public void deleteTodo(@PathVariable String id) {
todoService.deleteById(id);
}
}
```
Now, we need to configure Swagger UI for our app. Create a `config` package and an OpenApiConfiguration class. A lot of this is boilerplate, based on the demo applications provided by springdoc.org. We're setting up an authorization flow and specifying the scopes available in our application. We'll create these scopes in Microsoft Entra ID later in this tutorial, but pay attention to the API name used when setting them (`.addString("api://todo/Todo.User", "Access todo as a user")`). You have the option to configure this name later, but it needs to be the same in the application and in Microsoft Entra ID.
```java
package com.example.todo.config;
import io.swagger.v3.oas.models.Components;
import io.swagger.v3.oas.models.OpenAPI;
import io.swagger.v3.oas.models.info.Info;
import io.swagger.v3.oas.models.security.OAuthFlow;
import io.swagger.v3.oas.models.security.OAuthFlows;
import io.swagger.v3.oas.models.security.Scopes;
import io.swagger.v3.oas.models.security.SecurityScheme;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
@Configuration
class OpenApiConfiguration {
@Value("${spring.cloud.azure.active-directory.profile.tenant-id}")
private String tenantId;
@Bean
OpenAPI customOpenAPI() {
OAuthFlow authorizationCodeFlow = new OAuthFlow();
authorizationCodeFlow.setAuthorizationUrl(String.format("https://login.microsoftonline.com/%s/oauth2/v2.0/authorize", tenantId));
authorizationCodeFlow.setRefreshUrl(String.format("https://login.microsoftonline.com/%s/oauth2/v2.0/token", tenantId));
authorizationCodeFlow.setTokenUrl(String.format("https://login.microsoftonline.com/%s/oauth2/v2.0/token", tenantId));
authorizationCodeFlow.setScopes(new Scopes()
.addString("api://todo/Todo.User", "Access todo as a user")
.addString("api://todo/Todo.Admin", "Access todo as an admin"));
OAuthFlows oauthFlows = new OAuthFlows();
oauthFlows.authorizationCode(authorizationCodeFlow);
SecurityScheme securityScheme = new SecurityScheme();
securityScheme.setType(SecurityScheme.Type.OAUTH2);
securityScheme.setFlows(oauthFlows);
return new OpenAPI()
.info(new Info().title("RESTful APIs for Todo"))
.components(new Components().addSecuritySchemes("Microsoft Entra ID", securityScheme));
}
}
```
The last thing we need to do is create a WebConfig class in our `config` package. Here, we just need to disable Cross-Site Request Forgery (CSRF).
```java
package com.example.todo.config;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.web.SecurityFilterChain;
import org.springframework.security.config.annotation.web.configurers.AbstractHttpConfigurer;
@Configuration
public class WebConfig {
@Bean
public SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
http.csrf(AbstractHttpConfigurer::disable);
return http.build();
}
}
```
When using OAuth for authentication in a web application, the necessity of CSRF tokens depends on the specific context of your application and how OAuth is being implemented.
In our application, we are using a single-page application (SPA) for interacting with our API. OAuth is often used with tokens (such as JWTs) obtained via the OAuth Authorization Code Flow with PKCE, so CSRF tokens are not necessary. If your application still uses cookies (traditional web access) for maintaining session state after the OAuth flow, implement CSRF tokens to protect against CSRF attacks. For an API serving an SPA, we will rely on bearer tokens.
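For reference, if you ever want to configure the resource server explicitly instead of relying on the Azure starter's auto-configuration, a filter chain could look roughly like the sketch below; this is an illustration rather than the tutorial's final `WebConfig`:
```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.Customizer;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configurers.AbstractHttpConfigurer;
import org.springframework.security.web.SecurityFilterChain;

@Configuration
public class ExplicitResourceServerConfig {

    @Bean
    public SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
        http.csrf(AbstractHttpConfigurer::disable)
            // Let Swagger UI and the OpenAPI description through without a token.
            .authorizeHttpRequests(auth -> auth
                    .requestMatchers("/swagger-ui/**", "/v3/api-docs/**").permitAll()
                    .anyRequest().authenticated())
            // Validate bearer tokens as JWTs using the issuer-uri from application.properties.
            .oauth2ResourceServer(oauth2 -> oauth2.jwt(Customizer.withDefaults()));
        return http.build();
    }
}
```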
## Expose your RESTful APIs in Microsoft Entra ID
It is time to register a new application with Microsoft Entra ID (formerly known as Azure Active Directory) and get everything ready to secure our RESTful API with OAuth2 authentication and authorization. Microsoft Entra ID is a comprehensive identity and access management (IAM) solution provided by Microsoft. It encompasses various services designed to help manage and secure access to applications, services, and resources across the cloud and on-premises environments.
1. Sign in to the Azure portal. If you have access to multiple tenants, select the tenant in which you want to register an application.
2. Search for and select the **Microsoft Entra ID** service.
- If you don't already have one, create one here.
1. From the left side menu, under **Manage**, select **App registrations** and **New registration**.
2. Enter a name for your application in the **Name** field. For this tutorial, we are going to stick with the classic CRUD example, a to-do list API, so we'll call it `TodoAPI`.
3. For **Supported account types**, select **Accounts in any organizational directory (Any Microsoft Entra directory - Multitenant) and personal Microsoft accounts**. This will allow the widest set of Microsoft entities.
4. Select **Register** to create the application.
5. On the app **Overview** page, look for the **Application (client) ID** value, and then record it for later use. You need it to configure the `application.properties` file for this app.
6. Navigate to **Manage** and click on **Expose an API**. Locate the **Application ID URI** at the top of the page and click **Add**.
7. On the **Edit application ID URI** screen, it's necessary to generate a distinctive Application ID URI. Opt for the provided default `api://{client ID}` or choose a descriptive name like `api://todo` before hitting **Save**.
8. Go to **Manage**, click on **Expose an API**, then **Add a scope**, and provide the specified details:
- For **Scope name**, enter _ToDo.User_.
- For **Who can consent**, select **Admins and Users**.
- For **Admin consent display name**, enter _Create and edit ToDo data_.
- For **Admin consent description**, enter _Allows authenticated users to create and edit the ToDo data._
- For **State**, keep it enabled.
- Select **Add scope**.
9. Repeat the previous steps to add the other scopes: _ToDo.Admin_, which will grant the authenticated user permission to delete.
Now that we have our application created and our EntraID configured, we will look at how to request our access token. At this point, you can upload your API to Azure Spring Apps, following our tutorial, Getting Started With Azure Spring Apps and MongoDB Atlas, but we'll keep everything running local for this tutorial.
## Grant access to our client with Swagger
The RESTful APIs serve as a resource server, safeguarded by Microsoft Entra ID. To obtain an access token, you are required to register a different application within Microsoft Entra ID and assign permissions to the client application.
### Register the client application
We are going to register a second app in Microsoft Entra ID.
1. Repeat steps 1 through 6 above, but this time, name your application `TodoClient`.
2. On the app **Overview** page, look for the **Application (client) ID** value. Record it for later use. You need it to acquire an access token.
3. Select **API permissions** and **Add a permission**.
4. Under **My APIs**, select the `TodoAPI` application that you registered earlier.
Choose the permissions your client application needs to operate correctly. In this case, select both **ToDo.Admin** and **ToDo.User** permissions.
Confirm your selection by clicking on **Add permissions** to apply these to your `TodoClient` application.
5. Select **Grant admin consent for ``** to grant admin consent for the permissions you added.
### Add a user
Now that we have the API created and the client app registered, it is time to create our user to grant permission to. We are going to make a member in our Microsoft Entra tenant to interact with our `TodoAPI`.
1. Navigate to your Microsoft Entra ID and under **Manage**, choose **Users**.
2. Click on **New user** and then on **Create new user**.
3. In the **Create new user** section, fill in **User principal name**, **Display name**, and **Password**. The user will need to change this after their first sign-in.
4. Click **Review + create** to examine your entries. Press **Create** to finalize the creation of the user.
### Update the OAuth2 configuration for Swagger UI authorization
To connect our application for this tutorial, we will use Swagger. We need to refresh the OAuth2 settings for authorizing users in Swagger UI, allowing them to get access tokens via the `TodoClient` application.
1. Access your Microsoft Entra ID tenant, and navigate to the `TodoClient` app you've registered.
2. Click on **Manage**, then **Authentication**, choose **Add a platform**, and select **Single-page application**. For implicit grant and hybrid flows, choose both access tokens and ID tokens.
3. In the **Redirect URIs** section, input your application's URL or endpoint followed by `/swagger-ui/oauth2-redirect.html` as the OAuth2 redirect URL, and then click on **Configure**.
## Log into your application
Navigate to the app's published URL, then click on **Authorize** to initiate the OAuth2 authentication process. In the **Available authorizations** dialog, input the `TodoClient` app's client ID in the **client_id** box, check all the options under the **Scopes** field, leave the **client_secret** box empty, and then click **Authorize** to proceed to the Microsoft Entra sign-in page. After signing in with the previously mentioned user, you will be taken back to the **Available authorizations** dialog. Voila! You should be greeted with your successful login screen.
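Outside of Swagger UI, you can call the protected endpoints with any HTTP client as long as you pass the bearer token. Below is a small sketch using the JDK's built-in `HttpClient`; the port and the environment variable holding the token are assumptions for the example:
```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class TodoApiClient {
    public static void main(String[] args) throws Exception {
        // Paste an access token obtained from Microsoft Entra ID (e.g., copied from Swagger UI).
        String accessToken = System.getenv("TODO_ACCESS_TOKEN");

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/api/todos"))
                .header("Authorization", "Bearer " + accessToken)
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}
```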
You can also read more about securing your data with How to Implement Client-Side Field Level Encryption (CSFLE) in Java with Spring Data MongoDB.
Are you ready to start building with Atlas on Azure? Get started for free today with MongoDB Atlas on Azure Marketplace
| md | {
"tags": [
"Java",
"MongoDB",
"Azure",
"Spring"
],
"pageDescription": "Using Microsoft Entra ID, Spring Boot Security, and Spring Data MongoDB, make a secure rest API.",
"contentType": "Tutorial"
} | Secure your API with Spring Data MongoDB and Microsoft EntraID | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/cdc-kafka-relational-migrator | created | # The Great Continuous Migration: CDC Jobs With Kafka and Relational Migrator
Are you ready to *finally* move your relational data over to MongoDB while ensuring every change to your database is properly handled? While this process can be jarring, MongoDB’s Relational Migrator is here to help simplify things. In this tutorial, we will go through in-depth how to conduct change data captures from your relational data from MySQL to MongoDB Atlas using Confluent Cloud and Relational Migrator.
## What are CDC jobs?
Change data capture or CDC jobs are specific processes that track any and all changes in a database! Even if there is a small update to one row (or 100), a change data capture job will ensure that this change is accurately reflected. This is very important in a world where people want accurate results immediately — data needs to be updated constantly. From basic CRUD (create, read, update, delete) instances to more complex data changes, CDC jobs are incredibly important when dealing with data.
## What is MongoDB Relational Migrator?
MongoDB Relational Migrator is our tool to help developers migrate their relational databases to MongoDB. The great part about it is that Relational Migrator will actually help you to write new code or edit existing code to ensure your migration process works as smoothly as possible, as well as automate the conversion process of your database's schema design. This means there’s less complexity and downtime and fewer errors than if tasked with dealing with this manually.
## What is Confluent Cloud and why are we using it?
Confluent Cloud is a Kafka service used to handle real-time data streaming. We are using it to deal with streaming real-time changes from our relational database to our MongoDB Atlas cluster. The great thing about Confluent Cloud is it’s simple to set up and integrates seamlessly with a number of other platforms and connectors. Also, you don’t need Kafka to run production migrations as the embedded mode is sufficient for the majority of migrations.
We also recommend that users start off with the embedded version for a quick start, even if they are planning to use the Kafka deployment in the future, since it has all of the same features except for the additional resilience in long-running jobs.
Kafka can be relatively complex, so it’s best added to your migration job as a specific step to ensure there is limited confusion with the process. We recommend working immediately on your migration plan and schema design and then adding Kafka when planning your production cutover.
Let’s get started.
## Pre-requisites for success
- MongoDB Atlas account
- Amazon RDS account
- Confluent Cloud account
- MongoDB Relational Migrator — this tutorial uses version 1.5.
- MySQL
- MySQL Workbench — this tutorial uses version 8.0.36. Workbench is so you can visually interact with your MySQL database, so it is optional, but if you’d like to follow the tutorial exactly, please download it onto your machine.
## Download MongoDB Relational Migrator
Please make sure you download Relational Migrator on your machine. The version we are using for this tutorial is version 1.5.0. Make sure it works and you can see it in your browser before moving on.
## Create your sink database
While our relational database is our source database, where our data ends up is called our sink database. In this tutorial, we want our data and all our changes to end up in MongoDB, so let’s create a MongoDB Atlas cluster to ensure that happens.
If you need help creating a cluster, please refer to the documentation.
Please keep note of the region you’re creating your cluster in and ensure you are choosing to host your cluster in AWS. Keep your username and password somewhere safe since you’ll need them later on in this tutorial, and please make sure you’ve allowed access from anywhere (0.0.0.0/0) in your “Network Access” tab. If you do not have the proper network access in place, you will not be able to connect to any of the other necessary platforms. Note that “Access from Anywhere” is not recommended for production and is used in this tutorial only for simplicity.
Grab your cluster’s connection string and save it in a safe place. We will need it later.
## Get your relational database ready
For this tutorial, I created a relational database using MySQL Workbench. The data used is taken from Kaggle in the form of a `.csv` file, if you want to use the same one: World Happiness Index: 2019.
Once your `.csv` file has been properly imported into your MySQL database (either by using terminal commands or by using MySQL Workbench), let’s configure our relational database with our Amazon RDS account.
We’re configuring our relational database to our Amazon RDS account so that instead of hosting our database locally, we can host it in the cloud, and then connect it to Confluent Cloud and ensure any changes to our database are accurately reflected when we eventually sync our data over to MongoDB Atlas.
## Create a database in Amazon RDS
As of right now, Confluent Cloud’s Custom Connector only supports Amazon instances, so please ensure you’re using Amazon RDS for your relational database since other cloud providers will not work at the moment. Since it’s important to keep everything secure, you will need to ensure proper networking access, possibly including AWS PrivateLink.
Sign in to your Amazon account and head over to “Amazon RDS.” You can find it in the search bar at the top of the screen.
Click on “Databases” on the left-hand side of the screen. If you don’t have a database ready to use (specifically in your Amazon account), please create one by clicking the orange button.
You’ll be taken to this page. Please select the MySQL option:
After selecting this, scroll down and change the MySQL version to the version compatible with your version of Workbench. For the tutorial, we are using version `8.0.36`.
Then, please fill out the Settings area. For your `DB cluster identifier`, choose a name for your database cluster. Choose a `Master username`, hit the `Self managed` credentials toggle, and fill in a password. Please do not forget this username and password; you will need them throughout the tutorial to successfully set up your various connections.
For the rest of this database set-up process, you can keep everything `default` except please press the toggle to ensure the database allows Public Access. This is crucial! Follow the rest of the steps to complete and create your database.
When you see the green “Available” status button, that means your database is ready to go.
### Create a parameter group
Now that our database is set up, we need to create a parameter group and modify a few settings so that CDC jobs can run successfully.
On the left-hand side of your Amazon RDS homepage, you’ll see the “Parameter groups” button. Please press that and create a new parameter group.
Under the dropdown “Parameter group family,” please pick `mysql8.0` since that is the version we are running for this tutorial. If you’re using something different, please feel free to use a different version. Give the parameter group a name and a description and hit the orange “create” button.
Once it’s created, click on the parameter name, hit the “Edit” button, search for `binlog_format`, and change the “Value” column from “MIXED” to “ROW.”
This is important to do because changing this setting allows for recording any database changes at a “row” level. This means each and every little change to your database will be accurately recorded. Without making this change, you won’t be able to properly conduct any CDC jobs.
Now, let’s associate our database with this new parameter group.
Click on “Databases,” choose the one we just created, and hit “Modify.” Scroll all the way down to “DB Parameter Group.” Click on the drop-down and associate it with the group you just created. As an example, here is mine:
Modify the instance and click “Save.” Once you’re done, go in and “Reboot” your database to ensure these changes are properly saved. Please keep in mind that you’re unable to reboot while the database is being modified and need to wait until it’s in the “Available” state.
Head over to the “Connectivity & security” tab in your database and copy your “Endpoint” under where it says “Endpoint & port.”
Now, we’re going to connect our Amazon RDS database to our MySQL Workbench!
## Connect Amazon RDS to relational database
Launch MySQL Workbench and click the “+” button to establish a new connection.
Your endpoint that was copied above will go into your “Hostname.” Keep the port the same. (It should be 3306.) Your username and password are from when you created your cluster. It should look something like this:
Click on “Test Connection” and you should see a successful connection.
> If you’re unable to connect when you click on “Test Connection,” go into your Amazon RDS database, click on the VPC security group, click on “Edit inbound rules,” click on “Add rule,” select “All traffic” under “Type,” select “Anywhere-IPv4,” and save it. Try again and it will work.
Now, run a simple SQL command in Workbench to test and see if you can interact with your database and see the logs in Amazon RDS. I’m just running a simple update statement:
```
UPDATE world_happiness_report
SET Score = 7.800
WHERE `Country or region` = 'Finland'
LIMIT 1;
```
This is just changing the original score of Finland from 7.769 to 7.8.
It’s been successfully changed and if we keep an eye on Amazon RDS, we don’t see any issues.
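If you want to double-check that the parameter group change from earlier took effect after the reboot, you can optionally run the following statement in Workbench. The value should come back as `ROW`:

```
SHOW VARIABLES LIKE 'binlog_format';
```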
Now, let’s configure our Confluent Cloud account!
## Configure Confluent Cloud account
Our first step is to create a new environment. We can use a free account here as well:
On the cluster page, please choose the “Basic” tier. This tier is free as well. Please make sure you have configured your zones and your region for where you are. These need to match up with both your MongoDB Atlas cluster region and your Amazon RDS database region.
Once your cluster is configured, we need to take note of a number of keys and IDs in order to properly connect to Relational Migrator. We need to take note of the:
- Cluster ID.
- Environment ID.
- Bootstrap server.
- REST endpoint.
- Cloud API key and secret.
- Kafka API key and secret.
You can find most of these from your “Cluster Settings,” and the Environment ID can be found on the right-hand side of your environment page in Confluent.
For Cloud API keys, click on the three lines on the right-hand side of Confluent’s homepage.
Click on “Cloud API keys” and grab the “key” and “secret” if you’ve already created them, or create them if necessary.
For the Kafka API keys, head over to your Cluster Overview, and on the left-hand side, click “API Keys” to create them. Once again, save your “key” and “secret.”
All of this information is crucial since you’re going to need it in your `user.properties` file to configure the connection between Confluent Cloud and MongoDB’s Relational Migrator.
As you can see from the documentation linked above, your Cloud API keys will be saved in your `user.properties` file as:
- migrator.confluent.cloud-credentials.api-key
- migrator.confluent.cloud-credentials.api-secret
And your Kafka API keys as:
- migrator.confluent.kafka-credentials.api-key
- migrator.confluent.kafka-credentials.api-secret
Now that we have our Confluent Cloud configured and all our necessary information saved, let’s configure our connection to MongoDB Relational Migrator.
## Connect Confluent Cloud to MongoDB Relational Migrator
Prior to this step, please ensure you have successfully downloaded Relational Migrator locally.
We are going to use our terminal to access our `user.properties` file located inside our Relational Migrator download and edit it accordingly to ensure a smooth connection takes place.
Use the commands to find our file in your terminal window:
```
cd ~/Library/Application\ Support/MongoDB/Relational\ Migrator/
ls
```
Once you see your `user.properties` file, open it with:
```
nano user.properties
```
Once your file is opened, we need to make some edits. At the very top of the file, uncomment the line that says:
```
spring.profiles.active: confluent
```
Be sure to comment out anything else in this section that is uncommented. We only want the Confluent profile active. Immediately under this section, we need to add in all our keys from above. Do it as such:
```
migrator.confluent.environment.environment-id:
migrator.confluent.environment.cluster-id:
migrator.confluent.environment.bootstrap-server:
migrator.confluent.environment.rest-endpoint:
migrator.confluent.cloud-credentials.api-key:
migrator.confluent.cloud-credentials.api-secret:
migrator.confluent.kafka-credentials.api-key:
migrator.confluent.kafka-credentials.api-secret:
```
There is no need to edit anything else in this file. Just please make sure you’re using the correct server port: 8278.
Once this is properly edited, write it to the file using Ctrl + O. Press enter, and exit the file using Ctrl + X.
Now, once the file is saved, let’s run MongoDB Relational Migrator.
## Running MongoDB Relational Migrator
We can get it up and running straight from our terminal. Use the commands shown below to do so:
```
cd "/Applications/MongoDB Relational Migrator.app/Contents/app"
java -jar application-1.5.0.jar
```
This will open Spring and the Relational Migrator in your browser:
Once Relational Migrator is running in your browser, connect it to your MySQL database:
You want to put in your host name (what we used to connect our Amazon RDS to MySQL Workbench in the beginning), the database with your data in it (mine is called amazonTest but yours will be different), and then your username and password. Hit the “Test connection” button to ensure the connection is successful. You’ll see a green bar at the bottom if it is.
Now, we want to select the tables to use. We are just going to click our database:
Then, define your initial schema. We are just going to start with a recommended MongoDB schema because it’s a little easier to work with.
Once this is done, you’ll see what your relational schema will look like once it’s migrated as documents in MongoDB Atlas!
Now, click on the “Data Migration” tab at the top of the screen. Remember we created a MongoDB cluster at the beginning of this tutorial for our sink data? We need all that connection information.
First, enter in again all your AWS RDS information that we had loaded in earlier. That is our source data, and now we are setting up our destination, or sink, database.
Enter in the MongoDB connection string for your cluster. Please ensure you are putting in the correct username and password.
Then, hit “Test connection” to make sure you can properly connect to your Atlas database.
When you first specify that you want a continuous migration, you will get a message saying you need to generate a script to do so. Click the button, and a script will download; you will then run it in MySQL Workbench. The script looks like this:
```
/*
* Relational Migrator needs source database to allow change data capture.
* The following scripts must be executed on MySQL source database before starting migration.
* For more details, please see https://debezium.io/documentation/reference/stable/connectors/mysql.html#setting-up-mysql
*/
/*
* Before initiating migration job, the MySQL user is required to be able to connect to the source database.
* This MySQL user must have appropriate permissions on all databases for which the Relational Migrator is supposed to capture changes.
*
* Connect to Amazon RDS Mysql instance, follow the below link for instructions:
* https://dev.mysql.com/doc/mysql-cluster-excerpt/8.0/en/mysql-cluster-replication-schema.html
*
* Grant the required permissions to the user
*/
GRANT SELECT, RELOAD, SHOW DATABASES, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'anaiya'@'%' ;
/* Finalize the user’s permissions: */
FLUSH PRIVILEGES;
/* Furthermore, binary logging must be enabled for MySQL replication on AWS RDS instance. Please see the below for instructions:
* https://aws.amazon.com/premiumsupport/knowledge-center/enable-binary-logging-aurora/
*
* If the instance is using the default parameter group, you will need to create a new one before you can make any changes.
* For MySQL RDS instances, create a Parameter Group for your chosen MySQL version.
* For Aurora MySQL clusters, create a DB Cluster Parameter Group for your chosen MySQL version.
* Edit the group and set the "binlog_format" parameter to "ROW".
* Make sure your database or cluster is configured to use the new Parameter Group.
*
* Please note that you must reboot the database cluster or instance to apply changes, follow below for instructions:
* https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_RebootCluster.html
*/
```
Run this script in MySQL Workbench by hitting the lightning button. You’ll know it was successful if you don’t see any error messages in Workbench. You will also see that in Relational Migrator, the “Generate Script” message is gone, telling you that you can now use continuous snapshot.
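If you’d like an explicit confirmation that the permissions were applied, you can optionally run the statement below in Workbench, substituting the username from your generated script (the example script above uses `'anaiya'@'%'`):

```
SHOW GRANTS FOR 'anaiya'@'%';
```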
Start it and it’ll run! Your snapshot stage will finish first, and then your continuous stage will run:
While the continuous snapshot is running, make a change in your database. I am changing the happiness score for Finland from 7.8 to 5.8:
```
UPDATE world_happiness_report
SET Score = 5.800
WHERE `Country or region` = 'Finland'
LIMIT 1;
```
Once you run your change in MySQL Workbench, click on the “Complete CDC” button in Relational Migrator.
Now, let’s check out our MongoDB Atlas cluster and see if the data is properly loaded with the correct schema and our change has been properly streamed:
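If you prefer a shell over the Atlas UI, a quick spot check with `mongosh` might look like the sketch below. The database and collection names here are assumptions based on my source database (`amazonTest`) and table (`world_happiness_report`); yours may differ depending on how you mapped the schema:

```
use amazonTest
// The score should reflect the latest change streamed from MySQL (5.8)
db.world_happiness_report.findOne({ "Country or region": "Finland" })
```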
As you can see, all your information from your original MySQL database has been migrated to MongoDB Atlas, and you’re capable of streaming in any changes to your database!
## Conclusion
In this tutorial, you have successfully migrated your MySQL data and set up continuous change data capture to MongoDB Atlas using Confluent Cloud and MongoDB Relational Migrator. This is super important since it means your MongoDB Atlas database reflects, in real time, the changes made to your relational database.
For more information and help, please use the following resources:
- MongoDB Relational Migrator
- Confluent Cloud
| md | {
"tags": [
"MongoDB",
"AWS",
"Kafka",
"SQL"
],
"pageDescription": "This tutorial explains how to configure CDC jobs on your relational data from MySQL Workbench to MongoDB Atlas using MongoDB Relational Migrator and Confluent Cloud.",
"contentType": "Tutorial"
} | The Great Continuous Migration: CDC Jobs With Kafka and Relational Migrator | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/products/atlas/agent-fireworksai-mongodb-langchain | created | # Building an AI Agent With Memory Using MongoDB, Fireworks AI, and LangChain
This tutorial provides a step-by-step guide on building an AI research assistant agent that uses MongoDB as the memory provider, Fireworks AI for function calling, and LangChain for integrating and managing conversational components.
This agent can assist researchers by allowing them to search for research papers with semantic similarity and vector search, using MongoDB as a structured knowledge base and a data store for conversational history.
This repository contains all the steps to implement the agent in this tutorial, including code snippets and explanations for setting up the agent's memory, integrating tools, and configuring the language model to interact effectively with humans and other systems.
**What to expect in this tutorial:**
- Definitions and foundational concepts of an agent
- Detailed understanding of the agent's components
- Step-by-step implementation guide for building a research assistance agent
- Insights into equipping agents with effective memory systems and knowledge management
----------
# What is an agent?
**An agent is an artificial computational entity with an awareness of its environment. It is equipped with faculties that enable perception through input, action through tool use, and cognitive abilities through foundation models backed by long-term and short-term memory.** Within AI, agents are artificial entities that can make intelligent decisions followed by actions based on environmental perception, enabled by large language models.
- Obtain a Fireworks AI key.
- Get instructions on how to obtain a MongoDB URI connection string, which is provided right after creating a MongoDB database.
```
import os
# Be sure to have all the API keys in your local environment as shown below
# Do not publish environment keys in production
# os.environ["OPENAI_API_KEY"] = "sk"
# os.environ["FIREWORKS_API_KEY"] = ""
# os.environ["MONGO_URI"] = ""
FIREWORKS_API_KEY = os.environ.get("FIREWORKS_API_KEY")
OPENAI_API_KEY = os.environ.get("OPENAI_API_KEY")
MONGO_URI = os.environ.get("MONGO_URI")
```
The code snippet above does the following:
1. Retrieving the environment variables: `os.environ.get()` enables retrieving the value assigned to an environment variable by name reference.
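As a small, optional safeguard, you can fail fast if any of the keys retrieved above are missing before moving on. This is a minimal sketch using only the variable names defined earlier:

```
# Optional: fail fast if any required environment variable is missing
required_keys = {
    "FIREWORKS_API_KEY": FIREWORKS_API_KEY,
    "OPENAI_API_KEY": OPENAI_API_KEY,
    "MONGO_URI": MONGO_URI,
}
missing = [name for name, value in required_keys.items() if not value]
if missing:
    raise EnvironmentError(f"Missing environment variables: {', '.join(missing)}")
```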
## Step 3: data ingestion into MongoDB vector database
This tutorial uses a specialized subset of the arXiv dataset hosted on MongoDB, derived from the extensive original collection on the Hugging Face platform. This subset version encompasses over 50,000 scientific articles sourced directly from arXiv. Each record in the subset dataset has an embedding field, which encapsulates a 256-dimensional representation of the text derived by combining the authors' names, the abstracts, and the title of each paper.
These embeddings are generated using OpenAI's `text-embedding-3-small` model, which was selected primarily for its small dimension size, requiring less storage space. Read the tutorial, which explores ways to select appropriate embedding models for various use cases.
This dataset will act as the agent's knowledge base. The aim is that before using any internet search tools, the agent will initially attempt to answer a question using its knowledge base or long-term memory, which, in this case, are the arXiv records stored in the MongoDB vector database.
The following step in this section loads the dataset, creates a connection to the database, and ingests the records into the database.
The code below is the implementation step to obtain the subset of the arXiv dataset using the `datasets` library from Hugging Face. Before executing the code snippet below, ensure that an `HF_TOKEN` is present in your development environment; this is the user access token required for authorized access to resources from Hugging Face. Follow the instructions to get the token associated with your account.
```
import pandas as pd
from datasets import load_dataset
data = load_dataset("MongoDB/subset_arxiv_papers_with_embeddings")
dataset_df = pd.DataFrame(data["train"])
```
1. Import the pandas library using the namespace `pd` for referencing the library and accessing functionalities.
2. Import the datasets library to use the `load_dataset` method, which enables access to datasets hosted on the Hugging Face platform by referencing their path.
3. Assign the loaded dataset to the variable data.
4. Convert the training subset of the dataset to a pandas DataFrame and assign the result to the variable `dataset_df`.
Before executing the operations in the following code block, ensure that you have created a MongoDB database with a collection and have obtained the URI string for the MongoDB database cluster. Creating a database and collection within MongoDB is made simple with MongoDB Atlas. Register a free Atlas account or sign in to your existing Atlas account. Follow the instructions (select Atlas UI as the procedure) to deploy your first cluster.
The database for this tutorial is called `agent_demo` and the collection that will hold the records of the arXiv scientific papers metadata and their embeddings is called `knowledge`.
To enable MongoDB's vector search capabilities, a vector index definition must be defined for the field holding the embeddings. Follow the instructions here to create a vector search index. Ensure the name of your vector search index is `vector_index`.
Your vector search index definition should look something like what is shown below:
```
{
"fields":
{
"numDimensions": 256,
"path": "embedding",
"similarity": "cosine",
"type": "vector"
}
]
}
```
Once your database, collection, and vector search index are fully configured, connect to your database and execute data ingestion tasks with just a few lines of code with PyMongo.
```
from pymongo import MongoClient
# Initialize MongoDB python client
client = MongoClient(MONGO_URI)
DB_NAME = "agent_demo"
COLLECTION_NAME = "knowledge"
ATLAS_VECTOR_SEARCH_INDEX_NAME = "vector_index"
collection = client.get_database(DB_NAME).get_collection(COLLECTION_NAME)
```
1. Import the `MongoClient` class from the PyMongo library to enable MongoDB connections in your Python application.
2. Utilize the MongoClient with your `MONGO_URI` to establish a connection to your MongoDB database. Replace `MONGO_URI` with your actual connection string.
3. Set your database name to `agent_demo` by assigning it to the variable `DB_NAME`.
4. Set your collection name to `knowledge` by assigning it to the variable `COLLECTION_NAME`.
5. Access the knowledge collection within the `agent_demo` database by using `client.get_database(DB_NAME).get_collection(COLLECTION_NAME)` and assigning it to a variable for easy reference.
6. Define the vector search index name as `vector_index` by assigning it to the variable `ATLAS_VECTOR_SEARCH_INDEX_NAME`, preparing for potential vector-based search operations within your collection.
The code snippet below outlines the ingestion process. First, the collection is emptied to ensure the tutorial is completed with a clean collection. The next step is to convert the pandas DataFrame into a list of dictionaries, and finally, the ingestion process is executed using the `insert_many()` method available on the PyMongo collection object.
```
# Delete any existing records in the collection
collection.delete_many({})
# Data Ingestion
records = dataset_df.to_dict('records')
collection.insert_many(records)
print("Data ingestion into MongoDB completed")
```
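As an optional sanity check, you can confirm that the ingestion worked by counting the documents in the collection; the number should match the size of the DataFrame loaded earlier:

```
# Optional sanity check: the count should match the number of rows ingested
print(f"Documents in {DB_NAME}.{COLLECTION_NAME}: {collection.count_documents({})}")
print(f"Rows in DataFrame: {len(dataset_df)}")
```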
## Step 4: create LangChain retriever with MongoDB
The LangChain open-source library has an interface implementation that communicates between the user query and a data store. This interface is called a retriever.
A retriever is a simple, lightweight interface within the LangChain ecosystem that takes a query string as input and returns a list of documents or records that match the query based on some similarity measure and score threshold.
The data store for the back end of the retriever for this tutorial will be a vector store enabled by the MongoDB database. The code snippet below shows the implementation required to initialize a MongoDB vector store using the MongoDB connection string and specifying other arguments. The final operation uses the vector store instance as a retriever.
```
from langchain_openai import OpenAIEmbeddings
from langchain_mongodb import MongoDBAtlasVectorSearch
embedding_model = OpenAIEmbeddings(model="text-embedding-3-small", dimensions=256)
# Vector Store Creation
vector_store = MongoDBAtlasVectorSearch.from_connection_string(
connection_string=MONGO_URI,
namespace=DB_NAME + "." + COLLECTION_NAME,
embedding= embedding_model,
index_name=ATLAS_VECTOR_SEARCH_INDEX_NAME,
text_key="abstract"
)
retriever = vector_store.as_retriever(search_type="similarity", search_kwargs={"k": 5})
```
1. Start by importing `OpenAIEmbeddings` from langchain_openai and `MongoDBAtlasVectorSearch` from langchain_mongodb. These imports will enable you to generate text embeddings and interface with MongoDB Atlas for vector search operations.
2. Instantiate an `OpenAIEmbeddings` object by specifying the model parameter as "text-embedding-3-small" and the dimensions as 256. This step prepares the model for generating 256-dimensional vector embeddings from the query passed to the retriever.
3. Use the `MongoDBAtlasVectorSearch.from_connection_string` method to configure the connection to your MongoDB Atlas database. The parameters for this function are as follows:
- `connection_string`: This is the actual MongoDB connection string.
- `namespace`: Concatenate your database name (DB_NAME) and collection name (COLLECTION_NAME) to form the namespace where the records are stored.
- `embedding`: Pass the previously initialized embedding_model as the embedding parameter. Ensure the embedding model specified in this parameter is the same one used to encode the embedding field within the database collection records.
- `index_name`: Indicate the name of your vector search index. This index facilitates efficient search operations within the database.
- `text_key`: Specify "abstract" as the text_key parameter. This indicates that the abstract field in your documents will be the focus for generating and searching embeddings.
4. Create a `retriever` from your vector_store using the `as_retriever` method, tailored for semantic similarity searches. This setup enables the retrieval of the top five documents most closely matching the user's query based on vector similarity, using MongoDB's vector search capabilities for efficient document retrieval from your collection.
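Before wiring the retriever into the agent, it can be useful to try it on its own. The sketch below assumes the dataset's `title` field ends up in each returned document's metadata; `invoke()` is available because LangChain retrievers implement the Runnable protocol:

```
# Optional: exercise the retriever directly with a sample query
results = retriever.invoke("research papers on prompt compression")
for doc in results:
    # page_content holds the abstract (text_key="abstract"); other fields land in metadata
    print(doc.metadata.get("title"), "|", doc.page_content[:80])
```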
## Step 5: configure LLM using Fireworks AI
The agent for this tutorial requires an LLM as its reasoning and parametric knowledge provider. The agent's model provider is Fireworks AI. More specifically, the FireFunction V1 model, which is Fireworks AI's function-calling model, has a context window of 32,768 tokens.
**What is function calling?**
**Function calling refers to the ability of large language models (LLMs) to select and use available tools to complete specific tasks**. First, the LLM chooses a tool by a name reference, which, in this context, is a function. It then constructs the appropriate structured input for this function, typically in the JSON schema that contains fields and values corresponding to expected function arguments and their values. This process involves invoking a selected function or an API with the input prepared by the LLM. The result of this function invocation can then be used as input for further processing by the LLM.
Function calling transforms LLMs' conditional probabilistic nature into a predictable and explainable model, mainly because the functions accessible by LLMs are constructed, deterministic, and implemented with input and output constraints.
Fireworks AI's firefunction model is based on Mixtral and is open-source. It integrates with the LangChain library, which abstracts away some of the implementation details of function calling for LLMs with tool-calling capabilities. The LangChain library provides an easy interface to integrate and interact with the Fireworks AI function-calling model.
The code snippet below initializes the language model with function-calling capabilities. The `Fireworks` class is instantiated with a specific model, "accounts/fireworks/models/firefunction-v1," and configured to use a maximum of 256 tokens.
```
import os
from langchain_fireworks import Fireworks
llm = Fireworks(
model="accounts/fireworks/models/firefunction-v1",
max_tokens=256)
```
That is all there is to configure an LLM for the LangChain agent using Fireworks AI. The agent will be able to select a function from a list of provided functions to complete a task. It generates function input as a structured JSON schema, which can be invoked and the output processed.
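As a quick, optional smoke test that the Fireworks AI credentials and model name are configured correctly, you can call the model directly before handing it to the agent:

```
# Optional smoke test: the model should return a short text completion
print(llm.invoke("Summarize what function calling means in one sentence."))
```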
## Step 6: create tools for the agent
At this point, we’ve done the following:
- Ingested data into our knowledge base, which is held in a MongoDB vector database
- Created a retriever object to interface between queries and the vector database
- Configured the LLM for the agent
This step focuses on specifying the tools that the agent can use when attempting to execute operations to achieve its specified objective. The LangChain library has multiple methods of specifying and configuring tools for an agent. In this tutorial, two methods are used:
1. Custom tool definition with the `@tool` decorator
2. LangChain built-in tool creator using the `Tool` interface
LangChain has a collection of Integrated tools to provide your agents with. An agent can leverage multiple tools that are specified during its implementation. When implementing tools for agents using LangChain, it’s essential to configure the model's name and description. The name and description of the tool enable the LLM to know when and how to leverage the tool. Another important note is that LangChain tools generally expect single-string input.
The code snippet below imports the classes and methods required for tool configuration from various LangChain framework modules.
```
from langchain.agents import tool
from langchain.tools.retriever import create_retriever_tool
from langchain_community.document_loaders import ArxivLoader
```
- Import the `tool` decorator from `langchain.agents`. It is used to define and instantiate custom tools within the LangChain framework, which allows the creation of modular and reusable tool components.
- Import `create_retriever_tool` from `langchain.tools.retriever`. This method provides the capability of using configured retrievers as tools for an agent.
- Lastly, import `ArxivLoader` from `langchain_community.document_loaders`. This class provides a document loader specifically designed to fetch and load documents from the arXiv repository.
Once all the classes and methods required to create a tool are imported into the development environment, the next step is to create the tools.
The code snippet below outlines the creation of a tool using the LangChain tool decorator. The main purpose of this tool is to take a query from the user, which can be a search term or, for our specific use case, a term for the basis of research exploration, and then use the `ArxivLoader` to extract up to 10 documents that correspond to arXiv papers matching the search query.
The `get_metadata_information_from_arxiv` returns a list containing the metadata of each document returned by the search. The metadata includes enough information for the LLM to start research exploration or utilize further tools for a more in-depth exploration of a particular paper.
```
@tool
def get_metadata_information_from_arxiv(word: str) -> list:
    """
    Fetches and returns metadata for a maximum of ten documents from arXiv matching the given query word.
    Args:
      word (str): The search query to find relevant documents on arXiv.
    Returns:
      list: Metadata about the documents matching the query.
    """
    docs = ArxivLoader(query=word, load_max_docs=10).load()
    # Extract just the metadata from each document
    metadata_list = [doc.metadata for doc in docs]
    return metadata_list
```
To get more information about a specific paper, the `get_information_from_arxiv` tool created using the `tool` decorator returns the full document of a single paper by using the ID of the paper, entered as the input to the tool as the query for the `ArxivLoader` document loader. The code snippet below provides the implementation steps to create the `get_information_from_arxiv` tool.
```
@tool
def get_information_from_arxiv(word: str) -> list:
"""
Fetches and returns metadata for a single research paper from arXiv matching the given query word, which is the ID of the paper, for example: 704.0001.
Args:
word (str): The search query to find the relevant paper on arXiv using the ID.
Returns:
list: Data about the paper matching the query.
"""
doc = ArxivLoader(query=word, load_max_docs=1).load()
return doc
```
The final tool for the agent in this tutorial is the retriever tool. This tool encapsulates the agent's ability to use some form of knowledge base to answer queries initially. This is analogous to humans using previously gained information to answer queries before conducting some search via the internet or alternate information sources.
The `create_retriever_tool` takes in three arguments:
- retriever: This argument should be an instance of a class derived from BaseRetriever, responsible for the logic behind retrieving documents. In this use case, this is the previously configured retriever that uses MongoDB’s vector database feature.
- name: This is a unique and descriptive name given to the retriever tool. The LLM uses this name to identify the tool, which also indicates its use in searching a knowledge base.
- description: The third parameter provides a detailed description of the tool's purpose. For this tutorial and our use case, the tool acts as the foundational knowledge source for the agent and contains records of research papers from arXiv.
```
retriever_tool = create_retriever_tool(
retriever=retriever,
name="knowledge_base",
description="This serves as the base knowledge source of the agent and contains some records of research papers from Arxiv. This tool is used as the first step for exploration and research efforts."
)
```
LangChain agents require the specification of tools available for use as a Python list. The code snippet below creates a list named `tools` that consists of the three tools created in previous implementation steps.
```
tools = [get_metadata_information_from_arxiv, get_information_from_arxiv, retriever_tool]
```
## Step 7: prompting the agent
This step in the tutorial specifies the instruction taken to instruct the agent using defined prompts. The content passed into the prompt establishes the agent's execution flow and objective, making prompting the agent a crucial step in ensuring the agent's behaviour and output are as expected.
Constructing prompts for conditioning LLMs and chat models is genuinely an art form. Several prompt methods have emerged in recent years, such as ReAct and chain-of-thought prompt structuring, to amplify LLMs' ability to decompose a problem and act accordingly. The LangChain library turns what could be a troublesome exploration process of prompt engineering into a systematic and programmatic process.
LangChain offers the `ChatPromptTemplate.from_messages()` class method to construct basic prompts with predefined roles such as "system," "human," and "ai." Each role corresponds to a different speaker type in the chat, allowing for structured dialogues. Placeholders in the message templates (like `{name}` or `{user_input}`) are replaced with actual values passed to the `invoke()` method, which takes a dictionary of variables to be substituted in the template.
The prompt template includes a variable to reference the chat history or previous conversation the agent has with other entities, either humans or systems. The `MessagesPlaceholder` class provides a flexible way to add and manage historical or contextual chat messages within structured chat prompts.
For this tutorial, the "system" role scopes the chat model into the specified role of a helpful research assistant; the chat model, in this case, is FireFunction V1 from Fireworks AI. The code snippet below outlines the steps to implement a structured prompt template with defined roles and variables for user inputs and some form of conversational history record.
```
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
agent_purpose = "You are a helpful research assistant"
prompt = ChatPromptTemplate.from_messages(
[
("system", agent_purpose),
("human", "{input}"),
MessagesPlaceholder("agent_scratchpad")
]
)
```
The `{agent_scratchpad}` represents the short-term memory mechanism of the agent. This is an essential agent component specified in the prompt template. The agent scratchpad is responsible for appending the intermediate steps of the agent operations, thoughts, and actions to the thought component of the prompt. The advantage of this short-term memory mechanism is the maintenance of context and coherence throughout an interaction, including the ability to revisit and revise decisions based on new information.
## Step 8: create the agent’s long-term memory using MongoDB
The LangChain and MongoDB integration makes incorporating long-term memory for agents a straightforward implementation process. The code snippet below demonstrates how MongoDB can store and retrieve chat history in an agent system.
LangChain provides the `ConversationBufferMemory` interface to store interactions between an LLM and the user within a specified data store, MongoDB, which is used for this tutorial. This interface also provides methods to extract previous interactions and format the stored conversation as a list of messages. The `ConversationBufferMemory` is the long-term memory component of the agent.
The main advantage of long-term memory within an agentic system is to have some form of persistent storage that acts as a state, enhancing the relevance of responses and task execution by using previous interactions. Although using an agent’s scratchpad, which acts as a short-term memory mechanism, is helpful, this temporary state is removed once the conversation ends or another session is started with the agent.
A long-term memory mechanism provides an extensive record of interaction that can be retrieved across multiple interactions occurring at various times. Therefore, whenever the agent is invoked to execute a task, it’s also provided with a recollection of previous interactions.
```
from langchain_mongodb.chat_message_histories import MongoDBChatMessageHistory
from langchain.memory import ConversationBufferMemory
def get_session_history(session_id: str) -> MongoDBChatMessageHistory:
return MongoDBChatMessageHistory(MONGO_URI, session_id, database_name=DB_NAME, collection_name="history")
memory = ConversationBufferMemory(
memory_key="chat_history",
chat_memory=get_session_history("my-session")
)
```
- The function `get_session_history` takes a `session_id` as input and returns an instance of `MongoDBChatMessageHistory`. This instance is configured with a MongoDB URI (MONGO_URI), the session ID, the database name (DB_NAME), and the collection name (history).
- A `ConversationBufferMemory` instance is created and assigned to the variable memory. This instance is specifically designed to keep track of the chat_history.
- The chat_memory parameter of ConversationBufferMemory is set using the `get_session_history` function, which means the chat history is loaded from MongoDB based on the specified session ID ("my-session").
This setup allows for the dynamic retrieval of chat history for a given session, using MongoDB as the back end for the agent’s long-term memory.
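If you want to confirm later that history is being persisted, you can optionally inspect the messages loaded for the current session; they are read from the `history` collection in the `agent_demo` database:

```
# Optional: print the messages stored so far for this session
# (this will be empty until the agent has been invoked at least once)
print(memory.chat_memory.messages)
```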
## Step 9: agent creation
This is a crucial implementation step in this tutorial. This step covers the creation of your agent and configuring its brain, which is the LLM, the tools available for task execution, and the objective prompt that targets the agents for the completion of a specific task or objective. This section also covers the initialization of a LangChain runtime interface, `AgentExecutor`, that enables the execution of the agents with configured properties such as memory and error handling.
```
from langchain.agents import AgentExecutor, create_tool_calling_agent
agent = create_tool_calling_agent(llm, tools, prompt)
agent_executor = AgentExecutor(
agent=agent,
tools=tools,
verbose=True,
handle_parsing_errors=True,
memory=memory,
)
```
- The `create_tool_calling_agent` function initializes an agent by specifying a language model (llm), a set of tools (tools), and a prompt template (prompt). This agent is designed to interact based on the structured prompt and leverage external tools within their operational framework.
- An `AgentExecutor` instance is created with the Tool Calling agent. The `AgentExecutor` class is responsible for managing the agent's execution, facilitating interaction with inputs, and intermediary steps such as error handling and logging. The `AgentExecutor` is also responsible for creating a recursive environment for the agent to be executed, and it passes the output of a previous iteration as input to the next iteration of the agent's execution.
- agent: The Tool Calling agent
- tools: A sequence of tools that the agent can use. These tools are predefined abilities or integrations that augment the agent's capabilities.
- handle_parsing_errors: Ensure the agent handles parsing errors gracefully. This enhances the agent's robustness by allowing it to recover from or ignore errors in parsing inputs or outputs.
- memory: Specifies the memory mechanism the agent uses to remember past interactions or data. This integration provides the agent additional context or historical interaction to ensure ongoing interactions are relevant and grounded in relative truth.
## Step 10: agent execution
The previous steps created the agent, prompted it, and initiated a runtime interface for its execution. This final implementation step covers the method to start the agent's execution and its processes.
In the LangChain framework, native objects such as models, retrievers, and prompt templates inherit the `Runnable` protocol. This protocol endows the LangChain native components with the capability to perform their internal operations. Objects implementing the Runnable protocol are recognized as runnable and introduce additional methods for initiating their process execution through a `.invoke()` method, modifying their behaviour, logging their internal configuration, and more.
The agent executor developed in this tutorial exemplifies a Runnable object. We use the `.invoke()` method on the `AgentExecutor` object to call the agent. In the example code below, the agent executor is invoked with a string input, which is used as the `{input}` variable in the question component of the template, or the agent's prompt.
```
agent_executor.invoke({"input": "Get me a list of research papers on the topic Prompt Compression"})
```
In the first initial invocation of the agent, the ideal steps would be as follows:
- The agent uses the retriever tool to access its inherent knowledge base and check for research papers that are semantically similar to the user input/instruction using vector search enabled by MongoDB Atlas.
- If the agent retrieves research papers from its knowledge base, it will provide it as its response.
- If the agent doesn’t find research papers from its knowledge base, it should use the `get_metadata_information_from_arxiv()` tool to retrieve a list of documents that match the term in the user input and return it as its response.
```
agent_executor.invoke({"input":"Get me the abstract of the first paper on the list"})
```
This next agent invocation demonstrates the agent's ability to reference conversational history, which is retrieved from the `history` collection in the MongoDB database and used as input into the model.
In the second invocation of the agent, the ideal outcome would be as follows:
- The agent references research papers in its history or short-term memory and recalls the details of the first paper on the list.
- The agent uses the details of the first research paper on the list as input to the `get_information_from_arxiv()` tool to extract the abstract of the query paper.
----------
# Conclusion
This tutorial has guided you through building an AI research assistant agent, leveraging tools such as MongoDB, Fireworks AI, and LangChain. It’s shown how these technologies combine to create a sophisticated agent capable of assisting researchers by effectively managing and retrieving information from an extensive database of research papers.
If you have any questions regarding this training, head to the [forums.
If you want to explore more RAG and Agents examples, visit the GenAI Showcase repository.
Or, if you simply want to get a well-rounded understanding of the AI Stack in the GenAI era, read this piece.
----------
# FAQs
1. **What is an Agent?**
An agent is an artificial computational entity with an awareness of its environment. It is equipped with faculties that enable perception through input, action through tool use, and cognitive abilities through foundation models backed by long-term and short-term memory. Within AI, agents are artificial entities that can make intelligent decisions followed by actions based on environmental perception, enabled by large language models.
2. **What is the primary function of MongoDB in the AI agent?**
MongoDB serves as the memory provider for the agent, storing conversational history, vector embedding data, and operational data. It supports information retrieval through its vector database capabilities, enabling semantic searches between user queries and stored data.
3. **How does Fireworks AI enhance the functionality of the agent?**
Fireworks AI, through its FireFunction V1 model, enables the agent to generate responses to user queries and decide when to use specific tools by providing a structured input for the available tools.
4. **What are some key characteristics of AI agents?**
Agents are autonomous, introspective, proactive, reactive, and interactive. They can independently plan and reason, respond to stimuli with advanced methodologies, and interact dynamically within their environments.
 | md | {
"tags": [
"Atlas",
"Python",
"AI",
"Pandas"
],
"pageDescription": "Creating your own AI agent equipped with a sophisticated memory system. This guide provides a detailed walkthrough on leveraging the capabilities of Fireworks AI, MongoDB, and LangChain to construct an AI agent that not only responds intelligently but also remembers past interactions.",
"contentType": "Tutorial"
} | Building an AI Agent With Memory Using MongoDB, Fireworks AI, and LangChain | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/spring-application-on-k8s | created | # MongoDB Orchestration With Spring & Atlas Kubernetes Operator
In this tutorial, we'll delve into containerization concepts, focusing on Docker, and explore deploying your Spring Boot application from a previous tutorial. By the tutorial's conclusion, you'll grasp Docker and Kubernetes concepts and gain hands-on experience deploying your application within a cloud infrastructure.
This tutorial is an extension of the previous tutorial where we explained how to write advanced aggregation queries in MongoDB using the Spring Boot framework. We will use the same GitHub repository to create this tutorial's deployment files.
We'll start by learning about containers, like digital packages that hold software. Then, we'll dive into Kubernetes, a system for managing those containers. Finally, we'll use Kubernetes to set up MongoDB and our Spring application, seeing how they work together.
## Prerequisites
1. A Spring Boot application running on your local machine
2. Elastic Kubernetes Service deployed on AWS using eksctl
3. A MongoDB Atlas account
## Understanding containerization
As a software developer, you often come across a situation where the features of an application work perfectly on your local machine, yet many of them seem broken on the client machine. This is where the concept of containers comes in.
In simple words, a container is just a simple, portable computing environment that contains everything an application needs to run. The process of creating containers for the application to run in any environment is known as containerization.
Containerization is a form of virtualization where an application, along with all its components, is packaged into a single container image. These containers operate in their isolated environment within the shared operating system, allowing for efficient and consistent deployment across different environments.
### Advantages of containerizing the application
1. **Portability**: The idea of “write once and run anywhere” encapsulates the essence of containers, enabling applications to seamlessly transition across diverse environments, thereby enhancing their portability and flexibility.
2. **Efficiency**: When configured properly, containers make efficient use of the available resources, and because containers are isolated, they can perform their operations without interfering with one another, allowing a single host to perform many functions. This makes the containerized application work efficiently and effectively.
3. **Better security**: Because containers are isolated from one another, you can be confident that your applications are running in their self-contained environment. That means that even if the security of one container is compromised, other containers on the same host remain secure.
### Comparing containerization and traditional virtualization methods
| **Aspect** | **Containers** | **Virtual Machines** |
|----------------------|-------------------------|--------------------------------------|
| Abstraction Level | OS level virtualization | Hardware-level virtualization |
| Resource Overhead | Minimal | Higher |
| Isolation | Process Level | Stronger |
| Portability | Highly Portable | Less Portable |
| Deployment Speed | Fast | Slower |
| Footprint | Lightweight | Heavier |
| Startup Time | Almost instant | Longer |
| Resource Utilisation | Efficient | Less Efficient |
| Scalability | Easily Scalable | Scalable, but with resource overhead |
## Understanding Docker
The Docker application provides the platform to develop, ship, and run containers. This separates the application from the infrastructure and makes it portable. It packages the application into lightweight containers that can run across environments without depending on the underlying infrastructure.
Docker containers have minimal overhead compared to traditional virtual machines, as they share the host OS kernel and only include necessary dependencies. Docker facilitates DevOps practices by enabling developers to build, test, and deploy applications in a consistent and automated manner. You can read more about Docker containers and the steps to install them on your local machine from their official documentation.
## Understanding Kubernetes
Kubernetes, often called K8s, is an open-source orchestration platform that automates containerized applications' deployment, scaling, and management. It abstracts away the underlying infrastructure complexity, allowing developers to focus on building and running their applications efficiently.
It simplifies the deployment and management of containerized applications at scale. Its architecture, components, and core concepts form the foundation for building resilient, scalable, and efficient cloud-native systems. The Kubernetes architectures have been helpful in typical use cases like microservices architecture, hybrid and multi-cloud deployments, and DevOps where continuous deployments are done.
Let's understand a few components related to Kubernetes:
The K8s environment works on a controller-worker node architecture, and therefore, two kinds of nodes manage the cluster. The Master Node is responsible for controlling the cluster and making decisions for it, whereas the Worker Node(s) run the application, receive instructions from the Master Node, and report back their status.
The other components of the Kubernetes cluster are:
- **Pods**: The basic building block of Kubernetes, representing one or more containers deployed together on the same host
- **ReplicaSets**: Ensures that a specified number of pod replicas are running at any given time, allowing for scaling and self-healing
- **Services**: Provide networking and load balancing for pods, enabling communication between different parts of the application
- **Volumes**: Persist data in Kubernetes, allowing containers to share and store data independently of the container lifecycle
- **Namespaces**: Virtual clusters within a physical cluster, enabling multiple users, teams, or projects to share a Kubernetes cluster securely
The below diagrams give a detailed description of the Kubernetes architecture.
## Atlas Kubernetes Operator
Consider a use case where a Spring application running locally is connected to a database deployed on the Atlas cluster. Later, your organization introduces you to the Kubernetes environment and plans to deploy all the applications in the cloud infrastructure.
The question of how you will connect your Kubernetes application to the Atlas cluster running on a different environment will arise. This is when the Atlas Kubernetes Operator will come into the picture.
This operator allows you to manage the Atlas resources in the Kubernetes infrastructure.
For this tutorial, we will deploy the operator on the Elastic Kubernetes Service on the AWS infrastructure.
Step 1: Deploy an EKS cluster using _eksctl_. Follow the documentation, Getting Started with Amazon EKS - eksctl, to deploy the cluster. This step will take some time to deploy the cluster in the AWS.
I created the cluster using the command:
```bash
eksctl create cluster \
--name MongoDB-Atlas-Kubernetes-Operator \
--version 1.29 \
--region ap-south-1 \
--nodegroup-name linux-nodes \
--node-type t2.2xlarge \
--nodes 2
```
Step 2: Once the EKS cluster is deployed, run the command:
```bash
kubectl get ns
```
And you should see an output similar to this.
```bash
NAME STATUS AGE
default Active 18h
kube-node-lease Active 18h
kube-public Active 18h
kube-system Active 18h
```
Step 3: Register a new Atlas account or log in to your Atlas account.
Step 4: As the quick start tutorial mentioned, you need the API key for the project in your Atlas cluster. You can follow the documentation page if you don’t already have an API key.
Step 5: All files that are being discussed in the following sub-steps are available in the GitHub repository.
If you are following the above tutorials, the first step is to create the API keys. You need to make sure that while creating the API key for the project, you add the public IPs of the EC2 instances created using the command in Step 1 to the access list.
This is how the access list should look like:
Figure showing the addition of the public IP addresses to the API key access list.
The first step mentioned in the Atlas Kubernetes Operator documentation is to apply all the YAML file configurations to all the namespaces created in the Kubernetes environment. Before applying the YAML files, make sure to export the below variables using:
```bash
export VERSION=v2.2.0
export ORG_ID=
export PUBLIC_API_KEY=
export PRIVATE_API_KEY=
```
Then, apply the command below:
```bash
kubectl apply -f https://raw.githubusercontent.com/mongodb/mongodb-atlas-kubernetes/$VERSION/deploy/all-in-one.yaml
```
To let the Kubernetes Operator create the project in Atlas, you must have certain permissions using the API key at the organizational level in the Atlas UI.
You can create the API key using the Get Started with the Atlas Administration API documentation.
Once the API key is created, create the secret with the credentials using the below command:
```bash
kubectl create secret generic mongodb-atlas-operator-api-key \
--from-literal="orgId=$ORG_ID" \
--from-literal="publicApiKey=$PUBLIC_API_KEY" \
--from-literal="privateApiKey=$PRIVATE_API_KEY" \
-n mongodb-atlas-system
```
Label the secrets created using the below command:
```bash
kubectl label secret mongodb-atlas-operator-api-key atlas.mongodb.com/type=credentials -n mongodb-atlas-system
```
The next step is to create the YAML file to create the project and deployment using the project and deployment YAML files respectively.
Please ensure the deployment files mention the zone, instance, and region correctly.
The files are available in the Git repository in the atlas-kubernetes-operator folder.
In the initial **project.yaml** file, the specified content initiates the creation of a project within your Atlas deployment, naming it as indicated. With the provided YAML configuration, a project named "atlas-kubernetes-operator" is established, permitting access from all IP addresses (0.0.0.0/0) within the Access List.
project.yaml:
```bash
apiVersion: atlas.mongodb.com/v1
kind: AtlasProject
metadata:
name: project-ako
spec:
name: atlas-kubernetes-operator
projectIpAccessList:
- cidrBlock: "0.0.0.0/0"
comment: "Allowing access to database from everywhere (only for Demo!)"
```
> **Please note that 0.0.0.0 is not recommended in the production environment. This is just for test purposes.**
The next file, named **deployment.yaml**, creates a new deployment in the project created above, with the name specified as cluster0. The YAML also specifies the instance size as M10 in the AP_SOUTH_1 region. Please make sure you use the region closest to you.
deployment.yaml:
```yaml
apiVersion: atlas.mongodb.com/v1
kind: AtlasDeployment
metadata:
name: my-atlas-cluster
spec:
projectRef:
name: project-ako
deploymentSpec:
clusterType: REPLICASET
name: "cluster0"
replicationSpecs:
- zoneName: AP-Zone
regionConfigs:
- electableSpecs:
instanceSize: M10
nodeCount: 3
providerName: AWS
regionName: AP_SOUTH_1
priority: 7
```
The **user.yaml** file will create the user for your project. Before creating the user YAML file, create the secret with the password of your choice for the project.
```bash
kubectl create secret generic the-user-password --from-literal="password=<your-password>"
kubectl label secret the-user-password atlas.mongodb.com/type=credentials
```
user.yaml
```yaml
apiVersion: atlas.mongodb.com/v1
kind: AtlasDatabaseUser
metadata:
name: my-database-user
spec:
roles:
- roleName: "readWriteAnyDatabase"
databaseName: "admin"
projectRef:
name: project-ako
username: theuser
passwordSecretRef:
name: the-user-password
```
Once all the YAML files are created, apply them to the default namespace.
```bash
kubectl apply -f project.yaml
kubectl apply -f deployment.yaml
kubectl apply -f user.yaml
```
After this step, you should be able to see the deployment and user created for the project in your Atlas cluster.
## Deploying the Spring Boot application in the cluster
In this tutorial, we'll build upon our existing Developer Center guide, MongoDB Advanced Aggregations With Spring Boot and Amazon Corretto.
We'll use the same GitHub repository to create a Dockerfile. If you're new to this, we highly recommend following that tutorial first before diving into containerizing the application.
There are certain steps to be followed to containerize the application.
Step 1: Create a JAR file for the application. This executable JAR will be needed to create the Docker image.
To create the JAR, do:
```bash
mvn clean package
```
and the JAR will be stored in the target/ folder.
Step 2: Create the Dockerfile for the application. A Dockerfile is a text file that contains the instructions to build the Docker image of the application.
Create a file named Dockerfile. It describes what will run in this container.
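Here is a minimal sketch of what such a Dockerfile could look like. The base image, JAR location, and port are assumptions (the referenced tutorial uses Amazon Corretto as its JDK), so adapt them to your application.
```dockerfile
# Assumed base image; the referenced tutorial uses Amazon Corretto as the JDK
FROM amazoncorretto:17

WORKDIR /app

# Copy the executable JAR built in Step 1 with `mvn clean package`
COPY target/*.jar app.jar

# The Spring Boot application is assumed to listen on port 8080
EXPOSE 8080

ENTRYPOINT ["java", "-jar", "app.jar"]
```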
Step 3: Build the Docker image. The `docker build` command will read the specifications from the Dockerfile created above.
```bash
docker build -t mongodb_spring_tutorial:docker_image . --load
```
Step 4: Once the image is built, you will need to push it to a registry. In this example, we are using Docker Hub. You can create your account by following the documentation.
```bash
docker tag mongodb_spring_tutorial:docker_image <docker-username>/mongodb_spring_tutorial
docker push <docker-username>/mongodb_spring_tutorial
```
Once the Docker image has been pushed into the repo, the last step is to connect your application with the database running on the Atlas Kubernetes Operator.
### Connecting the application with the Atlas Kubernetes Operator
To make the connection, we need Deployment and Service files. While Deployments manage the lifecycle of pods, ensuring a desired state, Services provide a way for other components to access and communicate with those pods. Together, they form the backbone for managing and deploying applications in Kubernetes.
A Deployment in Kubernetes is a resource object that defines the desired state for your application. It allows you to declaratively manage a set of identical pods. Essentially, it ensures that a specified number of pod replicas are running at any given time.
A deployment file contains the following information. In the app-deployment.yaml file shown below, the following details are specified:
1. **apiVersion**: Specifies the Kubernetes API version
2. **kind**: Specifies that it is a type of Kubernetes resource, Deployment
3. **metadata**: Contains metadata about the Deployment, including its name
In the spec section:
The **replicas** field specifies the number of instances of the application. The **name** and **image** fields refer to the name of the container that will run the image and the application image created in the step above.
In the last section, we specify the SPRING_DATA_MONGODB_URI environment variable, which picks its value from the connectionStringStandardSrv key of the connection secret created by the Atlas Kubernetes Operator.
Create the deployment.yaml file:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: spring-app
spec:
replicas: 1
selector:
matchLabels:
app: springboot-application
template:
metadata:
labels:
app: springboot-application
spec:
containers:
- name: spring-app
        image: <docker-username>/mongodb_spring_tutorial
ports:
- containerPort: 8080
env:
- name: SPRING_DATA_MONGODB_URI
valueFrom:
secretKeyRef:
name: atlas-kubernetes-operator-cluster0-theuser
key: connectionStringStandardSrv
- name: SPRING_DATA_MONGODB_DATABASE
value: sample_supplies
- name: LOGGING_LEVEL_ORG_SPRINGFRAMEWORK
value: INFO
- name: LOGGING_LEVEL_ORG_SPRINGFRAMEWORK_WEB
value: DEBUG
```
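If you want to double-check the secret that the operator created (and that the deployment above references), you can inspect it with kubectl. The secret name follows the `<project>-<cluster>-<user>` pattern; the one below assumes the project, cluster, and user names used earlier in this tutorial.
```bash
# List the secrets in the default namespace
kubectl get secrets

# Decode the SRV connection string stored by the Atlas Kubernetes Operator
kubectl get secret atlas-kubernetes-operator-cluster0-theuser \
  -o jsonpath='{.data.connectionStringStandardSrv}' | base64 --decode
```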
A Service in Kubernetes is an abstraction that defines a logical set of pods and a policy by which to access them. It enables other components within or outside the Kubernetes cluster to communicate with your application running on pods.
```yaml
apiVersion: v1
kind: Service
metadata:
name: spring-app-service
spec:
selector:
    app: springboot-application
ports:
- protocol: TCP
port: 8080
targetPort: 8080
type: LoadBalancer
```
You can then apply those two files to your cluster, and Kubernetes will create all the pods and start the application.
```bash
kubectl apply -f ./*.yaml
```
Now, when you do…
```bash
kubectl get svc
```
…it will give you output like the example below, with an external IP created. This address, together with the default port, will be used to access the REST endpoints.
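Representative output (the cluster IP, node port, and age will differ in your environment):
```bash
NAME                 TYPE           CLUSTER-IP     EXTERNAL-IP                                                                 PORT(S)          AGE
spring-app-service   LoadBalancer   10.100.10.10   a4874d92d36fe4d2cab1ccc679b5fca7-1654035108.ap-south-1.elb.amazonaws.com   8080:31234/TCP   2m
```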
>In an ideal scenario, the service would be applied with type: ClusterIP, but since we need to test the application with API calls from outside the cluster, we specify the type as LoadBalancer.
You can use the external IP allocated with port 8080 and test the APIs.
Or use the following command to store the external address to the `EXTERNAL_IP` variable.
```bash
EXTERNAL_IP=$(kubectl get svc|grep spring-app-service|awk '{print $4}')
echo $EXTERNAL_IP
```
It should give you a response like this:
```bash
a4874d92d36fe4d2cab1ccc679b5fca7-1654035108.ap-south-1.elb.amazonaws.com
```
By this time, you should be able to deploy Atlas in the Kubernetes environment and connect with the front-end and back-end applications deployed in the same environment.
In the next section, let's test a few REST APIs using the external IP we created.
## Tests
Now that your application is deployed, running in Kubernetes, and exposed to the outside world, you can test it with the following curl commands.
1. Finding sales in London
2. Finding total sales
3. Finding the total quantity of each item
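The exact endpoint paths depend on the controller mappings in your application (they come from the aggregation tutorial referenced earlier), so treat the paths below as placeholders; the pattern is simply a GET request against the external IP on port 8080.
```bash
# Replace the paths with the actual mappings exposed by your controllers
curl "http://$EXTERNAL_IP:8080/sales/location/London"
curl "http://$EXTERNAL_IP:8080/sales/total"
curl "http://$EXTERNAL_IP:8080/sales/quantity"
```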
As we conclude our exploration of containerization in Spring applications, we're poised to delve into Kubernetes and Docker troubleshooting. Let us move into the next section as we uncover common challenges and effective solutions for a smoother deployment experience.
## Common troubleshooting errors in Kubernetes
In a containerized environment, the path to a successful deployment can sometimes involve multiple factors. To navigate any hiccups along the way, it's wise to turn to certain commands for insights:
- Examine pod status:
```bash
kubectl describe pods -n <namespace>
kubectl get pods -n <namespace>
```
- Check node status:
```bash
kubectl get nodes
```
- Dive into pod logs:
```bash
kubectl logs -f <pod-name> -n <namespace>
```
- Explore service details:
```bash
kubectl describe svc <service-name> -n <namespace>
```
During troubleshooting, encountering errors is not uncommon. Here are a few examples where you might seek additional information:
1. **Image Not Found**: This error occurs when attempting to execute a container with an image that cannot be located. It typically happens if the image hasn't been pulled successfully or isn't available in the specified Docker registry. It's crucial to ensure that the correct image name and tag are used, and if necessary, try pulling the image from the registry locally before running the container to ensure it’s there.
2. **Permission Denied:** Docker containers often operate with restricted privileges, especially for security purposes. If your application requires access to specific resources or directories within the container, it's essential to set appropriate file permissions and configure user/group settings accordingly. Failure to do so can result in permission-denied errors when trying to access these resources.
3. **Port Conflicts**: Running multiple containers on the same host machine, each attempting to use the same host port, can lead to port conflicts. This issue arises when the ports specified in the `docker run` command overlap with ports already in use by other containers or services on the host. To avoid conflicts, ensure that the ports assigned to each container are unique and not already occupied by other processes.
4. **Out of Disk Space**: Docker relies on disk space to store images, containers, and log files. Over time, these files can accumulate and consume a significant amount of disk space, potentially leading to disk space exhaustion. To prevent this, it's advisable to periodically clean up unused images and containers using the `docker system prune` command, which removes dangling images, unused containers, and other disk space-consuming artifacts.
5. **Container Crashes**: Containers may crash due to various reasons, including misconfigurations, application errors, or resource constraints. When a container crashes, it's essential to examine its logs using the `kubectl logs -f <pod-name> -n <namespace>` command. These logs often contain valuable error messages and diagnostic information that can help identify the underlying cause of the crash and facilitate troubleshooting and resolution.
6. **Docker Build Failures**: Building Docker images can fail due to various reasons, such as syntax errors in the Dockerfile, missing files or dependencies, or network issues during package downloads. It's essential to carefully review the Dockerfile for any syntax errors, ensure that all required files and dependencies are present, and troubleshoot any network connectivity issues that may arise during the build process.
7. **Networking Problems**: Docker containers may rely on network connectivity to communicate with other containers or external services. Networking issues, such as incorrect network configuration, firewall rules blocking required ports, or DNS misconfigurations, can cause connectivity problems. It's crucial to verify that the container is attached to the correct network, review firewall settings to ensure they allow necessary traffic, and confirm that DNS settings are correctly configured.
8. **Resource Constraints**: Docker containers may require specific CPU and memory resources to function correctly. Failure to allocate adequate resources can result in performance issues or application failures. When running containers, it's essential to specify resource limits using the `--cpus` and `--memory` flags to ensure that containers have sufficient resources to operate efficiently without overloading the host system.
In Kubernetes, you can set the same constraints in the resources section of the container spec in your deployment YAML, for example:
```yaml
resources:
  limits:
    cpu: "2"
    memory: "4Gi"
```
## Conclusion
Throughout this tutorial, we've covered essential aspects of modern application deployment, focusing on containerization, Kubernetes orchestration, and MongoDB management with Atlas Kubernetes Operator. Beginning with the fundamentals of containerization and Docker, we proceeded to understand Kubernetes' role in automating application deployment and management. By deploying Atlas Operator on AWS's EKS, we seamlessly integrated MongoDB into our Kubernetes infrastructure. Additionally, we containerized a Spring Boot application, connecting it to Atlas for database management. Lastly, we addressed common Kubernetes troubleshooting scenarios, equipping you with the skills needed to navigate challenges in cloud-native environments. With this knowledge, you're well-prepared to architect and manage sophisticated cloud-native applications effectively.
To learn more, please visit the resource, What is Container Orchestration? and reach out with any specific questions.
As you delve deeper into your exploration and implementation of these concepts within your projects, we encourage you to actively engage with our vibrant MongoDB community forums. Be sure to leverage the wealth of resources available on the MongoDB Developer Center and documentation to enhance your proficiency and finesse your abilities in harnessing the power of MongoDB and its features.
| md | {
"tags": [
"MongoDB",
"Java",
"AWS"
],
"pageDescription": "Learn how to use Spring application in production using Atlas Kubernetes Operator",
"contentType": "Article"
} | MongoDB Orchestration With Spring & Atlas Kubernetes Operator | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/products/atlas/amazon-sagemaker-and-mongodb-vector-search-part-2 | created | # Part #2: Create Your Model Endpoint With Amazon SageMaker, AWS Lambda, and AWS API Gateway
Welcome to Part 2 of the `Amazon SageMaker + Atlas Vector Search` series. In Part 1, I showed you how to set up an architecture that uses both tools to create embeddings for your data and how to use those to then semantically search through your data.
In this part of the series, we will look into the actual doing. No more theory! Part 2 will show you how to create the REST service described in the architecture.
The REST endpoint will serve as the encoder that creates embeddings (vectors) that will then be used in the next part of this series to search through your data semantically. The deployment of the model will be handled by Amazon SageMaker, AWS's all-in-one ML service. We will expose this endpoint using AWS Lambda and AWS API Gateway later on to make it available to the server app.
## Amazon SageMaker
Amazon SageMaker is a cloud-based, machine-learning platform that enables developers to build, train, and deploy machine learning (ML) models for any use case with fully managed infrastructure, tools, and workflows.
## Getting Started With Amazon SageMaker
Amazon SageMaker JumpStart helps you quickly and easily get started with machine learning. The solutions are fully customizable and support one-click deployment and fine-tuning of more than 150 popular open-source models, such as natural language processing, object detection, and image classification models.
It includes a number of popular solutions:
- Extract and analyze data: Automatically extract, process, and analyze documents for more accurate investigation and faster decision-making.
- Fraud detection: Automate detection of suspicious transactions faster and alert your customers to reduce potential financial loss.
- Churn prediction: Predict the likelihood of customer churn and improve retention by honing in on likely abandoners and taking remedial actions such as promotional offers.
- Personalized recommendations: Deliver customized, unique experiences to customers to improve customer satisfaction and grow your business rapidly.
## Let's set up a playground for you to try it out!
> Before we start, make sure you choose a region that is supported for `RStudio` (more on that later) and `JumpStart`. You can check both on the Amazon SageMaker pricing page by checking if your desired region appears in the `On-Demand Pricing` list.
On the main page of Amazon SageMaker, you'll find the option to `Set up for a single user`. This will set up a domain and a quick-start user.
A QuickSetupDomain is basically just a default configuration so that you can get started deploying models and trying out SageMaker. You can customize it later to your needs.
The initial setup only has to be done once, but it might take several minutes. When finished, Amazon SageMaker will notify you that the new domain is ready.
Amazon SageMaker Domain supports Amazon SageMaker machine learning (ML) environments and contains the following:
- The domain itself, which holds an AWS EC2 instance that models will be deployed onto. This inherently contains a list of authorized users and a variety of security, application, policy, and Amazon Virtual Private Cloud (Amazon VPC) configurations.
- The `UserProfile`, which represents a single user within a domain that you will be working with.
- A `shared space`, which consists of a shared JupyterServer application and shared directory. All users within the domain have access to the same shared space.
- An `App`, which represents an application that supports the reading and execution experience of the user’s notebooks, terminals, and consoles.
After the creation of the domain and the user, you can launch the SageMaker Studio, which will be your platform to interact with SageMaker, your models, and deployments for this user.
Amazon SageMaker Studio is a web-based, integrated development environment (IDE) for machine learning that lets you build, train, debug, deploy, and monitor your machine learning models.
Here, we’ll go ahead and start with a new JumpStart solution.
All you need to do to set up your JumpStart solution is to choose a model. For this tutorial, we will be using an embedding model called `All MiniLM L6 v2` by Hugging Face.
When choosing the model, click on `Deploy` and SageMaker will get everything ready for you.
You can adjust the endpoint to your needs but for this tutorial, you can totally go with the defaults.
As soon as the model shows its status as `In service`, everything is ready to be used.
Note that the endpoint name here is `jumpstart-dft-hf-textembedding-all-20240117-062453`. Note down your endpoint name — you will need it in the next step.
## Using the model to create embeddings
Now that the model is set up and the endpoint ready to be used, we can expose it for our server application.
We won’t be exposing the SageMaker endpoint directly. Instead, we will be using AWS API Gateway and AWS Lambda.
Let’s first start by creating the lambda function that uses the endpoint to create embeddings.
AWS Lambda is an event-driven, serverless computing platform provided by Amazon as a part of Amazon Web Services. It is designed to enable developers to run code without provisioning or managing servers. It executes code in response to events and automatically manages the computing resources required by that code.
In the main AWS Console, go to `AWS Lambda` and click `Create function`.
Choose to `Author from scratch`, give your function a name (`sageMakerLambda`, for example), and choose the runtime. For this example, we’ll be running on Python.
When everything is set correctly, create the function.
The following code snippet assumes that the lambda function and the Amazon SageMaker endpoint are deployed in the same AWS account. All you have to do is replace `<ENDPOINT-NAME>` with your actual endpoint name from the previous section.
Note that the `lambda_handler` returns a status code and a body. It’s ready to be exposed as an endpoint, for using AWS API Gateway.
```python
import json
import boto3
sagemaker_runtime_client = boto3.client("sagemaker-runtime")
def lambda_handler(event, context):
try:
# Extract the query parameter 'query' from the event
query_param = event.get('queryStringParameters', {}).get('query', '')
if query_param:
embedding = get_embedding(query_param)
return {
'statusCode': 200,
'body': json.dumps({'embedding': embedding})
}
else:
return {
'statusCode': 400,
'body': json.dumps({'error': 'No query parameter provided'})
}
except Exception as e:
return {
'statusCode': 500,
'body': json.dumps({'error': str(e)})
}
def get_embedding(synopsis):
input_data = {"text_inputs": synopsis}
response = sagemaker_runtime_client.invoke_endpoint(
        EndpointName="<ENDPOINT-NAME>",
Body=json.dumps(input_data),
ContentType="application/json"
)
    result = json.loads(response["Body"].read().decode())
embedding = result["embedding"][0]
return embedding
```
Don’t forget to click `Deploy`!
Figure: the Lambda code editor.
One last thing we need to do before we can use this lambda function is to make sure it actually has permission to execute the SageMaker endpoint. Head to the `Configuration` part of your Lambda function and then to `Permissions`. You can just click on the `Role Name` link to get to the associated role in AWS Identity and Access Management (IAM).
In IAM, you want to choose `Add permissions`.
You can choose `Attach policies` to attach pre-created policies from the IAM policy list.
For now, let’s use the `AmazonSageMakerFullAccess`, but keep in mind to select only those permissions that you need for your specific application.
## Exposing your lambda function via AWS API Gateway
Now, let’s head to AWS API Gateway, click `Create API`, and then `Build` on the `REST API`.
Choose to create a new API and name it. In this example, we’re calling it `sageMakerApi`.
That’s all you have to do for now. The API endpoint type can stay on regional, assuming you created the lambda function in the same region. Hit `Create API`.
First, we need to create a new resource.
The resource path will be `/`. Pick a name like `sageMakerResource`.
Next, you'll get back to your API overview. This time, click `Create method`. We need a GET method that integrates with a lambda function.
Check the `Lambda proxy integration` and choose the lambda function that you created in the previous section. Then, create the method.
Finally, don’t forget to deploy the API.
Choose a stage. This will influence the URL that we need to use (API Gateway will show you the full URL in a moment). Since we’re still testing, `TEST` might be a good choice.
This is only a test for a tutorial, but before deploying to production, please also add security layers like API keys. When everything is ready, the `Resources` tab should look something like this.
When sending requests to the API Gateway, we will receive the query as a URL query string parameter. The next step is to configure API Gateway and tell it so, and also tell it what to do with it.
Go to your `Resources`, click on `GET` again, and head to the `Method request` tab. Click `Edit`.
In the `URL query string parameters` section, you want to add a new query string by giving it a name. We chose `query` here. Set it to `Required` but not cached and save it.
The new endpoint is created. At this point, we can grab the URL and test it via cURL to see if that part worked fine. You can find the full URL (including stage and endpoint) in the `Stages` tab by opening the stage and endpoint and clicking on `GET`. For this example, it’s `https://4ug2td0e44.execute-api.ap-northeast-2.amazonaws.com/TEST/sageMakerResource`. Your URL should look similar.
Using the Amazon Cloud Shell or any other terminal, try to execute a cURL request:
```
curl -X GET 'https://4ug2td0e44.execute-api.ap-northeast-2.amazonaws.com/TEST/sageMakerResource?query=foo'
```
If everything was set up correctly, you should get a result that looks like this (the array contains 384 entries in total):
```
{"embedding": [0.01623343490064144, -0.007662375457584858, 0.01860642433166504, 0.031969036906957626,................... -0.031003709882497787, 0.008777940645813942]}
```
Your embeddings REST service is ready. Congratulations! Now you can convert your data into a vector with 384 dimensions!
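As a quick sanity check from code, you could call the same endpoint from Python. The URL below is a placeholder for your own API Gateway stage and resource URL.
```python
import requests

# Placeholder URL -- replace with your own API Gateway stage + resource URL
url = "https://<api-id>.execute-api.<region>.amazonaws.com/TEST/sageMakerResource"

# The Lambda function returns a JSON body of the form {"embedding": [...]}
response = requests.get(url, params={"query": "A movie about space travel"})
embedding = response.json()["embedding"]

print(len(embedding))  # should print 384
```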
In the next and final part of the tutorial, we will be looking into using this endpoint to prepare vectors and execute a vector search using MongoDB Atlas.
✅ Sign up for a free cluster.
✅ Already have an AWS account? Atlas supports paying for usage via the AWS Marketplace (AWS MP) without any upfront commitment — simply sign up for MongoDB Atlas via AWS Marketplace.
✅ Get help on our Community Forums.
| md | {
"tags": [
"Atlas",
"Python",
"AI",
"AWS",
"Serverless"
],
"pageDescription": "In this series, we look at how to use Amazon SageMaker and MongoDB Atlas Vector Search to semantically search your data.",
"contentType": "Tutorial"
} | Part #2: Create Your Model Endpoint With Amazon SageMaker, AWS Lambda, and AWS API Gateway | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/connectors/deploying-kubernetes-operator | created | # Deploying the MongoDB Enterprise Kubernetes Operator on Google Cloud
This article is part of a three-parts series on deploying MongoDB across multiple Kubernetes clusters using the operators.
- Deploying the MongoDB Enterprise Kubernetes Operator on Google Cloud
- Mastering MongoDB Ops Manager
- Deploying MongoDB Across Multiple Kubernetes Clusters With MongoDBMulti
Deploying and managing MongoDB on Kubernetes can be a daunting task. It requires creating and configuring various Kubernetes resources, such as persistent volumes, services, and deployments, which can be time-consuming and require a deep understanding of both Kubernetes and MongoDB products. Furthermore, tasks such as scaling, backups, and upgrades must be handled manually, which can be complex and error-prone. This can impact the reliability and availability of your MongoDB deployment and may require frequent manual intervention to keep it running smoothly. Additionally, it can be hard to ensure that your MongoDB deployment is running in the desired state and is able to recover automatically from failures.
Fortunately, MongoDB offers operators, which are software extensions to the Kubernetes API that use custom resources to manage applications and their components. The MongoDB Operator translates human knowledge of creating a MongoDB instance into a scalable, repeatable, and standardized method, and leverages Kubernetes features to operate MongoDB for you. This makes it easier to deploy and manage MongoDB on Kubernetes, providing advanced features and functionality for running MongoDB in cloud-native environments.
There are three main Kubernetes operators available for deploying and managing MongoDB smoothly and efficiently in Kubernetes environments:
- The MongoDB Community Kubernetes Operator is an open-source operator that is available for free and can be used to deploy and manage MongoDB Replica Set on any Kubernetes cluster. It provides basic functionality for deploying and managing MongoDB but does not include some of the more advanced features available in the Enterprise and Atlas operators.
- The MongoDB Enterprise Kubernetes Operator is a commercial Kubernetes operator included with the MongoDB Enterprise subscription. It allows you to easily deploy and manage any type of MongoDB deployment (standalone, replica set, sharded cluster) on Kubernetes, providing advanced features and functionality for deploying and managing MongoDB in cloud-native environments.
- The MongoDB Atlas Kubernetes Operator is an operator that is available as part of the Atlas service. It allows you to quickly deploy and manage MongoDB on the Atlas cloud platform, providing features such as automatic provisioning and scaling of MongoDB clusters, integration with Atlas features and services, and automatic backups and restores. You can learn more about this operator in our blog post on application deployment in Kubernetes.
This article will focus on the Enterprise Operator. The MongoDB Enterprise Kubernetes Operator seamlessly integrates with other MongoDB Enterprise features and services, such as MongoDB Ops Manager (which can also run on Kubernetes) and MongoDB Cloud Manager. This allows you to easily monitor, back up, upgrade, and manage your MongoDB deployments from a single, centralized location, and provides access to a range of tools and services for managing, securing, and optimizing your deployment.
## MongoDB Enterprise Kubernetes Operator
The MongoDB Enterprise Kubernetes Operator automates the process of creating and managing MongoDB instances in a scalable, repeatable, and standardized manner. It uses the Kubernetes API and tools to handle the lifecycle events of a MongoDB cluster, including provisioning storage and computing resources, configuring network connections, setting up users, and making changes to these settings as needed. This helps to ease the burden of manually configuring and managing stateful applications, such as databases, within the Kubernetes environment.
## Kubernetes Custom Resource Definitions
Kubernetes CRDs (Custom Resource Definitions) is a feature in Kubernetes that allows users to create and manage custom resources in their Kubernetes clusters. Custom resources are extensions of the Kubernetes API that allow users to define their own object types and associated behaviors. With CRDs, you can create custom resources that behave like built-in Kubernetes resources, such as StatefulSets, Deployments, Pods, and Services, and manage them using the same tools and interfaces. This allows you to extend the functionality of Kubernetes and tailor it to their specific needs and requirements.
The MongoDB Enterprise Operator currently provides the following custom resources for deploying MongoDB on Kubernetes:
- MongoDBOpsManager Custom Resource
- MongoDB Custom Resource
- Standalone
- ReplicaSet
- ShardedCluster
- MongoDBUser Custom Resource
- MongoDBMulti
Figure: example of Ops Manager and MongoDB custom resources on Kubernetes.
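To give you an idea of what these custom resources look like, here is a minimal sketch of a MongoDB replica set resource for the Enterprise Operator. The project ConfigMap and credentials secret names, the namespace, and the MongoDB version are placeholders; creating and wiring up those resources is covered later in this series.
```yaml
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: my-replica-set
  namespace: mongodb-operator
spec:
  type: ReplicaSet
  members: 3
  version: "6.0.5"
  # Placeholder references to an Ops Manager project ConfigMap and API key secret
  opsManager:
    configMapRef:
      name: my-project
  credentials: my-credentials
```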
## Installing and configuring Enterprise Kubernetes Operator
For this tutorial, we will need the following tools:
- gcloud
- gke-cloud-auth-plugin
- Helm
- kubectl
- kubectx
- Git
## GKE Kubernetes cluster creation
To start, let's create a Kubernetes cluster in a new project. We will be using GKE (Google Kubernetes Engine). I use this script to create the cluster. The cluster will have four worker nodes and will act as the Kubernetes cluster for Ops Manager and the MongoDB Enterprise Operator.
```bash
CLUSTER_NAME=master-operator
ZONE=us-south1-a
K8S_VERSION=1.23
MACHINE=n2-standard-2
gcloud container clusters create "${CLUSTER_NAME}" \
--zone "${ZONE}" \
--machine-type "${MACHINE}" --cluster-version="${K8S_VERSION}" \
--disk-type=pd-standard --num-nodes 4
```
Now that the cluster has been created, we need to obtain the credentials.
```bash
gcloud container clusters get-credentials "${CLUSTER_NAME}" \
--zone "${ZONE}"
```
Display the newly created cluster.
```bash
gcloud container clusters list
NAME LOCATION MASTER_VERSION NUM_NODES STATUS
master-operator us-south1-a 1.23.14-gke.1800 4 RUNNING
```
We can also display Kubernetes full cluster name using `kubectx`.
```bash
kubectx
```
You should see your cluster listed here. Make sure your context is set to the master cluster.
```bash
kubectx $(kubectx | grep "master-operator" | awk '{print $1}')
```
We are able to start MongoDB Kubernetes Operator installation on our newly created Kubernetes cluster!
## Enterprise Kubernetes Operator
We can install the MongoDB Enterprise Operator with a single line Helm command. The first step is to add the MongoDB Helm Charts for Kubernetes repository to Helm.
```bash
helm repo add mongodb https://mongodb.github.io/helm-charts
```
I want to create the operator in a separate, dedicated Kubernetes namespace (the operator uses the `default` namespace by default). This will allow me to isolate the operator and any resources it creates from other resources in my cluster. The following command will install the CRDs and the Enterprise Operator in the `mongodb-operator` namespace. The operator will be watching only the `mongodb-operator` namespace. You can read more about setting up the operator to watch more namespaces in the official MongoDB documentation.
Start by creating the `mongodb-operator` namespace.
```bash
NAMESPACE=mongodb-operator
kubectl create ns "${NAMESPACE}"
```
Install the MongoDB Kubernetes Operator and set it to watch only the `mongodb-operator` namespace.
```bash
HELM_CHART_VERSION=1.16.3
helm install enterprise-operator mongodb/enterprise-operator \
--namespace "${NAMESPACE}" \
--version="${HELM_CHART_VERSION}" \
--set operator.watchNamespace="${NAMESPACE}"
```
The namespace has been created and the operator is running! You can confirm this by listing the namespaces and then the pods in the newly created namespace.
```bash
kubectl get ns
NAME STATUS AGE
default Active 4m9s
kube-node-lease Active 4m11s
kube-public Active 4m12s
kube-system Active 4m12s
mongodb-operator Active 75s
```
```bash
kubectl get po -n "${NAMESPACE}"
NAME READY STATUS RESTARTS AGE
mongodb-enterprise-operator-649bbdddf5 1/1 Running 0 7m9s
```
You can see that the helm chart is running with this command.
```bash
helm list --namespace "${NAMESPACE}"
NAME NAMESPACE REVISION VERSION
enterprise-operator mongodb-operator 1 deployed enterprise-operator-1.17.2
```
### Verify the installation
You can verify that the installation was successful and is currently running with the following command.
```bash
helm get manifest enterprise-operator --namespace "${NAMESPACE}"
```
Let's display Custom Resource Definitions installed in the step above in the watched namespace.
```bash
kubectl -n "${NAMESPACE}" get crd | grep -E '^(mongo|ops)'
mongodb.mongodb.com 2022-12-30T16:17:07Z
mongodbmulti.mongodb.com 2022-12-30T16:17:08Z
mongodbusers.mongodb.com 2022-12-30T16:17:09Z
opsmanagers.mongodb.com 2022-12-30T16:17:09Z
```
All required service accounts have been created in the watched namespace.
```bash
kubectl -n "${NAMESPACE}" get sa | grep -E '^(mongo)'
mongodb-enterprise-appdb 1 36s
mongodb-enterprise-database-pods 1 36s
mongodb-enterprise-operator 1 36s
mongodb-enterprise-ops-manager 1 36s
```
Validate that the Kubernetes Operator was installed correctly by running the following command and verifying the output.
```bash
kubectl describe deployments mongodb-enterprise-operator -n \
"${NAMESPACE}"
```
Finally, double-check the watched namespace.
```bash
kubectl describe deploy mongodb-enterprise-operator -n "${NAMESPACE}" | grep WATCH
WATCH_NAMESPACE: mongodb-operator
```
The MongoDB Enterprise Operator is now running in your GKE cluster.
## MongoDB Atlas Kubernetes Operator
It's worth mentioning another operator here --- a new service that integrates Atlas resources with your Kubernetes cluster. Atlas can be deployed in multi-cloud environments including Google Cloud. The Atlas Kubernetes Operator allows you to deploy and manage cloud-native applications that require data services in a single control plane with secure enterprise platform integration.
This operator is responsible for managing resources in Atlas using Kubernetes custom resources, ensuring that the configurations of projects, database deployments, and database users in Atlas are consistent with each other. The Atlas Kubernetes Operator uses the `AtlasProject`, `AtlasDeployment`, and `AtlasDatabaseUser` Custom Resources that you create in your Kubernetes cluster to manage resources in Atlas.
These custom resources allow you to define and configure the desired state of your projects, database deployments, and database users in Atlas. To learn more, head over to our blog post on application deployment in Kubernetes with the MongoDB Atlas Operator.
## Conclusion
Upon the successful installation of the Kubernetes Operator, we are able to use the capabilities of the MongoDB Enterprise Kubernetes Operator to run MongoDB objects on our Kubernetes cluster. The Operator enables easy deployment of the following applications into Kubernetes clusters:
- MongoDB --- replica sets, sharded clusters, and standalones --- with authentication, TLS, and many more options.
- Ops Manager --- enterprise management, monitoring, and backup platform for MongoDB. The Operator can install and manage Ops Manager in Kubernetes for you. Ops Manager can manage MongoDB instances both inside and outside Kubernetes. Installing Ops Manager is covered in the second article of the series.
- MongoDBMulti --- Multi-Kubernetes-cluster deployments allow you to add MongoDB instances in global clusters that span multiple geographic regions for increased availability and global distribution of data. This is covered in the final part of this series.
Want to see the MongoDB Enterprise Kubernetes Operator in action and discover all the benefits it can bring to your Kubernetes deployment? Continue reading the next blog of this series and we'll show you how to best utilize the Operator for your needs | md | {
"tags": [
"Connectors",
"Kubernetes"
],
"pageDescription": "Learn how to deploy the MongoDB Enterprise Kubernetes Operator in this tutorial.",
"contentType": "Tutorial"
} | Deploying the MongoDB Enterprise Kubernetes Operator on Google Cloud | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/mongodb-atlas-terraform-database-users-vault | created | # MongoDB Atlas With Terraform: Database Users and Vault
In this tutorial, I will show how to create a user for the MongoDB database in Atlas using Terraform and how to store this credential securely in HashiCorp Vault. We saw in the previous article, MongoDB Atlas With Terraform - Cluster and Backup Policies, how to create a cluster with configured backup policies. Now, we will go ahead and create our first user. If you haven't seen the previous articles, I suggest you take a look to understand how to get started.
This article is for anyone who intends to use or already uses infrastructure as code (IaC) on the MongoDB Atlas platform or wants to learn more about it.
Everything we do here is contained in the provider/resource documentation:
- mongodbatlas_database_user
- vault_kv_secret_v2
> Note: We will not use a backend file. However, for productive implementations, it is extremely important and safer to store the state file in a remote location such as S3, GCS, Azurerm, etc.
## Creating a User
At this point, we will create our first user using Terraform in MongoDB Atlas and store the URI to connect to my cluster in HashiCorp Vault. For those unfamiliar, HashiCorp Vault is a secrets management tool that allows you to securely store, access, and manage sensitive credentials such as passwords, API keys, certificates, and more. It is designed to help organizations protect their data and infrastructure in complex, distributed IT environments. In it, we will store the connection URI of the user that will be created with the cluster we created in the last article.
Before we begin, make sure that all the prerequisites mentioned in the previous article are properly configured: Install Terraform, create an API key in MongoDB Atlas, and set up a project and a cluster in Atlas. These steps are essential to ensure the success of creating your database user.
### Configuring HashiCorp Vault to run on Docker
The first step is to run HashiCorp Vault so that we can test our module. It is possible to run Vault locally in Docker. If you don't have Docker installed, you can download it. After installing Docker, we will pull the image we want to run — in this case, Vault. To do this, run `docker pull vault:1.13.3` in the terminal, or download the image using Docker Desktop.
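Below is a minimal sketch of starting the container in dev mode for local testing. The root token value is an arbitrary choice for this demo, and the dev server stores everything in memory, so it is not suitable for production.
```bash
# Start a local, in-memory Vault dev server on port 8200 (root token is arbitrary)
docker run --cap-add=IPC_LOCK -d --name=dev-vault \
  -e 'VAULT_DEV_ROOT_TOKEN_ID=myroot' \
  -p 8200:8200 vault:1.13.3
```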
## Creating the Terraform version file
The version file continues to have the same purpose, as mentioned in other articles, but we will add the version of the Vault provider as something new.
```
terraform {
required_version = ">= 0.12"
required_providers {
mongodbatlas = {
source = "mongodb/mongodbatlas"
version = "1.14.0"
}
vault = {
source = "hashicorp/vault"
version = "4.0.0"
}
}
}
```
### Defining the database user and vault resource
After configuring the version file and establishing the Terraform and provider versions, the next step is to define the user resource in MongoDB Atlas. This is done by creating a .tf file — for example, main.tf — where we will create our module. As we are going to make a module that will be reusable, we will use variables and default values so that other calls can create users with different permissions, without having to write a new module.
```
# ------------------------------------------------------------------------------
# RANDOM PASSWORD
# ------------------------------------------------------------------------------
resource "random_password" "default" {
length = var.password_length
special = false
}
# ------------------------------------------------------------------------------
# DATABASE USER
# ------------------------------------------------------------------------------
resource "mongodbatlas_database_user" "default" {
project_id = data.mongodbatlas_project.default.id
username = var.username
password = random_password.default.result
auth_database_name = var.auth_database_name
dynamic "roles" {
for_each = var.roles
content {
role_name = try(roles.value"role_name"], null)
database_name = try(roles.value["database_name"], null)
collection_name = try(roles.value["collection_name"], null)
}
}
dynamic "scopes" {
for_each = var.scope
content {
name = scopes.value["name"]
type = scopes.value["type"]
}
}
dynamic "labels" {
for_each = local.tags
content {
key = labels.key
value = labels.value
}
}
}
resource "vault_kv_secret_v2" "default" {
mount = var.vault_mount
name = var.secret_name
data_json = jsonencode(local.secret)
}
```
At the beginning of the file, we have the random_password resource, which is used to generate a random password for our user. In the mongodbatlas_database_user resource, we specify the user details. We place some values as variables, as done in other articles, such as username and auth_database_name, which defaults to admin.
Below that, we create three dynamic blocks: roles, scopes, and labels. The roles variable is a list of maps that can contain the name of the role (read, readWrite, or some other role), the database_name, and the collection_name. The last two values can be optional; for example, if you create a user with the atlasAdmin role, it is not necessary to specify a database or collection, and you can also specify only the database and not a specific collection. We will see an example of this later. For the scopes block, the type is either DATA_LAKE or CLUSTER. In our case, we specify a cluster: the name of the cluster we created, the demo cluster. Finally, the labels serve as tags for our user.
Finally, we define the vault_kv_secret_v2 resource that will create a secret in our Vault. It receives the mount where it will be created and the name of the secret. The data_json is the value of the secret; we are creating it in the locals.tf file that we will evaluate below. It is a JSON value — that is why we are encoding it.
In the variable.tf file, we create variables with default values:
```
variable "project_name" {
description = "The name of the Atlas project"
type = string
}
variable "cluster_name" {
description = "The name of the Atlas cluster"
type = string
}
variable "password_length" {
description = "The length of the password"
type = number
default = 20
}
variable "username" {
description = "The username of the database user"
type = string
}
variable "auth_database_name" {
description = "The name of the database in which the user is created"
type = string
default = "admin"
}
variable "roles" {
  description = "List of role blocks for the user; each item can set role_name, database_name, and collection_name"
  type        = list(map(string))
}
variable "scope" {
  description = "List of scopes for the user; each item sets a name and a type (CLUSTER or DATA_LAKE)"
  type        = list(map(string))
}
variable "vault_mount" {
  description = "The Vault KV v2 mount where the secret will be stored"
  type        = string
}
variable "secret_name" {
  description = "The name of the secret in Vault"
  type        = string
}
```
> Note: Remember to export the environment variables with the public and private key.
```terraform
export MONGODB_ATLAS_PUBLIC_KEY="your_public_key"
export MONGODB_ATLAS_PRIVATE_KEY="your_private_key"
```
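To call the module, you also need to supply concrete values for these variables, for example through a terraform.tfvars file or a module block. The values below are hypothetical, but they are consistent with the demo cluster and the permission test we run at the end of this article.
```terraform
# Hypothetical values; adjust the project, cluster, mount, and secret names to your environment
project_name = "project-test"
cluster_name = "cluster-demo"
username     = "usr_myapp"

roles = [
  {
    role_name     = "readWrite"
    database_name = "db1"
  }
]

scope = [
  {
    name = "cluster-demo"
    type = "CLUSTER"
  }
]

vault_mount = "secret"
secret_name = "mongodb/usr_myapp"
```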
Now, we run init and then plan, as in previous articles.
We assess that our plan is exactly what we expect and run the apply to create it.
When running the `terraform apply` command, you will be prompted for approval with `yes` or `no`. Type `yes`.
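For reference, the full command sequence is:
```bash
terraform init
terraform plan
terraform apply
```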
Now, let's look in Atlas to see if the user was created successfully...
![User displayed in database access][6]
![Access permissions displayed][7]
Let's also look in the Vault to see if our secret was created.
![MongoDB secret URI][8]
It was created successfully! Now, let's test if the URI is working perfectly.
This is the format of the URI that is generated:
`mongosh "mongodb+srv://usr_myapp:@/admin?retryWrites=true&majority&readPreference=secondaryPreferred"`
![Mongosh login ][9]
We connect and make an insertion to evaluate whether the permissions are adequate — initially, in db1, in collection1.
![Command to insert to db and acknowledged][10]
Success! Now, let's try db3 to make sure the user does not have permission on another database.
![Access denied to unauthroized collection][11]
Excellent — permission denied, as expected.
We have reached the end of this series of articles about MongoDB. I hope they were enlightening and useful for you!
To learn more about MongoDB and various tools, I invite you to visit the [Developer Center to read the other articles.
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt3adf134a1cc654f8/661cefe94c473591d2ee4ca7/image2.png
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt01b4534800d306c0/661cefe912f2752a7aeff578/image8.png
[3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltabb003cbf7efb6fa/661cefe936f462858244ec50/image1.png
[4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt2c34530c41490c28/661cefe90aca6b12ed3273b3/image7.png
[5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt867d41655e363848/661cefe931ff3a1d35a41344/image9.png
[6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltcdaf7406e85f79d5/661cefe936f462543444ec54/image3.png
[7]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltbb0d0b37cd3e7e23/661cefe91c390d5d3c98ec3d/image10.png
[8]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt0dc4c9ad575c4118/661cefe9ba18470cf69b8c14/image6.png
[9]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltc6c61799f656701f/661cf85d4c4735186bee4ce7/image5.png
[10]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltdd01eaae2a3d9d24/661cefe936f462254644ec58/image11.png
[11]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt05fe248cb479b18a/661cf85d4c47359b89ee4ce5/image4.png | md | {
"tags": [
"Atlas",
"Terraform"
],
"pageDescription": "Learn how to create a user for MongoDB and secure their credentials securely in Hashicorp Vault.",
"contentType": "Tutorial"
} | MongoDB Atlas With Terraform: Database Users and Vault | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/products/atlas/atlas-one-click-deployment-integ | created | # Single Click to Success: Deploying on Netlify, Vercel, Heroku, and Render with Atlas
MongoDB One-Click Starters are pre-configured project templates tailored for specific development stacks, designed to be deployed with just a few clicks. The primary purpose of these starters is to streamline the process of setting up new projects by providing a battle-tested structure that includes MongoDB Atlas as the database.
By utilizing MongoDB One-Click Starters, developers can significantly speed up project setup, reduce configuration errors, and promote best practices in using MongoDB. These starters eliminate the need to start from scratch or spend time configuring the database, allowing developers to focus more on the core features of their applications.
In this document, we will cover detailed insights into four specific MongoDB One-Click Starters:
1. Netlify MongoDB Starter
1. Vercel MongoDB Next.js FastAPI Starter
1. Heroku MERN Atlas Starter
1. Render MERN Atlas Starter
For each starter, we will provide a single-click deploy button as well as information on how to deploy and effectively use that starter to kickstart your projects efficiently.
## Netlify MongoDB Starter
--------------------------------------------------------------------------------
The Netlify MongoDB Starter is a template specifically designed for projects that intend to utilize MongoDB paired with Netlify, particularly focusing on JAMstack applications. This starter template comes equipped with key features that streamline the development process and enhance the functionality of applications built on this stack.
**Frameworks**:
- Next.js
- React
**Key features**:
**Pre-configured environment for serverless functions**: The starter provides a seamless environment setup for serverless functions, enabling developers to create dynamic functionalities without the hassle of server management.
**Integrated MongoDB connection**: With an integrated MongoDB connection, developers can easily leverage the powerful features of MongoDB for storing and managing data within their applications.
**Ideal use cases**:
The Netlify MongoDB Starter is ideal for the following scenarios:
**Rapid prototyping**: Developers looking to quickly prototype web applications that require a backend database can benefit from the pre-configured setup of this starter template.
**Full-fledged applications with minimal server management**: For projects aiming to build comprehensive applications with minimal server management overhead, the Netlify MongoDB Starter offers a robust foundation.
### Deployment guide
To deploy the Netlify MongoDB Starter, follow these steps:
**Clone the GitHub repository**:
Click the “Deploy to Netlify” button or clone the repository from Netlify MongoDB Starter GitHub repository to your local machine using Git.
**Setting up environment variables for MongoDB connection**:
Within the cloned repository, set up the necessary environment variables to establish a connection with your MongoDB database.
### Exploring and customizing the Starter:
To explore and modify the Netlify MongoDB Starter for custom use, consider the following tips:
**Directory structure**: Familiarize yourself with the directory structure of the starter to understand the organization of files and components.
**Netlify functions**: Explore the pre-configured serverless functions and customize them to suit your application's requirements.
## Vercel MongoDB Next FastAPI Starter
--------------------------------------------------------------------------------
The Vercel MongoDB Next.js FastAPI Starter is a unique combination designed for developers who seek a powerful setup to effectively utilize MongoDB in applications requiring both Next.js for frontend development and FastAPI for backend API services, all while being hosted on Vercel. This starter kit offers a seamless integration between Next.js and FastAPI, enabling developers to build web applications with a dynamic front end and a robust backend API.
**Frameworks**:
- Next.js
- React
- FastAPI
**Key features**:
**Integration**: The starter provides a smooth integration between Next.js and FastAPI, allowing developers to work on the front end and back end seamlessly.
**Database**: It leverages MongoDB Atlas as the database solution, offering a reliable and scalable option for storing application data.
**Deployment**: Easy deployment on Vercel provides developers with a hassle-free process to host their applications and make them accessible on the web.
**Ideal Use Cases**:
The Vercel MongoDB Next FastAPI Starter is ideal for developers looking to build modern web applications that require a dynamic front end powered by Next.js and a powerful backend API using FastAPI. Use cases include building AI applications, e-commerce platforms, content management systems, or any application requiring real-time data updates and user interactions.
### Step-by-step deployment guide
**Use the starter kit**: Click “Deploy,” or clone or download the starter kit from the GitHub repository.
**Configuration**:
Configure MongoDB Atlas: Set up a database cluster on MongoDB Atlas and obtain the connection string.
Vercel setup: Create an account on Vercel and install the Vercel CLI for deployment.
**Environment setup**:
Create a `.env` file in the project root to store environment variables like the MongoDB connection string.
Configure the necessary environment variables in the `.env` file.
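As a sketch, the `.env` file might look like the following; the exact variable name expected by the starter may differ, so check its README.
```bash
# Hypothetical variable name -- confirm against the starter's README
MONGODB_URI="mongodb+srv://<user>:<password>@<cluster-host>/<database>?retryWrites=true&w=majority"
```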
**Deployment**:
Use the Vercel CLI to deploy the project to Vercel by running the command after authentication.
Follow the prompts to deploy the application on Vercel.
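For example, after authenticating with `vercel login`, a production deployment can typically be triggered from the project root with:
```bash
vercel --prod
```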
**Customizations**:
For specific application needs, developers can customize the starter kit by:
- Adding additional features to the front end using Next.js components and libraries.
- Extending the backend API functionality by adding more endpoints and services in FastAPI.
- Integrating other third-party services or databases to suit the project requirements.
By leveraging the flexibility and capabilities of the Vercel MongoDB Next FastAPI Starter, developers can efficiently create and deploy modern web applications with a well-integrated frontend and backend system that utilizes MongoDB for data management.
## Heroku MERN Atlas Starter
--------------------------------------------------------------------------------
The Heroku MERN Atlas Starter is meticulously designed for developers looking to effortlessly deploy MERN stack applications, which combine MongoDB, Express.js, React, and Node.js, on the Heroku platform. This starter kit boasts key features that simplify the deployment process, including seamless Heroku integration, pre-configured connectivity to MongoDB Atlas, and a structured scaffolding for implementing CRUD (Create, Read, Update, Delete) operations.
Ideal for projects requiring a robust and versatile technology stack spanning both client-side and server-side components, the Heroku MERN Atlas Starter is best suited for building scalable web applications. By leveraging the functionalities provided within this starter kit, developers can expedite the development process and focus on crafting innovative solutions rather than getting bogged down by deployment complexities.
### Deployment Guide
To begin utilizing the Heroku MERN Atlas Starter, developers can click the “Deploy to Heroku” button or first clone the project repository from GitHub using the Heroku MERN Atlas starter repository. Subsequently, configuring Heroku and MongoDB details is a straightforward process, enabling developers to seamlessly set up their deployment environment.
Upon completion of the setup steps, deploying and running the application on Heroku becomes a breeze. Developers can follow a structured deployment guide provided within the starter kit to ensure a smooth transition from development to the production environment. It is recommended that readers explore the source code of the Heroku MERN Atlas Starter to foster a deeper understanding of the implementation details and to tailor the starter kit to their specific project requirements.
Embark on your journey with the Heroku MERN Atlas Starter today to experience a streamlined deployment process and unleash the full potential of MERN stack applications.
## Render MERN Atlas Starter
--------------------------------------------------------------------------------
Render MERN Atlas Starter is a specialized variant tailored for developers who prefer leveraging Render's platform for hosting MERN stack applications. This starter pack is designed to simplify and streamline the process of setting up a full-stack application on Render, with integrated support for MongoDB Atlas, a popular database service offering flexibility and scalability.
**Key Features**:
**Automatic deployments**: It facilitates seamless deployments directly from GitHub repositories, ensuring efficient workflow automation.
**Free SSL certificates**: It comes with built-in support for SSL certificates, guaranteeing secure communication between the application and the users.
**Easy scaling options**: Render.com provides hassle-free scalability options, allowing applications to adapt to varying levels of demand effortlessly.
**Use cases**:
Render MERN Atlas Starter is especially beneficial for projects that require straightforward deployment and easy scaling capabilities. It is ideal for applications where rapid development cycles and quick scaling are essential, such as prototyping new ideas, building MVPs, or deploying small- to medium-sized web applications.
### Deployment guide
To deploy the Render MERN Atlas Starter on Render, follow these steps:
**Setting up MongoDB Atlas Database**: Create a MongoDB Atlas account and configure a new database instance according to your application's requirements.
**Linking project to Render from GitHub**: Click “Deploy to Render” or share the GitHub repository link containing your MERN stack application code with Render. This enables Render to automatically fetch code updates for deployments.
**Configuring deployment settings**: On Render, specify the deployment settings, including the environment variables, build commands, and other configurations relevant to your application.
Feel free to use the repository link for the Render MERN Atlas Starter.
We encourage developers to experiment with the Render MERN Atlas Starter to explore its architecture and customization possibilities fully. By leveraging this starter pack, developers can quickly launch robust MERN stack applications on Render and harness the benefits of its deployment and scaling features.
## Conclusion
In summary, the MongoDB One-Click Starters provide an efficient pathway for developers to rapidly deploy and integrate MongoDB into various application environments. Whether you’re working with Netlify, Vercel, Heroku, or Render, these starters offer a streamlined setup process, pre-configured features, and seamless MongoDB Atlas integration. By leveraging these starters, developers can focus more on building robust applications rather than the intricacies of deployment and configuration. Embrace these one-click solutions to enhance your development workflow and bring your MongoDB projects to life with ease.
Ready to elevate your development experience? Dive into the world of MongoDB One-Click Starters and unleash the full potential of your projects. Register for Atlas and start building today!
Have questions, or want to engage with our community? Visit the MongoDB Community.
| md | {
"tags": [
"Atlas",
"Python",
"JavaScript",
"Next.js",
"Vercel",
"Netlify"
],
"pageDescription": "Explore the 'MongoDB One-Click Starters: A Comprehensive Guide' for an in-depth look at deploying MongoDB with Netlify, Vercel, Heroku, and Render. This guide covers essential features, ideal use cases, and step-by-step deployment instructions to kickstart your MongoDB projects.",
"contentType": "Quickstart"
} | Single Click to Success: Deploying on Netlify, Vercel, Heroku, and Render with Atlas | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/languages/go/interact-aws-lambda-function-go | created | # Interact with MongoDB in an AWS Lambda Function Using Go
If you're a Go developer and you're looking to go serverless, AWS Lambda is a solid choice that will get you up and running in no time. But what happens when you need to connect to your database? With serverless functions, also known as functions as a service (FaaS), you can never be sure about the uptime of your function or how it has chosen to scale automatically with demand. For this reason, concurrent connections to your database, which aren't infinite, need to be handled a little differently. In other words, we want to be efficient in how we connect to and interact with the database.
In this tutorial, we'll see how to create a serverless function using the Go programming language and that function will connect to and query MongoDB Atlas in an efficient manner.
## The prerequisites
To narrow the scope of this particular tutorial, there are a few prerequisites that must be met prior to starting:
- A MongoDB Atlas cluster with network access and user roles already configured.
Already have an AWS account? Atlas supports paying for usage via the AWS Marketplace (AWS MP) without any upfront commitment — simply sign up for MongoDB Atlas via AWS Marketplace.
- The sample MongoDB Atlas dataset loaded.
- Knowledge of the Go programming language.
- An Amazon Web Services (AWS) account with a basic understanding of AWS Lambda.
We won't go through the process of deploying a MongoDB Atlas cluster in this tutorial, including the configuration of network allow lists or users. As long as AWS has access through a VPC or global IP allow and a user that can read from the sample databases, you'll be fine.
If you need help getting started with MongoDB Atlas, check out this tutorial on the subject.
The point of this tutorial is not to explore the ins and outs of AWS Lambda, but instead see how to include MongoDB in our workflow. For this reason, you should have some knowledge of AWS Lambda and how to use it prior to proceeding.
## Build an AWS Lambda function with Golang and MongoDB
To kick things off, we need to create a new Go project on our local computer. Execute the following commands from your command line:
```bash
mkdir lambdaexample
cd lambdaexample
go mod init lambdaexample
```
The above commands will create a new project directory and initialize the use of Go Modules for our AWS Lambda and MongoDB dependencies.
Next, execute the following commands from within your project:
```bash
go get go.mongodb.org/mongo-driver/mongo
go get github.com/aws/aws-lambda-go/lambda
```
The above commands will download the Go driver for MongoDB and the AWS Lambda SDK.
Finally, create a **main.go** file in your project. The **main.go** file will be where we add all our project code.
Within the **main.go** file, add the following code:
```go
package main
import (
"context"
"os"
"github.com/aws/aws-lambda-go/lambda"
"go.mongodb.org/mongo-driver/bson"
"go.mongodb.org/mongo-driver/bson/primitive"
"go.mongodb.org/mongo-driver/mongo"
"go.mongodb.org/mongo-driver/mongo/options"
)
type EventInput struct {
Limit int64 `json:"limit"`
}
type Movie struct {
ID primitive.ObjectID `bson:"_id" json:"_id"`
Title string `bson:"title" json:"title"`
Year int32 `bson:"year" json:"year"`
}
var client, err = mongo.Connect(context.Background(), options.Client().ApplyURI(os.Getenv("ATLAS_URI")))
func HandleRequest(ctx context.Context, input EventInput) ([]Movie, error) {
if err != nil {
return nil, err
}
collection := client.Database("sample_mflix").Collection("movies")
opts := options.Find()
if input.Limit != 0 {
opts = opts.SetLimit(input.Limit)
}
cursor, err := collection.Find(context.Background(), bson.M{}, opts)
if err != nil {
return nil, err
}
var movies []Movie
if err = cursor.All(context.Background(), &movies); err != nil {
return nil, err
}
return movies, nil
}
func main() {
lambda.Start(HandleRequest)
}
```
Don't worry, we're going to break down what the above code does and how it relates to your serverless function.
First, you'll notice the following two data structures:
```go
type EventInput struct {
Limit int64 `json:"limit"`
}
type Movie struct {
ID primitive.ObjectID `bson:"_id" json:"_id"`
Title string `bson:"title" json:"title"`
Year int32 `bson:"year" json:"year"`
}
```
In this example, `EventInput` represents any input that can be sent to our AWS Lambda function. The `Limit` field will represent how many documents the user wants to return with their request. The data structure can include whatever other fields you think would be helpful.
The `Movie` data structure represents the data that we plan to return back to the user. It has both BSON and JSON annotations on each of the fields. The BSON annotation maps the MongoDB document fields to the local variable and the JSON annotation maps the local field to data that AWS Lambda can understand.
We will be using the **sample_mflix** database in this example and that database has a **movies** collection. Our `Movie` data structure is meant to map documents in that collection. You can include as many or as few fields as you want, but only the fields included will be returned to the user.
Next, we want to handle a connection to the database:
```go
var client, err = mongo.Connect(context.Background(), options.Client().ApplyURI(os.Getenv("ATLAS_URI")))
```
The above line creates a database client for our application. It uses an `ATLAS_URI` environment variable with the connection information. We'll set that later in AWS Lambda.
We don't want to establish a database connection every time the function is executed. We only want to connect when the function starts. We don't have control over when a function starts, so the correct solution is to connect outside of the `HandleRequest` function and outside of the `main` function.
Most of our magic happens in the `HandleRequest` function:
```go
func HandleRequest(ctx context.Context, input EventInput) ([]Movie, error) {
if err != nil {
return nil, err
}
collection := client.Database("sample_mflix").Collection("movies")
opts := options.Find()
if input.Limit != 0 {
opts = opts.SetLimit(input.Limit)
}
cursor, err := collection.Find(context.Background(), bson.M{}, opts)
if err != nil {
return nil, err
}
var movies []Movie
if err = cursor.All(context.Background(), &movies); err != nil {
return nil, err
}
return movies, nil
}
```
Notice in the declaration of the function we are accepting the `EventInput` and we're returning a slice of `Movie` to the user.
When we first enter the function, we check to see if there was an error. Remember, the connection to the database could have failed, so we're catching it here.
Once again, for this example we're using the **sample_mflix** database and the **movies** collection. We're storing a reference to this in our `collection` variable.
Since we've chosen to accept user input and this input happens to be related to how queries are done, we are creating an options variable. One of our many possible options is the limit, so if we provide a limit, we should probably set it. Using the options, we execute a `Find` operation on the collection. To keep this example simple, our filter criteria is an empty map which will result in all documents from the collection being returned — of course, the maximum being whatever the limit was set to.
Rather than iterating through a cursor of the results in our function, we're choosing to do the `All` method to load the results into our `movies` slice.
Assuming there were no errors along the way, we return the result and AWS Lambda should present it as JSON.
We haven't uploaded our function yet!
## Building and packaging the AWS Lambda function with Golang
Since Go is a compiled programming language, you need to create a binary before uploading it to AWS Lambda. There are certain requirements that come with this job.
First, we need to worry about the compilation operating system and CPU architecture. AWS Lambda expects Linux and AMD64, so if you're using something else, you need to make use of the Go cross compiler.
For best results, execute the following command:
```bash
env GOOS=linux GOARCH=amd64 go build
```
The above command will build the project for the correct operating system and architecture regardless of what computer you're using.
Don't forget to add your binary file to a ZIP archive after it builds. In our example, the binary file should have a **lambdaexample** name unless you specify otherwise.
*(Screenshot: AWS Lambda MongoDB Go project)*
Within the AWS Lambda dashboard, upload your project and confirm that the handler and architecture are correct.
Before testing the function, don't forget to update your environment variables within AWS Lambda.
You can get your URI string from the MongoDB Atlas dashboard.
Once done, you can test everything using the "Test" tab of the AWS Lambda dashboard. Provide an optional "limit" in the "Event JSON" (for example, `{"limit": 5}`) and check the results for your movies!
## Conclusion
You just saw how to use MongoDB with AWS Lambda and the Go runtime! AWS makes it very easy to use Go for serverless functions and the Go driver for MongoDB makes it even easier to use with MongoDB.
As a further reading exercise, it is worth checking out the MongoDB Go Quick Start as well as some documentation around connection pooling in serverless functions. | md | {
"tags": [
"Go",
"AWS",
"Serverless"
],
"pageDescription": "In this tutorial, we'll see how to create a serverless function using the Go programming language and that function will connect to and query MongoDB Atlas in an efficient manner.",
"contentType": "Tutorial"
} | Interact with MongoDB in an AWS Lambda Function Using Go | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/mongodb/vector-search-hashicorp | created | # Leveraging Atlas Vector Search With HashiCorp Terraform: Empowering Semantic Search in Modern Applications
Last year, MongoDB announced the general availability of Atlas Vector Search, a new capability in Atlas that allows developers to search across data stored in MongoDB based on its semantic meaning using high dimensional vectors (i.e., “embeddings”) created by machine learning models.
This allows developers to build intelligent applications that can understand and process human language in a way traditional, text-based search methods cannot since they will only produce an exact match for the query.
For example, searching for “warm winter jackets” on an e-commerce website that only supports text-based search might return products with the exact match keywords "warm," "winter," and "jackets." Vector search, on the other hand, understands the semantic meaning of "warm winter jackets'' as apparel designed for cold temperatures. It retrieves items that are not only labeled as "winter jackets” but are specifically designed for warmth, including products that might be described with related terms like "insulated," giving users more helpful search results.
Integrating Atlas Vector Search with infrastructure-as-code (IaC) tools like HashiCorp Terraform can then streamline and optimize your development workflows, ensuring that sophisticated search capabilities are built directly into the infrastructure deployment process.
This guide will walk you through how to get started with Atlas Vector Search through our HashiCorp Terraform Atlas provider. Let’s get started!
### Pre-requisites
- Create a MongoDB Atlas account.
- Install HashiCorp Terraform on your terminal or sign up for a free Terraform Cloud account.
- Create MongoDB Atlas programmatic API keys and associate them with Terraform.
- Select an IDE of your choice. For this tutorial, we will be using VS Code.
## Step 1: Deploy Atlas dedicated cluster with Atlas Search Nodes
First, we need to deploy basic Atlas resources to get started. This includes an Atlas project, an M10 dedicated Atlas cluster (which is pay-as-you-go, great for development and low-traffic applications), a database user, and an IP Access List Entry.
**Note**: When configuring your MongoDB Atlas cluster with Terraform, it's important to restrict IP access to only the IP address from which the Terraform script will be deployed. This minimizes the risk of unauthorized access.
In addition, as part of this tutorial, we will be using Atlas Search Nodes (optional). These provide dedicated infrastructure for Atlas Search and Vector Search workloads, allowing you to fully scale search independent of database needs. Incorporating Search Nodes into your Atlas deployment allows for better performance at scale and delivers workload isolation, higher availability, and the ability to optimize resource usage.
Lastly, when using Terraform to manage infrastructure, it is recommended to maintain organized file management practices. Typically, your Terraform configurations/scripts will be written in files with the `.tf` extension, such as `main.tf`. This file, which we are using in this tutorial, contains the primary configuration details for deploying resources and should be located ideally in a dedicated project directory on your local machine or on Terraform Cloud.
See the below Terraform script as part of our `main.tf` file:
```
terraform {
required_providers {
mongodbatlas = {
source = "mongodb/mongodbatlas"
}
}
required_version = ">= 0.13"
}
resource "mongodbatlas_project" "exampleProject" {
name = "exampleProject"
org_id = "63234d3234ec0946eedcd7da"
}
resource "mongodbatlas_advanced_cluster" "exampleCluster" {
project_id = mongodbatlas_project.exampleProject.id
name = "ClusterExample"
cluster_type = "REPLICASET"
replication_specs {
region_configs {
electable_specs {
instance_size = "M10"
node_count = 3
}
provider_name = "AWS"
priority = 7
region_name = "US_EAST_1"
}
}
}
resource "mongodbatlas_search_deployment" "exampleSearchNode" {
project_id = mongodbatlas_project.exampleProject.id
cluster_name = mongodbatlas_advanced_cluster.exampleCluster.name
specs = [
{
instance_size = "S20_HIGHCPU_NVME"
node_count = 2
}
]
}
resource "mongodbatlas_database_user" "testUser" {
username = "username123"
password = "password-test123"
project_id = mongodbatlas_project.exampleProject.id
auth_database_name = "admin"
roles {
role_name = "readWrite"
database_name = "dbforApp"
}
}
resource "mongodbatlas_project_ip_access_list" "test" {
project_id = mongodbatlas_project.exampleProject.id
ip_address = "174.218.210.1"
}
```
**Note**: Before deploying, be sure to store your MongoDB Atlas programmatic API keys created as part of the prerequisites as environment variables. To deploy, you can use the below commands from the terminal:
```
terraform init
terraform plan
terraform apply
```
## Step 2: Create your collections with vector data
For this tutorial, you can create your own collection of vectorized data if you have data to use.
Alternatively, you can use our sample data, which is great for testing purposes. The collection to use is "sample_mflix.embedded_movies," which already has embeddings generated by OpenAI.
To use sample data, from the Atlas UI, go into the Atlas cluster Overview page and select “Atlas Search” at the top of the menu presented.
Then, click “Load a Sample Dataset.”
## Step 3: Add vector search index in Terraform configuration
Now, head back over to Terraform and create an Atlas Search index with type “vectorSearch.” If you are using the sample data, also include a reference to the database “sample_mflix” and the collection “embedded_movies.”
Lastly, you will need to set the “fields” parameter as per our example below. See our documentation to learn more about how to index fields for vector search and the associated required parameters.
```
resource "mongodbatlas_search_index" "test-basic-search-vector" {
name = "test-basic-search-index"
project_id = mongodbatlas_project.exampleProject.id
cluster_name = mongodbatlas_advanced_cluster.exampleCluster.name
type = "vectorSearch"
database = "sample_mflix"
collection_name = "embedded_movies"
fields = <<-EOF
[{
"type": "vector",
"path": "plot_embedding",
"numDimensions": 1536,
"similarity": "euclidean"
}]
EOF
}
```
To deploy again, you can use the below commands from the terminal:
```
terraform init
terraform plan
terraform apply
```
If your deployment was successful, you should be greeted with “Apply complete!”
*(Screenshot: Terraform in the terminal showcasing the deployment)*
To confirm, you should be able to see your newly created Atlas Search index resource in the Atlas UI with Index Type “vectorSearch” and Status as “ACTIVE.”
## Step 4: Get connection string and connect to the MongoDB Shell to begin Atlas Vector Search queries
While still in the Atlas UI, go back to the homepage, click “Connect” on your Atlas cluster, and select “Shell.”
This will generate your connection string which you can use in the MongoDB Shell to connect to your Atlas cluster.
### All done
Congratulations! You have everything that you need now to run your first Vector Search queries.
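If you'd rather script that first query than type it into mongosh, here is a rough PyMongo sketch. It is not part of the Terraform setup above, and the query vector below is a placeholder: in practice, you would generate a 1536-dimension embedding of your search text with the same OpenAI model (text-embedding-ada-002) that produced the sample data's plot embeddings.

```python
from pymongo import MongoClient

client = MongoClient("<your Atlas connection string>")
collection = client["sample_mflix"]["embedded_movies"]

query_vector = [0.01] * 1536  # placeholder; replace with a real embedding of your query text

results = collection.aggregate([
    {
        "$vectorSearch": {
            "index": "test-basic-search-index",  # the index created in Step 3
            "path": "plot_embedding",
            "queryVector": query_vector,
            "numCandidates": 100,
            "limit": 5,
        }
    },
    {"$project": {"title": 1, "plot": 1, "score": {"$meta": "vectorSearchScore"}}},
])

for doc in results:
    print(doc["title"], doc["score"])
```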
With the above steps, teams can leverage Atlas Vector Search indexes and dedicated Search Nodes for the Terraform MongoDB Atlas provider to build a retrieval-augmented generation, semantic search, or recommendation system with ease.
The HashiCorp Terraform Atlas provider is open-sourced under the Mozilla Public License v2.0 and we welcome community contributions. To learn more, see our contributing guidelines.
The fastest way to get started is to create a MongoDB Atlas account from the AWS Marketplace or Google Cloud Marketplace. To learn more about the Terraform provider, check out the documentation, solution brief, and tutorials, or get started today.
Go build with MongoDB Atlas and the HashiCorp Terraform Atlas provider today!
| md | {
"tags": [
"MongoDB",
"Terraform"
],
"pageDescription": "Learn how to leverage Atlas Vector Search with HashiCorp Terraform in this tutorial.",
"contentType": "Tutorial"
} | Leveraging Atlas Vector Search With HashiCorp Terraform: Empowering Semantic Search in Modern Applications | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/products/atlas/quickstart-vectorsearch-mongodb-python | created | # Quick Start 2: Vector Search With MongoDB and OpenAI
This quick start will guide you through how to perform vector search using MongoDB Atlas and OpenAI API.
**Code (Python notebook)**: View on Github or Open in Colab
### What you will learn
- Creating a vector index on Atlas
- Performing vector search using OpenAI embeddings
### Pre-requisites
- A free Atlas account — create one now!
- A Python Jupyter notebook environment — we recommend Google Colab. It is a free, cloud-based environment and very easy to get up and running.
### Suggested
You may find this quick start helpful in getting Atlas and a Python client running:
Getting Started with MongoDB Atlas and Python.
### Vector search: beyond keyword matching
In the realm of information retrieval, keyword search has long been the standard. This method involves matching exact words within texts to find relevant information. For instance, if you're trying to locate a film but can only recall that its title includes the word "battle," a keyword search enables you to filter through content to find matches.
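To make that concrete, here is a tiny Python sketch of what literal keyword matching boils down to (the titles come from the sample dataset used later in this guide):

```python
titles = ["V: The Final Battle", "Falling Skies", "Starship Troopers"]

# Keyword search: only titles containing the literal word "battle" match.
matches = [t for t in titles if "battle" in t.lower()]
print(matches)  # ['V: The Final Battle']
```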
However, what if your memory of a movie is vague, limited to a general plot or theme rather than specific titles or keywords? This is where vector search steps in, revolutionizing how we find information. Unlike keyword search, vector search delves into the realm of **semantics**, allowing for the retrieval of content **based on the meanings behind the words**.
Consider you're trying to find a movie again, but this time, all you remember is a broad plot description like "humans fight aliens." Traditional search methods might leave you combing through endless irrelevant results. Vector search, however, uses advanced algorithms to understand the contextual meaning of your query, capable of guiding you to movies that align with your description — such as "Terminator" — even if the exact words aren't used in your search terms.
## Big picture
Let's understand how all the pieces fit together.
We are going to use the **embedded_movies** collection in the Atlas sample data. This one **already has embeddings calculated** for plots, making our lives easier.
Here is how it all works. When a semantic search query is issued (e.g., "fatalistic sci-fi movies"):
- Steps 1 and 2: **We call the OpenAI API to get embeddings** for the query text.
- Step 3: Send the **embedding to Atlas** to perform a vector search.
- Step 4: **Atlas returns relevant search results using Vector Search**.
Here is a visual:
## Understanding embeddings
Embeddings are an interesting way of transforming different types of data — whether it's text, images, audio, or video — into a numerical format, specifically, into an array known as a “vector.” This conversion allows the data to be processed and understood by machines.
Take text data as an example: Words can be converted into numbers, with each unique word assigned its own distinct numerical value. These numerical representations can vary in size, ranging anywhere from 128 to 4096 elements.
However, what sets embeddings apart is their ability to capture more than just random sequences of numbers. They actually preserve some of the inherent meaning of the original data. For instance, words that share similar meanings tend to have embeddings that are closer together in the numerical space.
To illustrate, consider a simplified scenario where we plot the embeddings of several words on a two-dimensional graph for easier visualization. Though in practice, embeddings can span many dimensions (from 128 to 4096), this example helps clarify the concept. On the graph, you'll notice that items with similar contexts or meanings — like different types of fruits or various pets — are positioned closer together. This clustering is a key strength of embeddings, highlighting their ability to capture and reflect the nuances of meaning and similarity within the data.
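To see what "closer together" means in practice, here is a minimal sketch that compares toy vectors with cosine similarity. The numbers are made up purely for illustration; real embeddings have hundreds or thousands of dimensions.

```python
import numpy as np

def cosine_similarity(a, b):
    a, b = np.array(a), np.array(b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy two-dimensional "embeddings" (illustrative values only)
apple = [0.9, 0.1]
orange = [0.8, 0.2]
dog = [0.1, 0.9]

print(cosine_similarity(apple, orange))  # high score: related concepts
print(cosine_similarity(apple, dog))     # lower score: unrelated concepts
```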
## How to create embeddings
So, how do we go about creating these useful embeddings? Thankfully, there's a variety of embedding models out there designed to transform your text, audio, or video data into meaningful numerical representations.
Some of these models are **proprietary**, meaning they are owned by certain companies and accessible **mainly through their APIs**. OpenAI is a notable example of a provider offering such models.
There are also **open-source models** available. These can be freely downloaded and operated on your own computer. Whether you opt for a proprietary model or an open-source option depends on your specific needs and resources.
Hugging Face's embedding model leaderboard is a great place to start looking for embedding models. They periodically test available embedding models and rank them according to various criteria.
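If you'd like to experiment with an open-source model locally, here is a minimal sketch using the sentence-transformers library. The model name is just one popular example, not a requirement of this guide. Note that its output dimension (384) differs from the 1536-dimension OpenAI embeddings used by the sample dataset, so you can't mix the two in the same index.

```python
# pip install sentence-transformers
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # example open-source embedding model
embedding = model.encode("humans fighting aliens")
print(len(embedding))  # 384 dimensions for this particular model
```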
You can read more about embeddings:
- Explore some of the embedding choices: RAG Series Part 1: How to Choose the Right Embedding Model for Your Application, by Apoorva Joshi
- The Beginner’s Guide to Text Embeddings
- Getting Started With Embeddings
## Step 1: Setting up Atlas in the cloud
Here is a quick guide adopted from the official documentation. Refer to the documentation for full details.
### Create a free Atlas account
Sign up for Atlas and log into your account.
### Create a free instance
- You can choose any cloud instance.
- Choose the “FREE” tier, so you won't incur any costs.
- Follow the setup wizard and give your instance a name.
- Note your username and password to connect to the instance.
- Configuring IP access: Add 0.0.0.0/0 to the IP access list. This makes it available to connect from Google Colab. (Note: This makes the instance available from any IP address, which is okay for a test instance). See the screenshot below for how to add the IP:
### Load sample data
Next, we'll load the default sample datasets in Atlas, which may take a few minutes.
### View sample data
In the Atlas UI, explore the **embedded_movies** collection within the **sample_mflix** database to view document details like title, year, and plot.
### Inspect embeddings
Fortunately, the **sample_mflix.embedded_movies** dataset already includes vector embeddings for plots, generated with OpenAI's **text-embedding-ada-002** model. By inspecting the **plot_embedding** attribute in the Atlas UI, as shown in the screenshot below, you'll find it comprises an array of 1536 numbers.
Congrats! You now have an Atlas cluster, with some sample data. 👏
## Step 2: Create Atlas index
Before we can run a vector search, we need to create a vector index. Creating an index allows Atlas to execute queries faster. Here is how to create a vector index.
### Navigate to the Atlas Vector Search UI
### Choose “Create a Vector Search Index”
### Create a vector index as follows
Let's define a vector index as below. Here is what the parameters mean.
- **"type": "vector"** — This indicates we are defining a vector index.
- **"path": "plot_embedding"** — This is the attribute we are indexing — in our case, the embedding data of plot.
- **"numDimensions": 1536** — This indicates the dimension of the embedding field. This has to match the embedding model we have used (in our case, the OpenAI model).
- **"similarity": "dotProduct"** — Finally, we are defining the matching algorithm to be used by the vector index. The choices are **euclidean**, **cosine**, and **dotProduct**. You can read more about these choices in How to Index Fields for Vector Search.
Index name: **idx_plot_embedding**
Index definition
```
{
"fields":
{
"type": "vector",
"path": "plot_embedding",
"numDimensions": 1536,
"similarity": "dotProduct"
}
]
}
```
*(Figure 11: Creating a vector index)*
Wait until the index is ready to be used
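If you prefer to script this step rather than use the UI, recent versions of PyMongo can create the same index programmatically. This is an optional alternative, not part of the flow above; it assumes PyMongo 4.7 or later (for the `type="vectorSearch"` option) and the connection string you'll gather in Step 3.

```python
# Optional: create the vector index from Python instead of the Atlas UI.
# Assumes PyMongo 4.7+ and your ATLAS_URI connection string (see Step 3).
from pymongo import MongoClient
from pymongo.operations import SearchIndexModel

client = MongoClient("<your ATLAS_URI>")
collection = client["sample_mflix"]["embedded_movies"]

index_model = SearchIndexModel(
    name="idx_plot_embedding",
    type="vectorSearch",
    definition={
        "fields": [
            {
                "type": "vector",
                "path": "plot_embedding",
                "numDimensions": 1536,
                "similarity": "dotProduct",
            }
        ]
    },
)
collection.create_search_index(model=index_model)  # the index builds asynchronously
```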
## Step 3: Configuration
We will start by setting the following configuration parameters:
- Atlas connection credentials — see below for a step-by-step guide.
- OpenAI API key — get it from the OpenAI dashboard.
Here is how you get the **ATLAS_URI** setting.
- Navigate to the Atlas UI.
- Select your database.
- Choose the “Connect” option to proceed.
- Within the connect section, click on “Drivers” to view connection details.
- Finally, copy the displayed ATLAS_URI value for use in your application's configuration.
See these screenshots as guidance.
## On to code
Now, let's look at the code. We will walk through and execute the code step by step. You can also access the fully functional Python notebook at the beginning of this guide.
Start by setting up configurations for **ATLAS_URI** and **OPENAI_API_KEY**.
(Run this code block in your Google Colab under Step 3.)
```
# We will keep all global variables in an object to not pollute the global namespace.
class MyConfig(object):
pass
MY_CONFIG = MyConfig()
MY_CONFIG.ATLAS_URI = "Enter your Atlas URI value here" ## TODO
MY_CONFIG.OPENAI_API_KEY = "Enter your OpenAI API Key here" ## TODO
```
Pro tip 💡
We will keep all global variables in an object called **MY_CONFIG** so as not to pollute the global namespace. **MyConfig** is just a placeholder class to hold our variables and settings.
## Step 4: Install dependencies
Let's install the dependencies required. We are installing two packages:
- **pymongo**: Python library to connect to MongoDB Atlas instances
- **openai**: For calling the OpenAI library
(Run this code block in your Google Colab under Step 4.)
```
!pip install openai==1.13.3 pymongo==4.6.2
```
Pro tip 💡
You will notice that we are specifying a version (openai==1.13.3) for packages we are installing. This ensures the versions we are installing are compatible with our code. This is a good practice and is called **version pinning** or **freezing**.
## Step 5: AtlasClient and OpenAIClient
### AtlasClient
This class handles establishing connections, running queries, and performing a vector search on MongoDB Atlas.
(Run this code block in your Google Colab under Step 5.)
```
from pymongo import MongoClient
class AtlasClient ():
def __init__ (self, atlas_uri, dbname):
self.mongodb_client = MongoClient(atlas_uri)
self.database = self.mongodb_client[dbname]
## A quick way to test if we can connect to Atlas instance
def ping (self):
self.mongodb_client.admin.command('ping')
def get_collection (self, collection_name):
collection = self.database[collection_name]
return collection
def find (self, collection_name, filter = {}, limit=10):
collection = self.database[collection_name]
items = list(collection.find(filter=filter, limit=limit))
return items
# https://www.mongodb.com/docs/atlas/atlas-vector-search/vector-search-stage/
def vector_search(self, collection_name, index_name, attr_name, embedding_vector, limit=5):
collection = self.database[collection_name]
results = collection.aggregate([
{
'$vectorSearch': {
"index": index_name,
"path": attr_name,
"queryVector": embedding_vector,
"numCandidates": 50,
"limit": limit,
}
},
## We are extracting 'vectorSearchScore' here
## columns with 1 are included, columns with 0 are excluded
{
"$project": {
'_id' : 1,
'title' : 1,
'plot' : 1,
'year' : 1,
"search_score": { "$meta": "vectorSearchScore" }
}
}
])
return list(results)
def close_connection(self):
self.mongodb_client.close()
```
**Initializing class**:
The constructor (`__init__`) function takes two arguments:
- ATLAS URI (that we obtained from settings)
- Database to connect
**Ping**:
This is a handy method to test if we can connect to Atlas.
**find**
This is the “search” function. We specify the collection to search and any search criteria using filters.
**vector_search**
This is a key function that performs vector search on MongoDB Atlas. It takes the following parameters:
- collection_name: **embedded_movies**
- index_name: **idx_plot_embedding**
- attr_name: **"plot_embedding"**
- embedding_vector: Embeddings returned from the OpenAI API call
- limit: How many results to return
The **$project** section extracts the attributes we want to return as search results.
(This code block is for review purposes. No need to execute.)
```
results = collection.aggregate([
{
'$vectorSearch': {
"index": index_name,
"path": attr_name,
"queryVector": embedding_vector,
"numCandidates": 50,
"limit": limit,
}
},
## We are extracting 'vectorSearchScore' here
## columns with 1 are included, columns with 0 are excluded
{
"$project": {
'_id' : 1,
'title' : 1,
'plot' : 1,
'year' : 1,
"search_score": { "$meta": "vectorSearchScore" }
}
}
])
```
Also, note this line:
```
"search_score": { "$meta": "vectorSearchScore" }
```
This particular line extracts the search score of the vector search. The search score ranges from 0.0 to 1.0. Scores close to 1.0 are a great match.
### OpenAI client
This is a handy class for OpenAI interaction.
(Run this code block in your Google Colab under Step 5.)
```
from openai import OpenAI
class OpenAIClient():
def __init__(self, api_key) -> None:
self.client = OpenAI(
api_key= api_key, # defaults to os.environ.get("OPENAI_API_KEY")
)
# print ("OpenAI Client initialized!")
def get_embedding(self, text: str, model="text-embedding-ada-002") -> list[float]:
text = text.replace("\n", " ")
resp = self.client.embeddings.create (
input=[text],
model=model )
return resp.data[0].embedding
```
**Initializing class**:
This class is initialized with the OpenAI API key.
**get_embedding method**:
- **text**: This is the text we are trying to get embeddings for.
- **model**: This is the embedding model. Here we are specifying the model **text-embedding-ada-002** because this is the model that is used to create embeddings in our sample data. So we want to use the same model to encode our query string.
## Step 6: Connect to Atlas
Initialize the Atlas client and do a quick connectivity test. We are connecting to the **sample_mflix** database and the **embedded_movies** collection. This dataset is loaded as part of the setup (Step 1).
If everything goes well, the connection will succeed.
(Run this code block in your Google Colab under Step 6.)
```
MY_CONFIG.DB_NAME = 'sample_mflix'
MY_CONFIG.COLLECTION_NAME = 'embedded_movies'
MY_CONFIG.INDEX_NAME = 'idx_plot_embedding'
atlas_client = AtlasClient (MY_CONFIG.ATLAS_URI, MY_CONFIG.DB_NAME)
atlas_client.ping()
print ('Connected to Atlas instance! We are good to go!')
```
***Troubleshooting***
If you get a “connection failed” error, make sure **0.0.0.0/0** is added as an allowed IP address to connect (see Step 1).
## Step 7: Initialize the OpenAI client
Initialize the OpenAI client with the OpenAI API key.
(Run this code block in your Google Colab under Step 7.)
```
openAI_client = OpenAIClient (api_key=MY_CONFIG.OPENAI_API_KEY)
print ("OpenAI client initialized")
```
## Step 8: Let's do a vector search!
Now that we have everything set up, let's do a vector search! We are going to query movie plots, not just based on keywords but also meaning. For example, we will search for movies where the plot is "humans fighting aliens."
This function takes one argument: **query** string.
1. We convert the **query into embeddings**. We do this by calling the OpenAI API. We also time the API call (t1b - t1a) so we understand the network latencies.
2. We send the embeddings (we just got back from OpenAI) to Atlas to **perform a vector search** and get the results.
3. We are printing out the results returned by the vector search.
(Run this code block in your Google Colab under Step 8.)
```
import time
# Handy function
def do_vector_search (query:str) -> None:
query = query.lower().strip() # cleanup query string
print ('query: ', query)
# call openAI API to convert text into embedding
t1a = time.perf_counter()
embedding = openAI_client.get_embedding(query)
t1b = time.perf_counter()
print (f"Getting embeddings from OpenAI took {(t1b-t1a)*1000:,.0f} ms")
# perform a vector search on Atlas
# using embeddings (returned from OpenAI above)
t2a = time.perf_counter()
movies = atlas_client.vector_search(collection_name=MY_CONFIG.COLLECTION_NAME, index_name=MY_CONFIG.INDEX_NAME, attr_name='plot_embedding', embedding_vector=embedding,limit=10 )
t2b = time.perf_counter()
# and printing out the results
print (f"Altas query returned {len (movies)} movies in {(t2b-t2a)*1000:,.0f} ms")
print()
for idx, movie in enumerate (movies):
print(f'{idx+1}\nid: {movie["_id"]}\ntitle: {movie["title"]},\nyear: {movie["year"]}' +
f'\nsearch_score(meta):{movie["search_score"]}\nplot: {movie["plot"]}\n')
```
### First query
Here is our first query. We want to find movies where the plot is about "humans fighting aliens."
(Run this code block in your Google Colab under Step 8.)
```
query="humans fighting aliens"
do_vector_search (query=query)
```
We will see search results like this:
```
query: humans fighting aliens
using cached embeddings
Atlas query returned 10 movies in 138 ms
1
id: 573a1398f29313caabce8f83
title: V: The Final Battle,
year: 1984
search_score(meta):0.9573556184768677
plot: A small group of human resistance fighters fight a desperate guerilla war against the genocidal extra-terrestrials who dominate Earth.
2
id: 573a13c7f29313caabd75324
title: Falling Skies,
year: 2011
search_score(meta):0.9550596475601196
plot: Survivors of an alien attack on earth gather together to fight for their lives and fight back.
3
id: 573a139af29313caabcf0cff
title: Starship Troopers,
year: 1997
search_score(meta):0.9523435831069946
plot: Humans in a fascistic, militaristic future do battle with giant alien bugs in a fight for survival.
...
year: 2002
search_score(meta):0.9372057914733887
plot: A young woman from the future forces a local gunman to help her stop an impending alien invasion which will wipe out the human race.
```
***Note the score***
In addition to movie attributes (title, year, plot, etc.), we are also displaying search_score. This is a meta attribute — not really part of the movies collection but generated as a result of the vector search.
This is a number between 0 and 1. Values closer to 1 represent a better match. The results are sorted from best match down (closer to 1 first). Read more about search score.
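For instance, once you have results back, you could keep only the strongest matches client-side. This small sketch assumes the `atlas_client` and `openAI_client` objects from Steps 5 to 7 are already initialized; the 0.93 cutoff is arbitrary and only for illustration.

```python
movies = atlas_client.vector_search(
    collection_name=MY_CONFIG.COLLECTION_NAME,
    index_name=MY_CONFIG.INDEX_NAME,
    attr_name="plot_embedding",
    embedding_vector=openAI_client.get_embedding("humans fighting aliens"),
    limit=10,
)

# Keep only strong matches (arbitrary 0.93 threshold, for illustration only)
strong_matches = [m for m in movies if m["search_score"] >= 0.93]
for m in strong_matches:
    print(f'{m["title"]} ({m["search_score"]:.3f})')
```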
***Troubleshooting***
No search results?
Make sure the vector search index is defined and active (Step 2)!
### Sample Query 2
(Run this code block in your Google Colab under Step 8.)
```
query="relationship drama between two good friends"
do_vector_search (query=query)
```
Sample results will look like the following:
```
query: relationship drama between two good friends
using cached embeddings
Atlas query returned 10 movies in 71 ms
1
id: 573a13a3f29313caabd0dfe2
title: Dark Blue World,
year: 2001
search_score(meta):0.9380425214767456
plot: The friendship of two men becomes tested when they both fall for the same woman.
2
id: 573a13a3f29313caabd0e14b
title: Dark Blue World,
year: 2001
search_score(meta):0.9380425214767456
plot: The friendship of two men becomes tested when they both fall for the same woman.
3
id: 573a1399f29313caabcec488
title: Once a Thief,
year: 1991
search_score(meta):0.9260045289993286
plot: A romantic and action packed story of three best friends, a group of high end art thieves, who come into trouble when a love-triangle forms between them.
...
year: 1987
search_score(meta):0.9181452989578247
plot: A modern day Romeo & Juliet story is told in New York when an Italian boy and a Chinese girl become lovers, causing a tragic conflict between ethnic gangs.
```
## Conclusion
There we go! We have successfully performed a vector search combining Atlas and the OpenAI API.
To summarize, in this quick start, we have accomplished the following:
- Set up Atlas in the cloud
- Loaded sample data into our Atlas cluster
- Set up a vector search index
- Performed a vector search using OpenAI embeddings and Atlas
As we can see, **vector search** is very powerful as it can fetch results based on the semantic meaning of search terms instead of just keyword matching. Vector search allows us to build more powerful applications.
## Next steps
Here are some suggested resources for you to explore:
- Atlas Vector Search Explained in 3 Minutes
- Audio Find - Atlas Vector Search for Audio
- The MongoDB community forums — a great place to ask questions and get help from fellow developers!
| md | {
"tags": [
"Atlas",
"Python",
"AI"
],
"pageDescription": "This quick start will guide you through how to perform vector search using MongoDB Atlas and OpenAI API. ",
"contentType": "Quickstart"
} | Quick Start 2: Vector Search With MongoDB and OpenAI | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/products/realm/realm-flex-sync-tutorial | created | # Using Realm Flexible Sync in Your App—an iOS Tutorial
## Introduction
In January 2022, we announced the release of the Realm Flexible Sync preview—an opportunity for developers to take it for a spin and give us feedback. Flexible Sync is now Generally Available as part of MongoDB Atlas Device Sync. That article provided an overview of the benefits of flexible sync and how it works. TL;DR: You typically don't want to sync the entire backend database to every device—whether for capacity or security concerns. Flexible Sync lets the developer provide queries to control exactly what the mobile app asks to sync, together with backend rules to ensure users can only access the data that they're entitled to.
This post builds on that introduction by showing how to add flexible sync to the RChat mobile app. I'll show how to configure the backend Atlas app, and then what code needs adding to the mobile app.
Everything you see in this tutorial can be found in the flex-sync branch of the RChat repo.
## Prerequisites
- Xcode 13.2+
- iOS 15+
- Realm-Swift 10.32.0+
- MongoDB 5.0+
## The RChat App
RChat is a messaging app. Users can add other users to a chat room and then share messages, images, and location with each other.
All of the user and chat message data is shared between instances of the app via Atlas Device Sync.
There's a common Atlas backend app. There are frontend apps for iOS and Android. This post focuses on the backend and the iOS app.
## Configuring the Realm Backend App
The backend app contains a lot of functionality that isn't connected to the sync functionality, and so I won't cover that here. If you're interested, then check out the original RChat series.
As a starting point, you can install the app. I'll then explain the parts connected to Atlas Device Sync.
### Import the Backend Atlas App
1. If you don't already have one, create a MongoDB Atlas Cluster, keeping the default name of `Cluster0`. The Atlas cluster must be running MongoDB 5.0 or later.
2. Install the Realm CLI and create an API key pair.
3. Download the repo and install the Atlas app:
```bash
git clone https://github.com/ClusterDB/RChat.git
git checkout flex-sync
cd RChat/RChat-Realm/RChat
realm-cli login --api-key <your-public-api-key> --private-api-key <your-private-api-key>
realm-cli import # Then answer prompts, naming the app RChat
```
4. From the Atlas UI, click on the "App Services" tab and you will see the RChat app. Open it and copy the App Id. You'll need to use this before building the iOS app.
### How Flexible Sync is Enabled in the Back End
#### Schema
The schema represents how the data will be stored in MongoDB Atlas **and** what the Swift (and Kotlin) model classes must contain.
Each collection/class requires a schema. If you enable the "Developer Mode" option, then Atlas will automatically define the schema based on your Swift or Kotlin model classes. In this case, your imported `App` includes the schemas, and so developer mode isn't needed. You can view the schemas by browsing to the "Schema" section in the Atlas UI:
You can find more details about the schema/model in Building a Mobile Chat App Using Realm – Data Architecture, but note that for flexible sync (as opposed to the original partition-based sync), the `partition` field has been removed.
We're interested in the schema for three collections/model-classes:
**User:**
```json
{
"bsonType": "object",
"properties": {
"_id": {
"bsonType": "string"
},
"conversations": {
"bsonType": "array",
"items": {
"bsonType": "object",
"properties": {
"displayName": {
"bsonType": "string"
},
"id": {
"bsonType": "string"
},
"members": {
"bsonType": "array",
"items": {
"bsonType": "object",
"properties": {
"membershipStatus": {
"bsonType": "string"
},
"userName": {
"bsonType": "string"
}
},
"required":
"membershipStatus",
"userName"
],
"title": "Member"
}
},
"unreadCount": {
"bsonType": "long"
}
},
"required": [
"unreadCount",
"id",
"displayName"
],
"title": "Conversation"
}
},
"lastSeenAt": {
"bsonType": "date"
},
"presence": {
"bsonType": "string"
},
"userName": {
"bsonType": "string"
},
"userPreferences": {
"bsonType": "object",
"properties": {
"avatarImage": {
"bsonType": "object",
"properties": {
"_id": {
"bsonType": "string"
},
"date": {
"bsonType": "date"
},
"picture": {
"bsonType": "binData"
},
"thumbNail": {
"bsonType": "binData"
}
},
"required": [
"_id",
"date"
],
"title": "Photo"
},
"displayName": {
"bsonType": "string"
}
},
"required": [],
"title": "UserPreferences"
}
},
"required": [
"_id",
"userName",
"presence"
],
"title": "User"
}
```
`User` documents/objects represent users of the app.
**Chatster:**
```json
{
"bsonType": "object",
"properties": {
"_id": {
"bsonType": "string"
},
"avatarImage": {
"bsonType": "object",
"properties": {
"_id": {
"bsonType": "string"
},
"date": {
"bsonType": "date"
},
"picture": {
"bsonType": "binData"
},
"thumbNail": {
"bsonType": "binData"
}
},
"required": [
"_id",
"date"
],
"title": "Photo"
},
"displayName": {
"bsonType": "string"
},
"lastSeenAt": {
"bsonType": "date"
},
"presence": {
"bsonType": "string"
},
"userName": {
"bsonType": "string"
}
},
"required": [
"_id",
"presence",
"userName"
],
"title": "Chatster"
}
```
`Chatster` documents/objects represent a read-only subset of instances of `User` documents. `Chatster` is needed because there's a subset of `User` data that we want to make accessible to all users. E.g., I want everyone to be able to see my username, presence status, and avatar image, but I don't want them to see which chat rooms I'm a member of.
Device Sync lets you control which users can sync which documents. When this article was first published, you couldn't sync just a subset of a document's fields. That's why `Chatster` was needed. At some point, I can remove `Chatster` from the app.
**ChatMessage:**
```json
{
"bsonType": "object",
"properties": {
"_id": {
"bsonType": "string"
},
"author": {
"bsonType": "string"
},
"authorID": {
"bsonType": "string"
},
"conversationID": {
"bsonType": "string"
},
"image": {
"bsonType": "object",
"properties": {
"_id": {
"bsonType": "string"
},
"date": {
"bsonType": "date"
},
"picture": {
"bsonType": "binData"
},
"thumbNail": {
"bsonType": "binData"
}
},
"required": [
"_id",
"date"
],
"title": "Photo"
},
"location": {
"bsonType": "array",
"items": {
"bsonType": "double"
}
},
"text": {
"bsonType": "string"
},
"timestamp": {
"bsonType": "date"
}
},
"required": [
"_id",
"authorID",
"conversationID",
"text",
"timestamp"
],
"title": "ChatMessage"
}
```
There's a `ChatMessage` document object for every message sent to any chat room.
#### Flexible Sync Configuration
You can view and edit the sync configuration by browsing to the "Sync" section of the Atlas UI:
*(Screenshot: Enabling Atlas Flexible Device Sync in the Atlas UI)*
For this deployment, I've selected the Atlas cluster to use. **That cluster must be running MongoDB 5.0 or later**.
You must specify which fields the mobile app can use in its sync filter queries. Without this, you can't refer to those fields in your sync queries or permissions. You are currently limited to 10 fields.
Scrolling down, you can see the sync permissions:
The UI has flattened the permissions JSON document; here's a version that's easier to read:
```json
{
"rules": {
"User":
{
"name": "anyone",
"applyWhen": {},
"read": {
"_id": "%%user.id"
},
"write": {
"_id": "%%user.id"
}
}
],
"Chatster": [
{
"name": "anyone",
"applyWhen": {},
"read": true,
"write": false
}
],
"ChatMessage": [
{
"name": "anyone",
"applyWhen": {},
"read": true,
"write": {
"authorID": "%%user.id"
}
}
]
},
"defaultRoles": [
{
"name": "all",
"applyWhen": {},
"read": {},
"write": {}
}
]
}
```
The `rules` component contains a sub-document for each of our collections. Each of those sub-documents contain an array of roles. Each role contains:
- The `name` of the role, this should be something that helps other developers understand the purpose of the role (e.g., "admin," "owner," "guest").
- `applyWhen`, which defines whether the requesting user matches the role or not. Each of our collections have a single role, and so `applyWhen` is set to `{}`, which always evaluates to true.
- A read rule—how to decide whether this user can view a given document. This is where our three collections impose different rules:
- A user can read and write to their own `User` object. No one else can read or write to it.
- Anyone can read any `Chatster` document, but no one can write to them. Note that these documents are maintained by database triggers to keep them consistent with their associated `User` document.
- The author of a `ChatMessage` is allowed to write to it. Anyone can read any `ChatMessage`. Ideally, we'd restrict it to just members of the chat room, but permissions don't currently support arrays—this is another feature that I'm keen to see added.
## Adding Flexible Sync to the iOS App
As with the back end, the iOS app is too big to cover in its entirety in this post. I'll explain how to build and run the app and then go through the components relevant to Flexible Sync.
### Configure, Build, and Run the RChat iOS App
You've already downloaded the repo containing the iOS app, but you need to change directory before opening and running the app:
```bash
cd ../../RChat-iOS
open RChat.xcodeproj
```
Update `RChatApp.swift` with your App Id (you copied that from the Atlas UI when configuring your backend app). In Xcode, select your device or simulator before building and running the app (⌘R). Select a second device or simulator and run the app a second time (⌘R).
On each device, provide a username and password and select the "Register new user" checkbox:
*(iOS screenshot of registering a new user through the RChat app)*
Once registered and logged in on both devices, you can create a new chat room, invite your second user, and start sharing messages and photos. To share location, you first need to enable it in the app's settings.
### Key Pieces of the iOS App Code
#### The Model
You've seen the schemas that were defined for the "User," "Chatster," and "ChatMessage" collections in the backend Atlas app. Each of those collections has an associated Realm `Object` class in the iOS app. Sub-documents map to embedded classes that derive from `EmbeddedObject`.
Let's take a close look at each of these classes:
**User Class**
``` swift
class User: Object, ObjectKeyIdentifiable {
@Persisted(primaryKey: true) var _id = UUID().uuidString
@Persisted var userName = ""
@Persisted var userPreferences: UserPreferences?
@Persisted var lastSeenAt: Date?
@Persisted var conversations = List<Conversation>()
@Persisted var presence = "On-Line"
}
class UserPreferences: EmbeddedObject, ObjectKeyIdentifiable {
@Persisted var displayName: String?
@Persisted var avatarImage: Photo?
}
class Photo: EmbeddedObject, ObjectKeyIdentifiable {
@Persisted var _id = UUID().uuidString
@Persisted var thumbNail: Data?
@Persisted var picture: Data?
@Persisted var date = Date()
}
class Conversation: EmbeddedObject, ObjectKeyIdentifiable {
@Persisted var id = UUID().uuidString
@Persisted var displayName = ""
@Persisted var unreadCount = 0
@Persisted var members = List<Member>()
}
class Member: EmbeddedObject, ObjectKeyIdentifiable {
@Persisted var userName = ""
@Persisted var membershipStatus = "User added, but invite pending"
}
```
**Chatster Class**
```swift
class Chatster: Object, ObjectKeyIdentifiable {
@Persisted(primaryKey: true) var _id = UUID().uuidString // This will match the _id of the associated User
@Persisted var userName = ""
@Persisted var displayName: String?
@Persisted var avatarImage: Photo?
@Persisted var lastSeenAt: Date?
@Persisted var presence = "Off-Line"
}
class Photo: EmbeddedObject, ObjectKeyIdentifiable {
@Persisted var _id = UUID().uuidString
@Persisted var thumbNail: Data?
@Persisted var picture: Data?
@Persisted var date = Date()
}
```
**ChatMessage Class**
```swift
class ChatMessage: Object, ObjectKeyIdentifiable {
@Persisted(primaryKey: true) var _id = UUID().uuidString
@Persisted var conversationID = ""
@Persisted var author: String? // username
@Persisted var authorID: String
@Persisted var text = ""
@Persisted var image: Photo?
@Persisted var location = List<Double>()
@Persisted var timestamp = Date()
}
class Photo: EmbeddedObject, ObjectKeyIdentifiable {
@Persisted var _id = UUID().uuidString
@Persisted var thumbNail: Data?
@Persisted var picture: Data?
@Persisted var date = Date()
}
```
#### Accessing Synced Realm Data
Any iOS app that wants to sync Realm data needs to create a Realm `App` instance, providing the Realm App ID so that the Realm SDK can connect to the backend Realm app:
```swift
let app = RealmSwift.App(id: "rchat-xxxxx") // TODO: Set the Realm application ID
```
When a SwiftUI view (in this case, `LoggedInView`) needs to access synced data, the parent view must flag that flexible sync will be used. It does this by passing the Realm configuration through the SwiftUI environment:
```swift
LoggedInView(userID: $userID)
.environment(\.realmConfiguration,
app.currentUser!.flexibleSyncConfiguration())
```
`LoggedInView` can then access two variables from the SwiftUI environment:
```swift
struct LoggedInView: View {
...
@Environment(\.realm) var realm
@ObservedResults(User.self) var users
```
The users variable is a live query containing all synced `User` objects in the Realm. But at this point, no `User` documents have been synced because we haven't subscribed to anything.
That's easy to fix. We create a new function (`setSubscription`) that's invoked when the view is opened:
```swift
struct LoggedInView: View {
...
@Binding var userID: String?
...
var body: some View {
ZStack {
...
}
.onAppear(perform: setSubscription)
}
private func setSubscription() {
let subscriptions = realm.subscriptions
subscriptions.update {
if let currentSubscription = subscriptions.first(named: "user_id") {
print("Replacing subscription for user_id")
currentSubscription.updateQuery(toType: User.self) { user in
user._id == userID!
}
} else {
print("Appending subscription for user_id")
subscriptions.append(QuerySubscription(name: "user_id") { user in
user._id == userID!
})
}
}
}
}
```
Subscriptions are given a name to make them easier to work with. I named this one `user_id`.
The function checks whether there's already a subscription named `user_id`. If there is, then the function replaces it. If not, then it adds the new subscription. In either case, the subscription is defined by passing in a query that finds any `User` documents/objects where the `_id` field matches the current user's ID.
The subscription should sync exactly one `User` object to the realm, and so the code for the view's body can work with the `first` object in the results:
```swift
struct LoggedInView: View {
...
@ObservedResults(User.self) var users
@Binding var userID: String?
...
var body: some View {
ZStack {
if let user = users.first {
...
ConversationListView(user: user)
...
}
}
.navigationBarTitle("Chats", displayMode: .inline)
.onAppear(perform: setSubscription)
}
}
```
Other views work with different model classes and sync queries. For example, when the user clicks on a chat room, a new view is opened that displays all of the `ChatMessage`s for that conversation:
```swift
struct ChatRoomBubblesView: View {
...
@ObservedResults(ChatMessage.self, sortDescriptor: SortDescriptor(keyPath: "timestamp", ascending: true)) var chats
@Environment(\.realm) var realm
...
var conversation: Conversation?
...
var body: some View {
VStack {
...
}
.onAppear { loadChatRoom() }
}
private func loadChatRoom() {
...
setSubscription()
...
}
private func setSubscription() {
let subscriptions = realm.subscriptions
subscriptions.update {
if let conversation = conversation {
if let currentSubscription = subscriptions.first(named: "conversation") {
currentSubscription.updateQuery(toType: ChatMessage.self) { chatMessage in
chatMessage.conversationID == conversation.id
}
} else {
subscriptions.append(QuerySubscription(name: "conversation") { chatMessage in
chatMessage.conversationID == conversation.id
})
}
}
}
}
}
```
In this case, the query syncs all `ChatMessage` objects where the `conversationID` matches the `id` of the `Conversation` object passed to the view.
The view's body can then iterate over all of the matching, synced objects:
```swift
struct ChatRoomBubblesView: View {
...
@ObservedResults(ChatMessage.self,
sortDescriptor: SortDescriptor(keyPath: "timestamp", ascending: true)) var chats
...
var body: some View {
...
ForEach(chats) { chatMessage in
ChatBubbleView(chatMessage: chatMessage,
authorName: chatMessage.author != user.userName ? chatMessage.author : nil,
isPreview: isPreview)
}
...
}
}
```
As it stands, there's some annoying behavior. If you open conversation A, go back, and then open conversation B, you'll initially see all of the messages from conversation A. The reason is that it takes a short time for the updated subscription to replace the `ChatMessage` objects in the synced Realm. I solve that by explicitly removing the subscription (which purges the synced objects) when closing the view:
```swift
struct ChatRoomBubblesView: View {
...
@Environment(\.realm) var realm
...
var body: some View {
VStack {
...
}
.onDisappear { closeChatRoom() }
}
private func closeChatRoom() {
clearSubscription()
...
}
private func clearSubscription() {
print("Leaving room, clearing subscription")
let subscriptions = realm.subscriptions
subscriptions.update {
subscriptions.remove(named: "conversation")
}
}
}
```
I made a design decision that I'd use the same name ("conversation") for this view, regardless of which conversation/chat room it's working with. An alternative would be to create a unique subscription whenever a new chat room is opened (including the ID of the conversation in the name). I could then avoid removing the subscription when navigating away from a chat room. This second approach would come with two advantages:
1. The app should be more responsive when navigating between chat rooms (if you'd previously visited the chat room that you're opening).
2. You can switch between chat rooms even when the device isn't connected to the internet.
The disadvantages of this approach would be:
1. The app could end up with a lot of subscriptions (and there's a cost to them).
2. The app continues to store all of the messages from any chat room that you've ever visited from this device. That consumes extra device storage and network bandwidth as messages from all of those rooms continue to be synced to the app.
A third approach would be to stick with a single subscription (named "conversations") that matches every `ChatMessage` object. The view would then need to apply a filter on the resulting `ChatMessage` objects so it only displayed those for the open chat room. This has the same advantages as the second approach, but can consume even more storage as the device will contain messages from all chat rooms—including those that the user has never visited.
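For illustration, a rough sketch of that third approach might look like the following. The subscription name and the filtering shown here are hypothetical and not part of the actual app:
```swift
// Hypothetical sketch of the third approach: one catch-all subscription,
// with per-room filtering done in the view rather than in the sync query.
private func setSubscription() {
    let subscriptions = realm.subscriptions
    subscriptions.update {
        if subscriptions.first(named: "all_messages") == nil {
            // Appending a QuerySubscription without a query block syncs every ChatMessage
            subscriptions.append(QuerySubscription<ChatMessage>(name: "all_messages"))
        }
    }
}

// The view then narrows the synced results down to the open chat room, e.g.:
// ForEach(chats.filter("conversationID == %@", conversation.id)) { chatMessage in ... }
```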
Note that a different user can log into the app from the same device. You don't want that user to be greeted with someone else's data. To avoid that, the app removes all subscriptions when a user logs out:
```swift
struct LogoutButton: View {
...
@Environment(\.realm) var realm
var body: some View {
Button("Log Out") { isConfirming = true }
.confirmationDialog("Are you that you want to logout",
isPresented: $isConfirming) {
Button("Confirm Logout", role: .destructive, action: logout)
Button("Cancel", role: .cancel) {}
}
.disabled(state.shouldIndicateActivity)
}
private func logout() {
...
clearSubscriptions()
...
}
private func clearSubscriptions() {
let subscriptions = realm.subscriptions
subscriptions.update {
subscriptions.removeAll()
}
}
}
```
## Conclusion
In this article, you've seen how to include Flexible Sync in your mobile app. I've shown the code for Swift, but the approach would be the same when building apps with Kotlin, JavaScript, or .NET.
Since this post was initially released, Flexible Sync has evolved to include more query and permission operators. For example, array operators would now allow me to add tighter restrictions on who can ask to read which chat messages.
You can now limit which fields from a document get synced to a given user. This could allow the removal of the `Chatster` collection, as it's only there to provide a read-only view of a subset of `User` fields to other users.
Want to suggest an enhancement or up-vote an existing request? The most effective way is through our feedback portal.
Got questions? Ask them in our Community forum.
| md | {
"tags": [
"Realm",
"iOS"
],
"pageDescription": "How to use Realm Flexible Sync in your app. Worked example of an iOS chat app.",
"contentType": "Tutorial"
} | Using Realm Flexible Sync in Your App—an iOS Tutorial | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/advanced-rag-langchain-mongodb | created | # Adding Semantic Caching and Memory to Your RAG Application Using MongoDB and LangChain
# Introduction
Retrieval-augmented generation (RAG) is an architectural design pattern prevalent in modern AI applications that provide generative AI functionalities. RAG has gained adoption within generative applications because it grounds the responses and outputs of large language models (LLMs) with relevant, factual, and up-to-date information. The key contribution of RAG is supplementing the parametric knowledge of the LLM with non-parametric knowledge, that is, information retrieved at query time, to generate adequate responses to user queries.
Modern AI applications that leverage LLMs and generative AI require more than effective response capabilities. AI engineers and developers should consider two other functionalities before moving RAG applications to production. Semantic caching and memory are two important capabilities for generative AI applications: they extend the usefulness of modern AI applications by reducing infrastructure costs and response latency and by persisting conversation history.
**Semantic caching is a process that utilizes a data store to keep a record of the queries and their results based on the semantics or context within the queries themselves.**
This means that, as opposed to a traditional cache that caches data based on exact matches of data requests or specific identifiers, a semantic cache understands and leverages the meaning and relationships inherent in the data. Within an LLM or RAG application, this means that user queries that are both exact matches and contextually similar to any queries that have been previously cached will benefit from an efficient information retrieval process.
Take, for example, an e-commerce platform's customer support chatbot; integrating semantic caching enables the system to respond to inquiries by understanding the context behind user queries. So, whether a customer asks about the "best smartphone for night photography" or "a phone for night photos," the chatbot can leverage its semantic cache to pull relevant, previously stored responses, improving both the efficiency and relevance of its answers.
LLM-powered chatbot interfaces are now prevalent in generative AI applications. Still, the conversations held between LLM and application users must be stored and retrieved to create a coherent and contextually relevant interaction history. The benefits of having a reference of interaction history lie in providing additional context to LLMs, understanding previously held conversations, improving the personalization of GenAI applications, and enabling the chatbot to provide more accurate responses to queries.
MongoDB Atlas vector search capabilities enable the creation of a semantic cache, and the new LangChain-MongoDB integration makes integrating this cache in RAG applications easier. The LangChain-MongoDB integration also makes implementing a conversation store for interactions with RAG applications easier.
**Here's what’s covered in this tutorial:**
- How to implement memory and storage of conversation history using LangChain and MongoDB
- How to implement semantic cache using LangChain and MongoDB
- Overview of semantic cache and memory utilization within RAG applications
The following GitHub repository contains all implementations presented in this tutorial, along with other use cases and examples of RAG implementations.
----------
# Step 1: Installing required libraries
This section guides you through the installation process of the essential libraries needed to implement the RAG application, complete with memory and history capabilities, within your current development environment. Here is the list of required libraries:
- **datasets**: Python library to get access to datasets available on Hugging Face Hub
- **langchain**: Python toolkit for LangChain
- **langchain-mongodb**: Python package to use MongoDB as a vector store, semantic cache, chat history store, etc., in LangChain
- **langchain-openai**: Python package to use OpenAI models with LangChain
- **pymongo**: Python toolkit for MongoDB
- **pandas**: Python library for data analysis, exploration, and manipulation
```
! pip install -qU datasets langchain langchain-mongodb langchain-openai pymongo pandas
```
Do note that this tutorial utilizes OpenAI embedding and base models. To access the models, ensure you have an OpenAI API key.
In your development environment, create a reference to the OpenAI API key.
```
import getpass
OPENAI_API_KEY = getpass.getpass("Enter your OpenAI API key:")
```
----------
# Step 2: Database setup
To handle the requirements for equipping the RAG application with the capabilities of storing interaction or conversation history and a semantic cache, two new collections must be created alongside the collection that will hold the main application data.
Creating a database and collection within MongoDB is made simple with MongoDB Atlas.
1. Register a free Atlas account or sign in to your existing Atlas account.
2. Follow the instructions (select Atlas UI as the procedure) to deploy your first cluster.
3. Create the database: `langchain_chatbot`.
4. Within the `langchain_chatbot` database, create the following collections:
- `data` : Hold all data that acts as a knowledge source for the chatbot.
- `history` : Hold all conversations held between the chatbot and the application user.
- `semantic_cache` : Hold all queries made to the chatbot along with their LLM responses.
5. Create a vector search index named `vector_index` for the `data` collection. This index enables the RAG application to retrieve records as additional context to supplement user queries via vector search. Below is the JSON definition of the `data` collection vector search index.
```
{
  "fields": [
    {
      "numDimensions": 1536,
      "path": "embedding",
      "similarity": "cosine",
      "type": "vector"
    }
  ]
}
```
6\. Create a vector search index with a text filter named `vector_index` for the `semantic_cache` collection. This index enables the RAG application to retrieve responses to queries semantically similar to a current query asked by the application user. Below is the JSON definition of the `semantic_cache` collection vector search index.
```
{
  "fields": [
    {
      "numDimensions": 1536,
      "path": "embedding",
      "similarity": "cosine",
      "type": "vector"
    },
    {
      "path": "llm_string",
      "type": "filter"
    }
  ]
}
```
By the end of this step, you should have a database with three collections and two defined vector search indexes. The final step of this section is to obtain the connection URI string to the created Atlas cluster to establish a connection between the database and the current development environment. Follow the steps to get the connection string from the Atlas UI.
In your development environment, create a reference to the MongoDB URI string.
```
MONGODB_URI = getpass.getpass("Enter your MongoDB connection string:")
```
----------
# Step 3: Download and prepare the dataset
This tutorial uses MongoDB’s embedded_movies dataset. A datapoint within the movie dataset contains information corresponding to a particular movie; plot, genre, cast, runtime, and more are captured for each data point. After loading the dataset into the development environment, it is converted into a Pandas data frame object, which enables data structure manipulation and analysis with relative ease.
```
from datasets import load_dataset
import pandas as pd
data = load_dataset("MongoDB/embedded_movies")
df = pd.DataFrame(data["train"])
# Only keep records where the fullplot field is not null
df = df[df["fullplot"].notna()]
# Renaming the embedding field to "embedding" -- required by LangChain
df.rename(columns={"plot_embedding": "embedding"}, inplace=True)
```
**The code above executes the following operations:**
- Import the `load_dataset` module from the `datasets` library, which enables the appropriate dataset to be loaded for this tutorial by specifying the path. The full dataset is loaded into the development environment and referenced by the variable `data`.
- Only the dataset's train partition is required to be utilized; the variable `df` holds a reference to the dataset training partition as a Pandas DataFrame.
- The DataFrame is filtered to only keep records where the `fullplot` field is not null. This step ensures that any subsequent operations or analyses that rely on the `fullplot` field, such as the embedding process, will not be hindered by missing data. The filtering process uses pandas' notna() method to check for non-null entries in the `fullplot` column.
- The column `plot_embedding` in the DataFrame is renamed to `embedding`. This step is necessary for compatibility with LangChain, which requires an input field named embedding.
By the end of the operations in this section, we have a full dataset that acts as a knowledge source for the chatbot and is ready to be ingested into the `data` collection in the `langchain_chatbot` database.
----------
# Step 4: Create a naive RAG chain with MongoDB Vector Store
Before adding chat history and caching, let’s first see how to create a simple RAG chain using LangChain, with MongoDB as the vector store. Here’s what the workflow looks like:
![Naive RAG workflow][1]
The user question is embedded, and relevant documents are retrieved from the MongoDB vector store. The retrieved documents, along with the user query, are passed as a prompt to the LLM, which generates an answer to the question.
Let’s first ingest data into a MongoDB collection. We will use this collection as the vector store for our RAG chain.
```
from pymongo import MongoClient
# Initialize MongoDB python client
client = MongoClient(MONGODB_URI)
DB_NAME = "langchain_chatbot"
COLLECTION_NAME = "data"
ATLAS_VECTOR_SEARCH_INDEX_NAME = "vector_index"
collection = client[DB_NAME][COLLECTION_NAME]
```
The code above creates a MongoDB client and defines the database `langchain_chatbot` and collection `data` where we will store our data. Remember, you will also need to create a vector search index to efficiently retrieve data from the MongoDB vector store, as documented in Step 2 of this tutorial. To do this, refer to our official vector search index creation guide.
While creating the vector search index for the `data` collection, ensure that it is named `vector_index` and that the index definition looks as follows:
```
{
  "fields": [
    {
      "numDimensions": 1536,
      "path": "embedding",
      "similarity": "cosine",
      "type": "vector"
    }
  ]
}
```
> *NOTE*: We set `numDimensions` to `1536` because we use OpenAI’s `text-embedding-ada-002` model to create embeddings.
Next, we delete any existing documents from the `data` collection and ingest our data into it:
```
# Delete any existing records in the collection
collection.delete_many({})
# Data Ingestion
records = df.to_dict('records')
collection.insert_many(records)
print("Data ingestion into MongoDB completed")
```
Ingesting data into a MongoDB collection from a pandas DataFrame is a straightforward process. We first convert the DataFrame to a list of dictionaries and then utilize the `insert_many` method to bulk ingest documents into the collection.
With our data in MongoDB, let’s use it to construct a vector store for our RAG chain:
```
from langchain_openai import OpenAIEmbeddings
from langchain_mongodb import MongoDBAtlasVectorSearch
# Using the text-embedding-ada-002 since that's what was used to create embeddings in the movies dataset
embeddings = OpenAIEmbeddings(openai_api_key=OPENAI_API_KEY, model="text-embedding-ada-002")
# Vector Store Creation
vector_store = MongoDBAtlasVectorSearch.from_connection_string(
connection_string=MONGODB_URI,
namespace=DB_NAME + "." + COLLECTION_NAME,
embedding= embeddings,
index_name=ATLAS_VECTOR_SEARCH_INDEX_NAME,
text_key="fullplot"
)
```
We use the `from_connection_string` method of the `MongoDBAtlasVectorSearch` class from the `langchain_mongodb` integration to create a MongoDB vector store from a MongoDB connection URI. The method takes the following arguments:
- **connection_string**: MongoDB connection URI
- **namespace**: A valid MongoDB namespace (database and collection)
- **embedding**: Embedding model to use to generate embeddings for a vector search
- **index_name**: MongoDB Atlas vector search index name
- **text_key**: Field in the ingested documents that contain the text
The next step is to use the MongoDB vector store as a retriever in our RAG chain. In LangChain, a retriever is an interface that returns documents given a query. You can use a vector store as a retriever by using the `as_retriever` method:
```
retriever = vector_store.as_retriever(search_type="similarity", search_kwargs={"k": 5})
```
`as_retriever` can take arguments such as `search_type` — i.e., what metric to use to retrieve documents. Here, we choose `similarity` since we want to retrieve the most similar documents to a given query. We can also specify additional search arguments such as `k` — i.e., the number of documents to retrieve. In our example, we set it to 5, which means the 5 most similar documents will be retrieved for a given query.
The final step is to put all of these pieces together to create a RAG chain.
> NOTE: Chains in LangChain are a sequence of calls either to an LLM, a
> tool, or a data processing step. The recommended way to compose chains
> in LangChain is using the LangChain Expression
> Language
> (LCEL). Each component in a chain is referred to as a `Runnable` and
> can be invoked, streamed, etc., independently of other components in
> the chain.
```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_core.output_parsers import StrOutputParser
# Generate context using the retriever, and pass the user question through
retrieve = {"context": retriever | (lambda docs: "\n\n".join(d.page_content for d in docs])), "question": RunnablePassthrough()}
template = """Answer the question based only on the following context: \
{context}
Question: {question}
"""
# Defining the chat prompt
prompt = ChatPromptTemplate.from_template(template)
# Defining the model to be used for chat completion
model = ChatOpenAI(temperature=0, openai_api_key=OPENAI_API_KEY)
# Parse output as a string
parse_output = StrOutputParser()
# Naive RAG chain
naive_rag_chain = (
retrieve
| prompt
| model
| parse_output
)
```
The code snippet above does the following:
- Defines the `retrieve` component: It takes the user input (a question) and sends it to the `retriever` to obtain similar documents. It also formats the output to match the input format expected by the next Runnable, which in this case is a dictionary with `context` and `question` as keys. The `RunnablePassthrough()` call for the `question` key indicates that the user input is simply passed through to the next stage under the `question` key.
- Defines the `prompt` component: It crafts a prompt by populating a prompt template with the `context` and `question` from the `retrieve` stage.
- Defines the `model` component: This specifies the chat model to use. We use OpenAI — unless specified otherwise, the `gpt-3.5-turbo` model is used by default.
- Defines the `parse_output` component: A simple output parser parses the result from the LLM into a string.
- Defines a `naive_rag_chain`: It uses LCEL pipe ( | ) notation to chain together the above components.
Let’s test out our chain by asking a question. We do this using the `invoke()` method, which is used to call a chain on an input:
```
naive_rag_chain.invoke("What is the best movie to watch when sad?")
Output: Once a Thief
```
> NOTE: With complex chains, it can be hard to tell whether or not
> information is flowing through them as expected. We highly recommend
> using LangSmith for debugging and
> monitoring in such cases. Simply grab an API
> key and add the following lines
> to your code to view
> traces
> in the LangSmith UI:
```
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=
```
----------
# Step 5: Create a RAG chain with chat history
Now that we have seen how to create a simple RAG chain, let’s see how to add chat message history to it and persist it in MongoDB. The workflow for this chain looks something like this:
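The user’s question is combined with the conversation history loaded from MongoDB, relevant documents are retrieved as before, and the prompt sent to the LLM contains the context, the history, and the new question. Below is a minimal sketch of one way to wire this up with `langchain-mongodb`; it reuses the `retriever`, `model`, and `parse_output` components from Step 4, and the prompt wording and session ID are illustrative rather than prescriptive. The full walkthrough is in the notebook in the GitHub repository linked in the introduction.
```python
from operator import itemgetter

from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_mongodb.chat_message_histories import MongoDBChatMessageHistory

# Each session's messages are persisted as documents in langchain_chatbot.history
def get_session_history(session_id: str) -> MongoDBChatMessageHistory:
    return MongoDBChatMessageHistory(
        connection_string=MONGODB_URI,
        session_id=session_id,
        database_name=DB_NAME,
        collection_name="history",
    )

# The prompt now includes a placeholder for prior messages
prompt_with_history = ChatPromptTemplate.from_messages([
    ("system", "Answer the question based only on the following context:\n{context}"),
    MessagesPlaceholder(variable_name="history"),
    ("human", "{question}"),
])

# Same retrieval logic as before, but the chain input is now a dict with a "question" key
retrieve_with_history = {
    "context": itemgetter("question")
    | retriever
    | (lambda docs: "\n\n".join([d.page_content for d in docs])),
    "question": itemgetter("question"),
    "history": itemgetter("history"),
}

rag_with_history = RunnableWithMessageHistory(
    retrieve_with_history | prompt_with_history | model | parse_output,
    get_session_history,
    input_messages_key="question",
    history_messages_key="history",
)

rag_with_history.invoke(
    {"question": "What is the best movie to watch when sad?"},
    {"configurable": {"session_id": "demo_session"}},
)
```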
----------
# FAQs
1\. **What is retrieval-augmented generation (RAG)?**
RAG is a design pattern in AI applications that enhances the capabilities of large language models (LLMs) by grounding their responses with relevant, factual, and up-to-date information. This is achieved by supplementing LLMs' parametric knowledge with non-parametric knowledge, enabling the generation of more accurate and contextually relevant responses.
2\. **How does integrating memory and chat history enhance RAG applications?**
Integrating memory and chat history into RAG applications allows for the retention and retrieval of past interactions between the large language model (LLM) and users. This functionality enriches the model's context awareness, enabling it to generate responses that are relevant to the immediate query and reflect the continuity and nuances of ongoing conversations. By maintaining a coherent and contextually relevant interaction history, RAG applications can offer more personalized and accurate responses, significantly enhancing the user experience and the application's overall effectiveness.
3\. **Why is semantic caching important in RAG applications?**
Semantic caching stores the results of user queries and their associated responses based on the query's semantics. This approach allows for efficient information retrieval when semantically similar queries are made in the future, reducing API calls to LLM providers and lowering both latency and operational costs.
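For reference, wiring a semantic cache into the chain built in this tutorial takes only a few lines with the LangChain-MongoDB integration. The sketch below assumes the `embeddings` object from Step 4 and the `semantic_cache` collection and `vector_index` index from Step 2:
```python
from langchain_core.globals import set_llm_cache
from langchain_mongodb.cache import MongoDBAtlasSemanticCache

# Every LLM call is now checked against semantically similar cached queries first
set_llm_cache(
    MongoDBAtlasSemanticCache(
        connection_string=MONGODB_URI,
        database_name=DB_NAME,
        collection_name="semantic_cache",
        embedding=embeddings,
        index_name="vector_index",
    )
)
```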
4\. **How does MongoDB Atlas support RAG applications?**
MongoDB Atlas offers vector search capabilities, making it easier to implement semantic caches and conversation stores within RAG applications. This integration facilitates the efficient retrieval of semantically similar queries and the storage of interaction histories, enhancing the application's overall performance and user experience.
5\. **How can semantic caching reduce query execution times in RAG applications?**
RAG applications can quickly retrieve cached answers for semantically similar queries without recomputing them by caching responses to queries based on their semantic content. This significantly reduces the time to generate responses, as demonstrated by the decreased query execution times upon subsequent similar queries.
6\. **What benefits does the LangChain-MongoDB integration offer?**
This integration simplifies the process of adding semantic caching and memory capabilities to RAG applications. It enables the efficient management of conversation histories and the implementation of semantic caches using MongoDB's powerful vector search features, leading to improved application performance and user experience.
7\. **How does one measure the impact of semantic caching on a RAG application?**
By monitoring query execution times before and after implementing semantic caching, developers can observe the efficiency gains the cache provides. A noticeable reduction in execution times for semantically similar queries indicates the cache's effectiveness in improving response speeds and reducing operational costs.
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt2f4885e0ad80cf6c/65fb18fda1e8151092d5d332/Screenshot_2024-03-20_at_17.12.00.png
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt6653138666384116/65fb2b7996251beeef7212b8/Screenshot_2024-03-20_at_18.31.05.png
[3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltd29de502dea33ac5/65fb1de1f4a4cf95f4150473/Screenshot_2024-03-20_at_16.39.13.png | md | {
"tags": [
"Atlas",
"Python",
"AI",
"Pandas"
],
"pageDescription": "This guide outlines how to enhance Retrieval-Augmented Generation (RAG) applications with semantic caching and memory using MongoDB and LangChain. It explains integrating semantic caching to improve response efficiency and relevance by storing query results based on semantics. Additionally, it describes adding memory for maintaining conversation history, enabling context-aware interactions. \n\nThe tutorial includes steps for setting up MongoDB, implementing semantic caching, and incorporating these features into RAG applications with LangChain, leading to improved response times and enriched user interactions through efficient data retrieval and personalized experiences.",
"contentType": "Tutorial"
} | Adding Semantic Caching and Memory to Your RAG Application Using MongoDB and LangChain | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/products/atlas/how-use-cohere-embeddings-rerank-modules-mongodb-atlas | created | # How to Use Cohere Embeddings and Rerank Modules with MongoDB Atlas
A daunting task that developers currently face while developing solutions powered by the retrieval-augmented generation (RAG) framework is the choice of retrieval mechanism. Augmenting the large language model (LLM) prompt with relevant and exhaustive information creates better responses from such systems. For semantic similarity search, you are tasked with choosing the most appropriate embedding model. For a full-text search implementation, you have to be thorough to achieve precise recall and high accuracy in your results. Sometimes, the solution requires a combined implementation that benefits from both retrieval mechanisms.
If your current full-text search scoring workflow leaves something to be desired, or if you find yourself spending too much time writing numerous lines of code to get semantic search functionality working within your applications, then Cohere and MongoDB can help. To prevent these issues from holding you back from leveraging powerful AI search functionality or machine learning within your application, Cohere and MongoDB offer easy-to-use and fully managed solutions.
Cohere is an AI company specializing in large language models.
1. With a powerful tool for embedding natural language in their projects, it can help you represent more accurate, relevant, and engaging content as embeddings. The Cohere language model also offers a simple and intuitive API that allows you to easily integrate it with your existing workflows and platforms.
2. The Cohere Rerank module is a component of the Cohere natural language processing system that helps to select the best output from a set of candidates. The module uses a neural network to score each candidate based on its relevance, semantic similarity, theme, and style. The module then ranks the candidates according to their scores and returns the top N as the final output.
MongoDB Atlas is a fully managed developer data platform service that provides scalable, secure, and reliable data storage and access for your applications. One of the key features of MongoDB Atlas is the ability to perform vector search and full-text search on your data, which can enhance the capabilities of your AI/ML-driven applications. MongoDB Atlas can help you build powerful and flexible AI/ML-powered applications that can leverage both structured and unstructured data. You can easily create and manage search indexes, perform queries, and analyze results using MongoDB Atlas's intuitive interface, APIs, and drivers. MongoDB Atlas Vector Search provides a unique feature — pre-filtering and post-filtering on vector search queries — that helps users control the behavior of their vector search results, thereby improving the accuracy and retrieval performance, and saving money at the same time.
Therefore, with Cohere and MongoDB Atlas, we can demonstrate techniques where we can easily power a semantic search capability on your private dataset with very few lines of code. Additionally, you can enhance the existing ranking of your full-text search retrieval systems using the Cohere Rerank module. Both techniques are highly beneficial for building more complex GenAI applications, such as RAG- or LLM-powered summarization or data augmentation.
## What will we do in this tutorial?
### Store embeddings and prepare the index
1. Use the Cohere Embed Jobs to generate vector embeddings for the first time on large datasets in an asynchronous and scheduled manner.
2. Add vector embeddings into MongoDB Atlas, which can store and index these vector embeddings alongside your other operational/metadata.
3. Finally, prepare the indexes for both vector embeddings and full-text search on our private dataset.
### Search with vector embeddings
1. Write a simple Python function to accept search terms/phrases and pass it through the Cohere embed API again to get a query vector.
2. Take these resultant query vector embeddings and perform a vector search query using the $vectorSearch operator in the MongoDB Aggregation Pipeline.
3. Pre-filter documents using meta information to narrow the search across your dataset, thereby speeding up the performance of vector search results while retaining accuracy.
4. The retrieved semantically similar documents can be post-filtered (relevancy score) to demonstrate a higher degree of control over the semantic search behaviour.
### Search with text and Rerank with Cohere
1. Write a simple Python function to accept search terms/phrases and prepare a query using the $search operator and MongoDB Aggregation Pipeline.
2. Take these resultant documents and perform a reranking operation of the retrieved documents to achieve higher accuracy with full-text search results using the Cohere rerank module.
To follow along, you will need a Cohere account and API key, and optionally the Cohere CLI tool.
Also, if you have not created a MongoDB Atlas instance for yourself, you can follow the tutorial to create one. This will provide you with your `MONGODB_CONNECTION_STR`.
Run the following lines of code in Jupyter Notebook to initialize the Cohere secret or API key and MongoDB Atlas connection string.
```python
import os
import getpass
# cohere api key
try:
    cohere_api_key = os.environ["COHERE_API_KEY"]
except KeyError:
    cohere_api_key = getpass.getpass("Please enter your COHERE API KEY (hit enter): ")
# MongoDB connection string
try:
    MONGO_CONN_STR = os.environ["MONGODB_CONNECTION_STR"]
except KeyError:
    MONGO_CONN_STR = getpass.getpass("Please enter your MongoDB Atlas Connection String (hit enter): ")
```
### Load dataset from the S3 bucket
Run the following lines of code in Jupyter Notebook to read data from an AWS S3 bucket directly to a pandas dataframe.
```python
import pandas as pd
import s3fs
df = pd.read_json("s3://ashwin-partner-bucket/cohere/movies_sample_dataset.jsonl", orient="records", lines=True)
df.to_json("./movies_sample_dataset.jsonl", orient="records", lines=True)
df[:3]
```
![Loaded AWS S3 Dataset][2]
### Initialize and schedule the Cohere embeddings job to embed the "sample_movies" dataset
Here we will create a movies dataset in Cohere by uploading our sample movies dataset that we fetched from the S3 bucket and have stored locally. Once we have created a dataset, we can use the Cohere embed jobs API to schedule a batch job to embed the entire dataset.
You can run the following lines of code in your Jupyter Notebook to upload your dataset to Cohere and schedule an embedding job.
```python
import cohere
co_client = cohere.Client(cohere_api_key, client_name='mongodb')
# create a dataset in Cohere Platform
dataset = co_client.create_dataset(name='movies',
data=open("./movies_sample_dataset.jsonl",'r'),
keep_fields=["overview","title","year"],
dataset_type="embed-input").wait()
dataset.wait()
dataset
dataset.wait()
# Schedule an Embedding job to run on the entire movies dataset
embed_job = co_client.create_embed_job(dataset_id=dataset.id,
input_type='search_document',
model='embed-english-v3.0',
truncate='END')
embed_job.wait()
output_dataset = co_client.get_dataset(embed_job.output.id)
results = list(map(lambda x:{"text":x["text"], "embedding": x["embeddings"]["float"]},output_dataset))
len(results)
```
### How to initialize MongoDB Atlas and insert data to a MongoDB collection
Now that we have created the vector embeddings for our sample movies dataset, we can initialize the MongoDB client and insert the documents into our collection of choice by running the following lines of code in the Jupyter Notebook.
```python
from pymongo import MongoClient
mongo_client = MongoClient(MONGO_CONN_STR)
# Upload documents along with vector embeddings to MongoDB Atlas Collection
output_collection = mongo_client["sample_mflix"]["cohere_embed_movies"]
if output_collection.count_documents({}) > 0:
    output_collection.delete_many({})
e = output_collection.insert_many(results)
```
### Programmatically create vector search and full-text search index
With the latest update to the **Pymongo** Python package, you can now create your vector search index as well as full-text search indexes from the Python client itself. You can also create vector indexes using the MongoDB Atlas UI or `mongosh`.
Run the following lines of code in your Jupyter Notebook to create search and vector search indexes on your new collection.
```
output_collection.create_search_index(
    {
        "definition": {
            "mappings": {
                "dynamic": True,
                "fields": {
                    "embedding": {
                        "dimensions": 1024,
                        "similarity": "cosine",
                        "type": "vector"
                    },
                    "fullplot": {
                        "type": "string"
                    }
                }
            }
        },
        "name": "default"
    }
)
```
### Query MongoDB vector index using $vectorSearch
MongoDB Atlas brings the flexibility of using vector search alongside full-text search filters. Additionally, you can apply range, string, and numeric filters using the aggregation pipeline. This allows the end user to control the behavior of the semantic search response from the search engine. The below lines of code will demonstrate how you can perform vector search along with pre-filtering on the **year** field to get movies earlier than **1990.** Plus, you have better control over the relevance of returned results, so you can perform post-filtering on the response using the MongoDB Query API. In this demo, we are filtering on the **score** field generated as a result of performing the vector similarity between the query and respective documents, using a heuristic to retain only the accurate results.
Run the below lines of code in Jupyter Notebook to initialize a function that can help you achieve **vector search + pre-filter + post-filter**.
```python
def query_vector_search(q, prefilter={}, postfilter={}, path="embedding", topK=2):
    ele = co_client.embed(model="embed-english-v3.0", input_type="search_query", texts=[q])
    query_embedding = ele.embeddings[0]
    vs_query = {
        "index": "default",
        "path": path,
        "queryVector": query_embedding,
        "numCandidates": 10,
        "limit": topK,
    }
    if len(prefilter) > 0:
        vs_query["filter"] = prefilter
    new_search_query = {"$vectorSearch": vs_query}
    project = {"$project": {"score": {"$meta": "vectorSearchScore"}, "_id": 0, "title": 1, "release_date": 1, "overview": 1, "year": 1}}
    if len(postfilter.keys()) > 0:
        postFilter = {"$match": postfilter}
        res = list(output_collection.aggregate([new_search_query, project, postFilter]))
    else:
        res = list(output_collection.aggregate([new_search_query, project]))
    return res
```
#### Vector search query example
Run the below lines of code in Jupyter Notebook cell and you can see the following results.
```python
query_vector_search("romantic comedy movies", topK=5)
```
![Vector Search Query Example Results][3]
#### Vector search query example with prefilter
```python
query_vector_search("romantic comedy movies", prefilter={"year":{"$lt": 1990}}, topK=5)
```
![Vector Search with Prefilter Example Results][4]
#### Vector search query example with prefilter and postfilter to control the semantic search relevance and behaviour
```python
query_vector_search("romantic comedy movies", prefilter={"year":{"$lt": 1990}}, postfilter={"score": {"$gt":0.76}},topK=5)
```
![Vector Search with Prefilter and Postfilter Example Results][5]
### Leverage MongoDB Atlas full-text search with Cohere Rerank module
Cohere Rerank is a module in the Cohere suite of offerings that enhances the quality of search results by leveraging semantic search. This helps elevate the traditional search engine performance, which relies solely on keywords. Rerank goes a step further by ranking results retrieved from the search engine based on their semantic relevance to the input query. This pass of re-ranking search results helps achieve more appropriate and contextually similar search results.
To demonstrate how the Rerank module can be leveraged with MongoDB Atlas full-text search, we can follow along by running the following line of code in your Jupyter Notebook.
```python
# sample search query using $search operator in aggregation pipeline
def query_fulltext_search(q, topK=25):
    v = {"$search": {
        "text": {
            "query": q,
            "path": "overview"
        }
    }}
    project = {"$project": {"score": {"$meta": "searchScore"}, "_id": 0, "title": 1, "release-date": 1, "overview": 1}}
    docs = list(output_collection.aggregate([v, project, {"$limit": topK}]))
    return docs
# results before re ranking
docs = query_fulltext_search("romantic comedy movies", topK=10)
docs
```
![Cohere Rerank Model Sample Results][6]
```python
# After passing the search results through the Cohere rerank module
q = "romantic comedy movies"
docs = query_fulltext_search(q)
results = co_client.rerank(query=q, documents=list(map(lambda x:x["overview"], docs)), top_n=5, model='rerank-english-v2.0') # Change top_n to change the number of results returned. If top_n is not passed, all results will be returned.
for idx, r in enumerate(results):
    print(f"Document Rank: {idx + 1}, Document Index: {r.index}")
    print(f"Document Title: {docs[r.index]['title']}")
    print(f"Document: {r.document['text']}")
    print(f"Relevance Score: {r.relevance_score:.2f}")
    print("\n")
```
Output post reranking the full-text search results:
```
Document Rank: 1, Document Index: 22
Document Title: Love Finds Andy Hardy
Document: A 1938 romantic comedy film which tells the story of a teenage boy who becomes entangled with three different girls all at the same time.
Relevance Score: 0.99
Document Rank: 2, Document Index: 12
Document Title: Seventh Heaven
Document: Seventh Heaven or De zevende zemel is a 1993 Dutch romantic comedy film directed by Jean-Paul Lilienfeld.
Relevance Score: 0.99
Document Rank: 3, Document Index: 19
Document Title: Shared Rooms
Document: A new romantic comedy feature film that brings together three interrelated tales of gay men seeking family, love and sex during the holiday season.
Relevance Score: 0.97
Document Rank: 4, Document Index: 3
Document Title: Too Many Husbands
Document: Romantic comedy adapted from a Somerset Maugham play.
Relevance Score: 0.97
Document Rank: 5, Document Index: 20
Document Title: Walking the Streets of Moscow
Document: "I Am Walking Along Moscow" aka "Ya Shagayu Po Moskve" (1963) is a charming lyrical comedy directed by Georgi Daneliya in 1963 that was nominated for Golden Palm at Cannes Film Festival. Daneliya proved that it is possible to create a masterpiece in the most difficult genre of romantic comedy. Made by the team of young and incredibly talented artists that besides Daneliya included writer/poet Gennady Shpalikov, composer Andrei Petrov, and cinematographer Vadim Yusov (who had made four films with Andrei Tarkovski), and the dream cast of the talented actors even in the smaller cameos, "I Am Walking Along Moscow" keeps walking victoriously through the decades remaining deservingly one of the best and most beloved Russian comedies and simply one of the best Russian movies ever made. Funny and gentle, dreamy and humorous, romantic and realistic, the film is blessed with the eternal youth and will always take to the walk on the streets of Moscow new generations of the grateful viewers.
Relevance Score: 0.96
```
## Summary
In this tutorial, we were able to demonstrate the following:
1. Using the Cohere embedding along with MongoDB Vector Search, we were able to show how easy it is to achieve semantic search functionality alongside your operational data functions.
2. With Cohere Rerank, we were able to retrieve results using full-text search capabilities in MongoDB and then rank them by semantic relevance, thereby delivering richer, more relevant results without replacing your existing search architecture setup.
3. The implementations were achieved with minimal lines of code, showcasing ease of use.
4. Leveraging Cohere Embeddings and Rerank does not require a team of ML experts to develop and maintain, so the monthly maintenance costs are kept to a minimum.
5. Both solutions are cloud-agnostic and, hence, can be set up on any cloud platform.
The same can be found in a notebook, which will help reduce the time and effort of following the steps in this blog.
## What's next?
To learn more about how MongoDB Atlas is helping build application-side ML integration in real-world applications, you can visit the MongoDB for AI page.
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blte8f8f2d8681106dd/660c5dfcdd5b9e752ba8949a/1.png
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt11b31c83a7a30a85/660c5e236c4a398354e46705/2.png
[3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltf09db7ce89c89f05/660c5e4a3110d0a96d069608/3.png
[4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt5707998b8d57764c/660c5e75c3bc8bfdfbdd1fc1/4.png
[5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt533d00bfde1ec48f/660c5e94c3bc8b26dedd1fcd/5.png
[6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltc67a9ac477d5029e/660c5eb0df43aaed1cf11e70/6.png | md | {
"tags": [
"Atlas",
"Python"
],
"pageDescription": "",
"contentType": "Tutorial"
} | How to Use Cohere Embeddings and Rerank Modules with MongoDB Atlas | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/products/atlas/add-memory-to-javascript-rag-application-mongodb-langchain | created | # Add Memory to Your JavaScript RAG Application Using MongoDB and LangChain
## Introduction
AI applications with generative AI capabilities, such as text and image generation, require more than just the base large language models (LLMs). This is because LLMs are limited to their parametric knowledge, which can be outdated and not context-specific to a user query. The retrieval-augmented generation (RAG) design pattern solves the problem experienced with naive LLM systems by adding relevant information and context retrieved from an information source, such as a database, to the user's query before obtaining a response from the base LLM. The RAG architecture design pattern for AI applications has seen wide adoption due to its ease of implementation and effectiveness in grounding LLM systems with up-to-date and relevant data.
For developers creating new AI projects that use LLMs and this kind of advanced AI, it's important to think about more than just giving smart answers. Before they share their RAG-based projects with the world, they need to add features like memory. Adding memory to your AI systems can help by lowering costs, making them faster, and handling conversations in a smarter way.
Chatbots that use LLMs are now a regular feature in many online platforms, from customer service to personal assistants. However, one of the keys to making these chatbots more effective lies in their ability to recall and utilize previous conversations. By maintaining a detailed record of interactions, AI systems can significantly improve their understanding of the user's needs, preferences, and context. This historical insight allows the chatbot to offer responses that are not only relevant but also tailored to the individual user, enhancing the overall user experience.
Consider, for example, a customer who contacts an online bookstore's chatbot over several days, asking about different science fiction novels and authors. On the first day, the customer asks for book recommendations based on classic science fiction themes. The next day, they return to ask about books from specific authors in that genre. If the chatbot keeps a record of these interactions, it can connect the dots between the customer's various interests. By the third interaction, the chatbot could suggest new releases that align with the customer's demonstrated preference for classic science fiction, even recommending special deals or related genres the customer might not have explored yet.
This ability goes beyond simple question-and-answer dynamics; it creates a conversational memory for the chatbot, making each interaction more personal and engaging. Users feel understood and valued, leading to increased satisfaction and loyalty. In essence, by keeping track of conversations, chatbots powered by LLMs transform from impersonal answering machines into dynamic conversational partners capable of providing highly personalized and meaningful engagements.
MongoDB Atlas Vector Search and the new LangChain-MongoDB integration make adding these advanced data handling features to RAG projects easier.
What’s covered in this article:
* How to add memory and save records of chats using LangChain and MongoDB
* How adding memory helps in RAG projects
For more information, including step-by-step guides and examples, check out the GitHub repository.
> This article outlines how to add memory to a JavaScript-based RAG application. See how it’s done in Python and even add semantic caching!
## Step 1: Set up the environment
You may be used to notebooks that use Python, but you may have noticed that the notebook linked above uses JavaScript, specifically Deno.
To run this notebook, you will need to install Deno and set up the Deno Jupyter kernel. You can also follow the instructions.
Because Deno does not require any packages to be “installed,” it’s not necessary to install anything with npm.
Here is a breakdown of the dependencies for this project:
* mongodb: official Node.js driver from MongoDB
* nodejs-polars: JavaScript library for data analysis, exploration, and manipulation
* @langchain: JavaScript toolkit for LangChain
* @langchain/openai: JavaScript library to use OpenAI with LangChain
* @langchain/mongodb: JavaScript library to use MongoDB as a vector store and chat history store with LangChain
You’ll also need an OpenAI API key since we’ll be utilizing OpenAI for embedding and base models. Save your API key as an environment variable.
## Step 2: Set up the database
For this tutorial, we’ll use a free tier cluster on Atlas. If you don’t already have an account, register, then follow the instructions to deploy your first cluster.
Get your database connection string from the Atlas UI and save it as an environment variable.
## Step 3: Download and prepare the dataset
We’re going to use MongoDB’s sample dataset called embedded_movies. This dataset contains a wide variety of movie details such as plot, genre, cast, and runtime. Embeddings on the fullplot field have already been created using OpenAI’s `text-embedding-ada-002` model and can be found in the plot_embedding field.
After loading the dataset, we’ll use Polars to convert it into a DataFrame, which will allow us to manipulate and analyze it easily.
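The snippet below is a sketch of what that code can look like in Deno; the Hugging Face URL is a placeholder, so substitute the actual link to `sample_mflix.embedded_movies.json`.
```javascript
import pl from "npm:nodejs-polars";

// Placeholder URL: use the actual sample_mflix.embedded_movies.json link from Hugging Face
const response = await fetch(
  "https://huggingface.co/datasets/MongoDB/embedded_movies/resolve/main/sample_mflix.embedded_movies.json"
);
const data = await response.json();

// Parse the JSON records into a DataFrame
let df = pl.DataFrame(data);

// Keep only records that have a fullplot, and rename plot_embedding to embedding
df = df.dropNulls("fullplot").rename({ plot_embedding: "embedding" });
```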
The code above executes the following operations:
* Import the nodejs-polars library for data management.
* fetch the sample_mflix.embedded_movies.json file directly from HuggingFace.
* The df variable parses the JSON into a DataFrame.
* The DataFrame is cleaned up to keep only the records that have information in the fullplot field. This guarantees that future steps or analyses depending on the fullplot field, like the embedding procedure, are not disrupted by any absence of data.
* Additionally, the plot_embedding column within the DataFrame is renamed to embedding. This step is necessary since LangChain requires an input field named “embedding.”
After finishing the steps in this part, we end up with a complete dataset that serves as the information base for the chatbot. Next, we’ll add the data into our MongoDB database and set up our first RAG chain using it.
## Step 4: Create a naive RAG chain with a MongoDB vector store
We’ll start by creating a simple RAG chain using LangChain, with MongoDB as the vector store. Once we get this set up, we’ll add chat history to optimize it even further.
First, we need to create a vector search index in MongoDB Atlas. This is what enables our RAG application to query semantically similar records to use as additional context in our LLM prompts.
Be sure to create your vector search index on the `data` collection and name it `vector_index`. Here is the index definition you’ll need:
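(This definition assumes the embeddings are stored in the `embedding` field, as they are after the renaming step above.)
```json
{
  "fields": [
    {
      "numDimensions": 1536,
      "path": "embedding",
      "similarity": "cosine",
      "type": "vector"
    }
  ]
}
```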
> **NOTE**: We set `numDimensions` to `1536` because we use OpenAI’s `text-embedding-ada-002` model to create embeddings.
Now, we can start constructing the vector store for our RAG chain.
We’ll use `OpenAIEmbeddings` from LangChain and define the model used. Again, it’s the `text-embedding-ada-002` model, which was used in the original embeddings of this dataset.
Next, we define our configuration by identifying the collection, index name, text key (full-text field of the embedding), and embedding key (which field contains the embeddings).
Then, pass everything into our `MongoDBAtlasVectorSearch()` method to create our vector store.
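Putting those pieces together looks roughly like the sketch below. The database name and the environment variable names are assumptions, so adjust them to match your own setup:
```javascript
import { MongoClient } from "npm:mongodb";
import { OpenAIEmbeddings } from "npm:@langchain/openai";
import { MongoDBAtlasVectorSearch } from "npm:@langchain/mongodb";

// Assumed names: adjust the env var and database to wherever you loaded the data
const client = new MongoClient(Deno.env.get("MONGODB_URI") ?? "");
const collection = client.db("langchain_demo").collection("data");

// Same model that produced the embeddings already stored in the dataset;
// the OpenAI API key is read from the OPENAI_API_KEY environment variable
const embeddings = new OpenAIEmbeddings({ modelName: "text-embedding-ada-002" });

const vectorStore = new MongoDBAtlasVectorSearch(embeddings, {
  collection,
  indexName: "vector_index", // the Atlas Vector Search index created above
  textKey: "fullplot", // field containing the raw text
  embeddingKey: "embedding", // field containing the vector embeddings
});
```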
Now, we can “do stuff” with our vector store. We need a way to return the documents that get returned from our vector search. For that, we can use a retriever. (Not the golden kind.)
We’ll use the retriever method on our vector store and identify the search type and the number of documents to retrieve represented by k.
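In code, that can look like the following, where `vectorStore` is the vector store created above:
```javascript
const retriever = vectorStore.asRetriever({ searchType: "similarity", k: 5 });
```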
This will return the five most similar documents that match our vector search query.
The final step is to assemble everything into a RAG chain.
> **KNOWLEDGE**: In LangChain, the concept of chains refers to a sequence that may include interactions with an LLM, utilization of a specific tool, or a step related to processing data. To effectively construct these chains, it is advised to employ the LangChain Expression Language (LCEL). Within this structure, each part of a chain is called a Runnable, allowing for independent operation or streaming, separate from the chain's other components.
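Here is a sketch of what the assembled chain can look like; the prompt wording is illustrative:
```javascript
import { ChatOpenAI } from "npm:@langchain/openai";
import { ChatPromptTemplate } from "npm:@langchain/core/prompts";
import { StringOutputParser } from "npm:@langchain/core/output_parsers";
import { RunnablePassthrough, RunnableSequence } from "npm:@langchain/core/runnables";

// Join the retrieved documents into a single context string
const formatDocs = (docs) => docs.map((d) => d.pageContent).join("\n\n");

// Retrieve context for the question and pass the question through unchanged
const retrieve = {
  context: retriever.pipe(formatDocs),
  question: new RunnablePassthrough(),
};

const prompt = ChatPromptTemplate.fromTemplate(
  `Answer the question based only on the following context:
{context}

Question: {question}`
);

const model = new ChatOpenAI({ temperature: 0 });

const naiveRagChain = RunnableSequence.from([
  retrieve,
  prompt,
  model,
  new StringOutputParser(),
]);
```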
Here’s the breakdown of the code above:
1. retrieve: Utilizes the user's input to retrieve similar documents using the retriever. The input (question) also gets passed through using a RunnablePassthrough().
2. prompt: ChatPromptTemplate allows us to construct a prompt with specific instructions for our AI bot or system, passing two variables: context and question. These variables are populated from the retrieve stage above.
3. model: Here, we can specify which model we want to use to answer the question. The default is currently gpt-3.5-turbo if unspecified.
4. naiveRagChain: Using a RunnableSequence, we pass each stage in order: retrieve, prompt, model, and finally, we parse the output from the LLM into a string using StringOutputParser().
It’s time to test! Let’s ask it a question. We’ll use the invoke() method to do this.
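For example (the question itself is arbitrary):
```javascript
const answer = await naiveRagChain.invoke("What is the best movie to watch when sad?");
console.log(answer);
```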
## Step 5: Implement chat history into a RAG chain
That was a simple, everyday RAG chain. Next, let’s take it up a notch and implement persistent chat message history. Here is what that could look like.
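In outline, and this is only a sketch with illustrative names, the pieces are a `MongoDBChatMessageHistory` per session plus a `RunnableWithMessageHistory` wrapper around a chain whose prompt includes a history placeholder:
```javascript
import { MongoDBChatMessageHistory } from "npm:@langchain/mongodb";
import { RunnableWithMessageHistory } from "npm:@langchain/core/runnables";

// One message-history store per session, kept in a "history" collection
const historyCollection = client.db("langchain_demo").collection("history");

const chainWithHistory = new RunnableWithMessageHistory({
  // ragChainWithHistoryPrompt is a placeholder for a chain whose prompt takes
  // {question} plus a MessagesPlaceholder for {history}
  runnable: ragChainWithHistoryPrompt,
  getMessageHistory: (sessionId) =>
    new MongoDBChatMessageHistory({ collection: historyCollection, sessionId }),
  inputMessagesKey: "question",
  historyMessagesKey: "history",
});

await chainWithHistory.invoke(
  { question: "What is the best movie to watch when sad?" },
  { configurable: { sessionId: "some-user-id" } }
);
```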
## FAQs
1. **What is retrieval-augmented generation (RAG)?**
RAG is a way of making large language models (LLMs) smarter by giving them the latest and most accurate information. This is done by mixing in extra details from outside the model's built-in knowledge, helping it give better, more relevant answers.
2. **How does integrating memory and chat history enhance RAG applications?**
Adding memory and conversation history to RAG apps lets them keep and look back at past messages between the large language model (LLM) and people. This feature makes the model more aware of the context, helping it give answers that fit the current question and match the flow of the ongoing conversation. By keeping track of a chat history, RAG apps can give more personal and correct answers, greatly improving the experience for the user and how well the app works overall.
3. **How does MongoDB Atlas support RAG applications?**
MongoDB's vector search capabilities enable RAG applications to become smarter and provide more relevant responses. It enhances memory functions, streamlining the storage and recall of conversations. This boosts context awareness and personalizes user interactions. The result is a significant improvement in both application performance and user experience, making AI interactions more dynamic and user-centric.
4. **What benefits does the LangChain-MongoDB integration offer?**
This setup makes it easier to include meaning-based memory in RAG apps. It allows for the easy handling of past conversation records through MongoDB's strong vector search tools, leading to a better running app and a nicer experience for the user.
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt892d50c61236c4b6/660b015018980fc9cf2025ab/js-rag-history-2.png
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt712f63e36913f018/660b0150071375f3acc420e1/js-rag-history-3.png | md | {
"tags": [
"Atlas",
"JavaScript",
"AI"
],
"pageDescription": "Unlock the full potential of your JavaScript RAG application with MongoDB and LangChain. This guide dives into enhancing AI systems with a conversational memory, improving response relevance and user interaction by integrating MongoDB's Atlas Vector Search and LangChain-MongoDB. Discover how to setup your environment, manage chat histories, and construct advanced RAG chains for smarter, context-aware applications. Perfect for developers looking to elevate AI projects with real-time, personalized user engagement.",
"contentType": "Tutorial"
} | Add Memory to Your JavaScript RAG Application Using MongoDB and LangChain | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/products/atlas/build-newsletter-website-mongodb-data-platform | created | # Build a Newsletter Website With the MongoDB Data Platform
>
>
>Please note: This article discusses Stitch. Stitch is now MongoDB Realm. All the same features and functionality, now with a new name. Learn more here. We will be updating this article in due course.
>
>
"This'll be simple," I thought. "How hard can it be?" I said to myself, unwisely.
*record scratch*
*freeze frame*
Yup, that's me. You're probably wondering how I ended up in this
situation.
Once upon a time, there was a small company, and that small company had an internal newsletter to let people know what was going on. Because the company was small and everyone was busy, the absolute simplest and most minimal approach was chosen, i.e. a Google Doc that anyone in the Marketing team could update when there was relevant news. This system worked well.
As the company grew, one Google Doc became many Google Docs, and an automated email was added that went out once a week to remind people to look at the docs. Now, things were not so simple. Maybe the docs got updated, and maybe they didn't, because it was not always clear who owned what. The people receiving the email just saw links to the docs, with no indication of whether there was anything new or good in there, and after a while, they stopped clicking through, or only did so occasionally. The person who had been sending the emails got a new job and asked for someone to take over the running of the newsletter.
This is where I come in. Yes, I failed to hide when the boss came asking for volunteers.
I took one look at the existing system, and knew it could not continue as it was — so of course, I also started looking for suckers er I mean volunteers. Unfortunately, I could not find anyone who wanted to take over whitewashing this particular fence, so I set about trying to figure out how hard it could be to roll my own automated fence-whitewashing system to run the newsletter back end.
Pretty quickly I had my minimum viable product, thanks to MongoDB Atlas and Stitch. And the best part? The whole thing fits into the free tier of both. You can get your own free-forever instance here, just by supplying your email address. And if you ask me nicely, I might even throw some free credits your way to try out some of the paid features too.
>
>
>If you haven't yet set up your free cluster on MongoDB Atlas, now is a great time to do so. You have all the instructions in this blog post.
>
>
## Modelling Data: The Document
The first hurdle of this project was unlearning bad relational habits. In the relational database world, a newsletter like this would probably use several JOINs:
- A table of issues
- Containing references to a table of news items
- Containing references to further tables of topics, authors
In the document-oriented world, we don't do it that way. Instead, I defined a simple document format:
``` javascript
{
_id: 5e715b2099e27fa8539274ea,
section: "events",
itemTitle: "Webinar] Building FHIR Applications with MongoDB, April 14th",
itemText: "MongoDB and FHIR both natively support the JSON format, the standard e...",
itemLink: "https://www.mongodb.com/webinar/building-fhir-applications-with-mongod...",
tags: ["fhir", "healthcare", "webinar"],
createdDate: 2020-03-17T23:01:20.038+00:00,
submitter: "marketing.genius@mongodb.com",
updates: [],
published: "true",
publishedDate: 2020-03-30T07:10:06.955+00:00,
email: "true"
}
```
This structure should be fairly self-explanatory. Each news item has:
- A title
- Some descriptive text
- A link to more information
- One or more topic tags
- Plus some utility fields to do things like tracking edits
Each item is part of a section and can be published simply to the web, or also to email. I don't want to spam readers with everything, so the email is curated; only items with `email: true` go to email, while everything else just shows up on the website but not in readers' inboxes.
One item to point out is the updates array, which is empty in this particular example. This field was a later addition to the format, as I realised when I built the edit functionality that it would be good to track who made edits and when. The flexibility of the document model meant that I could simply add that field without causing any cascading changes elsewhere in the code, or even to documents that had already been created in the database.
So much for the database end. Now we need something to read the documents and do something useful with them.
I went with Stitch, which, together with the Atlas database, is another part of the MongoDB Cloud platform. In keeping with the general direction of the project, Stitch makes my life super-easy by taking care of things like authentication, access rules, MongoDB queries, services, and functions. It's a lot more than just a convenient place to store files; using Stitch let me write the code in JavaScript, gave me somewhere easy to host the application logic, and connected to the MongoDB Atlas database with a single line of code:
``` javascript
client = stitch.Stitch.initializeDefaultAppClient(APP_ID);
```
`APP_ID` is, of course, my private application ID, which I'm not going
to include here! All of the code for the app can be found in my personal GitHub repository; almost all the functionality (and all of the code from the examples below) is in a single JavaScript file.
## Reading Documents
The newsletter goes out in HTML email, and it has a companion website, so my Stitch app assembles DOM sections in JavaScript to display the
newsletter. I won't go through the whole thing, but each step looks
something like this:
``` javascript
let itemTitleContainer = document.createElement("div");
itemTitleContainer.setAttribute("class", "news-item-title");
itemContainer.append(itemTitleContainer);
let itemTitle = document.createElement("p");
itemTitle.textContent = currentNewsItem.itemTitle;
itemTitleContainer.append(itemTitle);
```
This logic showcases the benefit of the document object model in MongoDB. `currentNewsItem` is an object in JavaScript which maps exactly to the document in MongoDB, and I can access the fields of the document simply by name, as in `currentNewsItem.itemTitle`. I don't have to create a whole separate object representation in my code and laboriously populate that with relational queries among many different tables of a database; I have the exact same object representation in the code as in the database.
In the same way, inputting a new item is simple because I can build up a JSON object from fields in a web form:
``` javascript
workingJSON[e.name] = e.value;
```
And then I can write that directly into the database:
``` javascript
submitJSON.createdDate = today;
if ( submitJSON.section == null ) { submitJSON.section = "news"; }
submitJSON.submitter = userEmail;
db.collection('atf').insertOne(submitJSON)
.then(returnResponse => {
console.log("Return Response: ", returnResponse);
window.alert("Submission recorded, thank you!");
})
.catch(errorFromInsert => {
console.log("Error from insert: ", errorFromInsert);
window.alert("Submission failed, sorry!");
});
```
There's a little bit more verbose feedback and error handling on this one than in some other parts of the code since people other than me use this part of the application!
## Aggregating An Issue
So much for inserting news items into the database. What about when someone wants to, y'know, read an issue of the newsletter? The first thing I need to do is to talk to the MongoDB Atlas database and figure out what is the most recent issue, where an issue is defined as the set of all the news items with the same published date. MongoDB has a feature called the aggregation pipeline, which works a bit like piping data from one command to another in a UNIX shell. An aggregation pipeline has multiple stages, each one of which makes a transformation to the input data and passes it on to the next stage. It's a great way of doing more complex queries like grouping documents, manipulating arrays, reshaping documents into different models, and so on, while keeping each individual step easy to reason about and debug.
In my case, I used a very simple aggregation pipeline to retrieve the most recent publication dates in the database, with three stages. In the first stage, using $group, I get all the publication dates. In the second stage, I use $match to remove any null dates, which correspond to items without a publication date — that is, unpublished items. Finally, I sort the dates, using — you guessed it — $sort to get the most recent ones.
``` javascript
let latestIssueDate = db.collection('atf').aggregate([
{ $group : { _id : "$publishedDate" } },
{ $match : { _id: { $ne: null } } },
{ $sort: { _id: -1 }}
]).asArray().then(latestIssueDate => {
thisIssueDate = latestIssueDate[0]._id;
prevIssueDate = latestIssueDate[1]._id;
ATFmakeIssueNav(thisIssueDate, prevIssueDate);
theIssue = { published: "true", publishedDate: thisIssueDate };
db.collection('atf').find(theIssue).asArray().then(dbItems => {
orderSections(dbItems); })
.catch(err => { console.error(err) });
}).catch(err => { console.error(err) });
```
As long as I have a list of all the publication dates, I can use the next most recent date for the navigation controls that let readers look at previous issues of the newsletter. The most important usage, though, is to retrieve the current issue, namely the list of all items with that most recent publication date. That's what the `find()` command does, and it takes as its argument a simple document:
``` javascript
{ published: "true", publishedDate: thisIssueDate }
```
In other words, I want all the documents which are published (not the drafts that are sitting in the queue waiting to be published), and where the published date is the most recent date that I found with the aggregation pipeline above.
That reference to `orderSections` is a utility function that makes sure that the sections of the newsletter come out in the right order. I can also catch any errors that occur, either in the aggregation pipeline or in the find operation itself.
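The same filter shape also covers the curated email edition described earlier: Only items flagged with `email: "true"` should make it into the outgoing message. A minimal sketch of that variant, reusing the `thisIssueDate` variable from above (`buildEmailBody` is a hypothetical helper, not part of the real app):

``` javascript
// Items for the email edition: published, part of the current issue, and flagged for email.
const emailIssue = { published: "true", publishedDate: thisIssueDate, email: "true" };

db.collection('atf').find(emailIssue).asArray()
  .then(emailItems => buildEmailBody(emailItems)) // buildEmailBody is a hypothetical helper
  .catch(err => console.error(err));
```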
## Putting It All Together
At this point publishing a newsletter is a question of selecting which items go into the issue and updating the published date for all those items:
``` javascript
const toPublish = { _id: { '$in': itemsToPublish } };
let today = new Date();
const update = { '$set': { publishedDate: today, published: "true" } };
const options = {};
db.collection('atf').updateMany(toPublish, update, options)
.then(returnResponse => {console.log("Return Response: ", returnResponse);})
.catch(errorFromUpdate => {console.log("Error from update: ", errorFromUpdate);});
```
The updateMany() command has three documents as its arguments.
- The first, the filter, specifies which documents to update, which here means all the ones with an ID in the `itemsToPublish` array.
- The second is the actual update we are going to make, which is to set the `publishedDate` to today's date and mark them as published.
- The third, optional argument, is actually empty in my case because I don't need to specify any options.
## Moving The Mail
Now I could send emails myself from Stitch, but we already use an external specialist service that has a nice REST API. I used a Stitch Function to assemble the HTTP calls and talk to that external service. Stitch Functions are a super-easy way to run simple JavaScript functions in the Stitch serverless platform, making it easy to implement application logic, securely integrate with cloud services and microservices, and build APIs — exactly my use case!
I set up a simple HTTP service, which I can then access easily like this:
``` javascript
const http = context.services.get("mcPublish");
```
As is common, the REST API I want to use requires an API key. I generated the key on their website, but I don't want to leave that lying around. Luckily, Stitch also lets me define a secret, so I don't need that API key in plaintext:
``` javascript
let mcAPIkey = context.values.get("MCsecret");
```
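Putting the service and the secret together, a Stitch Function that pushes an issue to the email provider could look roughly like the sketch below. The endpoint URL and payload are placeholders for whatever your provider's REST API expects; only `context.services.get()` and `context.values.get()` come from the setup above:

``` javascript
// Stitch Function (runs server-side): send a campaign through the external email service.
exports = async function(issueHtml) {
  const http = context.services.get("mcPublish");   // the HTTP service configured above
  const mcAPIkey = context.values.get("MCsecret");  // the secret holding the API key

  // Placeholder endpoint and payload; adjust to your provider's API.
  const response = await http.post({
    url: "https://example-mail-provider.com/api/campaigns",
    headers: { "Authorization": ["apikey " + mcAPIkey] },
    body: { html: issueHtml },
    encodeBodyAsJSON: true
  });

  return EJSON.parse(response.body.text());
};
```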
And that (apart from 1200 more lines of special cases, admin functions, workarounds, and miscellanea) is that. But I wanted a bit more visibility on which topics were popular, who was using the service and so on. How to do that?
## Charting Made Super Easy
Fortunately, there's an obvious answer to my prayers in the shape of Charts, yet another part of the MongoDB Cloud platform, which let me very quickly build a visualisation of activity on the back-end.
Here's how simple that is: I have my database, imaginatively named "newsletter", and the collection, named "atf" for Above the Fold, the name of the newsletter I inherited. I can see all of the fields from my document, so I can take the `_id` field for my X-axis, and then the `createdDate` for the Y-axis, binning by month, to create a real-time chart of the number of news items submitted each month.
It really is that easy to create visualizations in Charts, including much more complicated ones than this, using all MongoDB's rich data types. Take a look at some of the more advanced options and give it a go with your own data, or with the sample data in a free instance of MongoDB Atlas.
It was a great learning experience to build this thing, and the whole exercise gave me a renewed appreciation for the power of MongoDB, the document model, and the extended MongoDB Cloud platform - both the Atlas database and the correlated services like Stitch and Charts. There's also room for expansion; one of the next features I want to build is search, using MongoDB Atlas' Text Search feature.
## Over To You
As I mentioned at the beginning, one of the nice things about this project is that the whole thing fits in the free tier of MongoDB Atlas, Stitch, and Charts. You can sign up for your own free-forever instance and start building today, no credit card required, and no expiry date either. There's a helpful onboarding wizard that will walk you through loading some sample data and performing some basic tasks, and when you're ready to go further, the MongoDB docs are top-notch, with plenty of worked examples. Once you get into it and want to learn more, the best place to turn is MongoDB University, which gives you the opportunity to learn MongoDB at your own pace. You can also get certified on MongoDB, which will get you listed on our public list of certified MongoDB professionals. | md | {
"tags": [
"Atlas",
"JavaScript"
],
"pageDescription": "How I ended up building a whole CMS for a newsletter — when it wasn't even my job",
"contentType": "Article"
} | Build a Newsletter Website With the MongoDB Data Platform | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/languages/go/http-basics-with-go | created | # HTTP basics With Go 1.22
Go is a wonderful programming language: very productive, with many capabilities. This series of articles is designed to offer you a feature walkthrough of the language, while we build a realistic program from scratch.
In order for this to work, there are some things we must agree upon:
- This is not a comprehensive explanation of the Go syntax. I will only explain the bits strictly needed to write the code of these articles.
- Typing the code is better than just copying and pasting, but do as you wish.
- Materials are available to try by yourself at your own pace, but it is recommended to play along if you do this in a live session.
- If you are a Golang newbie, type and believe. If you have written some Go, ask any questions. If you have Golang experience, there are comments about best practices, so let's discuss those. In summary: Ask about the syntax, ask about the magic, talk about the advanced topics, or go to the bar.
- Finally, although we are only going to cover the essential parts, the product of this series is the seed for a note-keeping back end where we will deal with the notes and their metadata. I hope you like it.
## Hello world
1. Let's start by creating a directory for our project and initializing the project. Create a directory and get into it. Initialize the project as a Go module with the identifier “github.com/jdortiz/go-intro,” which you should change to something that is unique and owned by you.
```shell
go mod init github.com/jdortiz/go-intro
```
2. In the file explorer of VSCode, add a new file called `main.go` with the following content:
```go
package main
import "fmt"
func main() {
fmt.Println("Hola Caracola")
}
```
3. Let's go together through the contents of that file to understand what we are doing here.
1. Every source file must belong to a `package`. All the files in a directory must belong to the same package. Package `main` is where you should create your `main` function.
    2. `func` is the keyword for declaring functions, and `main` is where your program starts running.
3. `fmt.Println()` is a function of the standard library (stdlib) to print some text to the standard output. It belongs to the `fmt` package.
4. Having the `import` statement allows us to use the `fmt` package in the code, as we are doing with the `fmt.Println()` function.
4. The environment is configured so we can run the program from VS Code. Use "Run and Debug" on the left bar and execute the program. The message "Hola Caracola" will show up on the debug console.
5. You can also run the program from the embedded terminal by using
```sh
go run main.go
```
## Simplest web server
1. Go's standard library includes all the pieces needed to create a full-fledged HTTP server. Until version 1.22, using third-party packages for additional functionality, such as easily routing requests based on the HTTP verb, was very common. Go 1.22 has added most of the features of those packages in a backward compatible way.
2. Webservers listen to requests done to a given IP address and port. Let's define that in a constant inside of the main function:
```go
const serverAddr string = "127.0.0.1:8081"
```
3. If we want to reply to requests sent to the root directory of our web server, we must tell it that we are interested in that URL path and what we want to happen when a request is received. We do this by using `http.HandleFunc()` at the bottom of the main function, with two parameters: a pattern and a function. The pattern indicates the path that we are interested in (like `"/"` or `"/customers"`) but, since Go 1.22, the pattern can also be used to specify the HTTP verb, restrict to a given host name, and/or extract parameters from the URL. We will use `"GET /"`, meaning that we are interested in GET requests to the root. The function takes two parameters: an `http.ResponseWriter`, used to produce the response, and an `http.Request` that holds the request data. We will be using an anonymous function (a.k.a. lambda) that initially doesn't do anything. You will need to import the "net/http" package, and VS Code can do it automatically using its *quick fix* features. (There is a short sketch of extracting a path parameter right after this list.)
```go
http.HandleFunc("GET /", func(w http.ResponseWriter, r *http.Request) {
})
```
4. Inside of our lambda, we can use the response writer to add a message to our response. We use the `Write()` method of the response writer that takes a slice of bytes (i.e., a "view" of an array), so we need to convert the string. HTML could be added here.
```go
w.Write([]byte("HTTP Caracola"))
```
5. Tell the server to accept connections to the IP address and port with the functionality that we have just set up. Do it after the whole invocation to `http.HandleFunc()`.
```go
http.ListenAndServe(serverAddr, nil)
```
6. `http.ListenAndServe()` returns an error when it finishes. It is a good idea to wrap it with another function that will log the message when that happens. `log` also needs to be imported: Do it yourself if VSCode didn't take care of it.
```go
log.Fatal(http.ListenAndServe(serverAddr, nil))
```
7. Compile and run. The codespace will offer to use a browser or open the port. You can ignore this for now.
8. If you run the program from the terminal, open a second terminal using the button on the right of your zsh shell. Make a request from the terminal to get our web server to respond. If you have chosen to use your own environment, this won't work unless you are using Go 1.22.
```shell
curl -i localhost:8081/
```
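The routing patterns from step 3 can also capture path parameters, which Go 1.22 exposes through the request's `PathValue` method. Here is a minimal sketch; the `/notes/{id}` route is only an illustration and not part of the server we build in this article:

```go
// GET /notes/{id} -- the {id} segment is captured by the Go 1.22 router.
http.HandleFunc("GET /notes/{id}", func(w http.ResponseWriter, r *http.Request) {
	id := r.PathValue("id") // returns the matched path segment as a string
	fmt.Fprintf(w, "note id: %s", id)
})
```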
## (De)Serialization
![Unloading and deserializing task][1]
1. HTTP handlers can also be implemented as regular (i.e., non-anonymous) functions, and they are actually easier to maintain. Let's define one for an endpoint that can be used to create a note after the `main` function.
```go
func createNote(w http.ResponseWriter, r *http.Request) {
}
```
2. Before we can implement that handler, we need to define a type that will hold the data for a note. The simplest note could have a title and text. We will put this code before the `main` function.
```go
type Note struct {
Title string
Text string
}
```
3. But we can have some more data, like a list of categories, that in Go is represented as a slice of strings (`[]string`), or a field that uses another type that defines the scope of this note as a combination of a project and an area. The complete definition of these types would be:
```go
type Scope struct {
Project string
Area string
}
type Note struct {
Title string
Tags []string
Text string
Scope Scope
}
```
4. Notice that both the names of the types and the names of the fields start with a capital letter. That is the way to say in Go that something is exported and it would also apply to function names. It is similar to using a `public` attribute in other programming languages.
5. Also, notice that field declarations have the name of the field first and its type later. The last field is called "Scope," because it is exported, and its type, defined a few lines above, is also called Scope. No problem here: Go will understand the difference based on the position.
6. Inside of our `createNote()` handler, we can now define a variable for that type. The order is also variable name first, type second. `note` is a valid variable from here on, but at the moment all the fields are empty.
```go
var note Note
```
7. Data is exchanged between HTTP servers and clients using some serialization format. One of the most common ones nowadays is JSON. After the previous line, let's create a decoder that can convert bytes from the HTTP request stream into an actual object. The `encoding/json` package of the standard library provides what we need. Notice that I hadn't declared the `decoder` variable. I use the "short variable declaration" (`:=`), which declares and assigns value to the variable. In this case, Go is also doing type inference.
```go
decoder := json.NewDecoder(r.Body)
```
8. This decoder can now be used in the next line to deserialize the data in the HTTP request. That method returns an error, which will be `nil` (no value) if everything went well, or some (error) value otherwise. Notice that we use `&` to pass a reference to the variable, so the method can change its value.
```go
err := decoder.Decode(&note)
```
9. The expression can be wrapped to be used as the condition in an if statement. It is perfectly fine in Go to obtain some value and then compare in an expression after a semicolon. There are no parentheses surrounding the conditional expression.
```go
if err := decoder.Decode(&note); err != nil {
}
```
10. If anything goes wrong, we want to inform the HTTP client that there is a problem and exit the function. This early exit is very common when you handle errors in Go. `http.Error()` is provided by the `net/http` package, writes to the response writer the provided error message, and sets the HTTP status.
```go
http.Error(w, err.Error(), http.StatusBadRequest)
return
```
11. If all goes well, we just print the value of the note that was sent by the client. Here, we use another function of the `fmt` package that writes to a Writer the given data, using a format string. Format strings are similar to the ones used in C but with some extra options and more safety. `"%+v"` means print the value in a default format and include the field names (% to denote this is a format specifier, v for printing the value, the + for including the field names).
```go
fmt.Fprintf(w, "Note: %+v", note)
```
12. Let's add this handler to our server. It will be used when a POST request is sent to the `/notes` path.
```go
http.HandleFunc("POST /notes", createNote)
```
13. Run this new version.
14. Let's first test what happens when it cannot deserialize the data. We should get a 400 status code and the error message in the body.
```shell
curl -iX POST localhost:8081/notes
```
15. Finally, let's see what happens when we pass some good data. The deserialized data will be printed to the standard output of the program.
```shell
curl -iX POST -d '{ "title": "Master plan", "tags": ["ai","users"], "text": "ubiquitous AI", "scope": {"project": "world domination", "area":"strategy"} }' localhost:8081/notes
```
## Conclusion
In this article, we have learned:
- How to start and initialize a Go project.
- How to write a basic HTTP server from scratch using just Go standard library functionality.
- How to add endpoints to our HTTP server that provide different requests for different HTTP verbs in the client request.
- How to deserialize JSON data from the request and use it in our program.
Developing this kind of program in Go is quite easy and requires no external packages or, at least, not many. If this has been your first step into the world of Go programming, I hope that you have enjoyed it and that if you had some prior experience with Go, there was something of value for you.
In the next article of this series, we will go a step further and persist the data that we have exchanged with the HTTP client. This repository contains all the code for this article and the next ones so you can follow along.
Stay curious. Hack your code. See you next time!
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt76b3a1b7e9e5be0f/661f8900394ea4203a75b196/unloading-serialization.jpg | md | {
"tags": [
"Go"
],
"pageDescription": "This tutorial explains how to create a basic HTTP server with a couple of endpoints to backend developers with no prior experience on Go. It uses only the standard library functionality, but takes advantages of the new features introduced in Go 1.22.",
"contentType": "Tutorial"
} | HTTP basics With Go 1.22 | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/products/atlas/practical-exercise-atlas-device-sdk-web-sync | created | # A Practical Exercise of Atlas Device SDK for Web With Sync (Preview)
**Table of contents**
* Atlas Device SDK for web with Device Sync and its real-world usage
* Architecture
* Basic components
* Building your own React web app
- Step 1: Setting up the back end
- Step 2: Creating an App Services app
- Step 3: Getting ready for Device Sync
- Step 4: Atlas Device SDK
- Step 5: Let the data flow (start using sync)!
* Implementation of the coffee app
* A comparison between Device Sync and the Web SDK without Sync
- What will our web app look like without Device Sync?
- Which one should we choose?
* Conclusions
* Appendix
## Atlas Device SDK for web with Sync and its real-world usage
The Device Sync feature of the Web SDK is a powerful tool designed to bring real-time data synchronization and automatic conflict resolution capabilities to cross-platform applications, seamlessly bridging the gap between users’ back ends and client-side data. It facilitates the creation of dynamic user experiences by ensuring eventual data consistency across client apps with syncing and conflict resolution.
In the real-world environment, certain client apps benefit from a high level of automation, therefore bringing users an intuitive interaction with the client app. For example, a coffee consumption counter web app that allows the user to keep track of the cups of coffee he/she consumes from different devices and dynamically calculates daily coffee intake will create a ubiquitous user experience.
In this tutorial, I will demonstrate how a developer can easily enable Device Sync to the above-mentioned coffee consumption web app. By the end of this article, you will be able to build a React web app that first syncs the cups of coffee you consumed during the day with MongoDB Atlas, then syncs the same data to different devices running the app. However, our journey doesn’t stop here. I will also showcase the difference between the aforementioned Web SDK with Sync (preview) and the Web SDK without automatic syncing when our app needs to sync data with the MongoDB back end. Hopefully, this article will also help you to make a choice between these two options.
## Architecture
In this tutorial, we will create two web apps with the same theme: a coffee consumption calculator. The web app that benefits from Device Sync will be named `Coffee app Device Sync` while the one following the traditional MongoDB client will be named `Coffee app`.
The coffee app with Device Sync utilizes Atlas Device Sync to synchronize data between the client app and backend server in real time whilst our coffee app without Device Sync relies on the MongoDB driver.
Data synchronization relies on the components below.
1. _App Services_: App Services and its Atlas Device SDKs are a suite of development tools optimized for cross-platform devices such as mobile, IoT, and edge, to manage data via MongoDB’s edge database called Realm, which in turn leverages Device Sync. With various SDKs designed for different platforms, Realm enables the possibility of building data-driven applications for multiple mobile devices. The Web SDK we are going to explore in this article is one of the handy tools that help developers build intuitive web app experiences.
2. _User authentication_: Before setting up Device Sync, we will need to authenticate a user. In our particular case, for the sake of simplicity, the `anonymous` method is being used. This procedure allows the sync from the client app to be associated with a specific user account on your backend App Services app.
3. _Schema_: You can see schema as the description of how your data looks, or a data model. Schema exists both in your client app’s code and App Services’ UI. You will need to provide the name of it within the configuration.
4. _Sync configuration_: It is mandatory to provide the authenticated user, sync mode (flexible: true), and `initialSubscriptions` which defines a function that sets up initial subscriptions when Realm is opened.
5. _Opening a synced realm_: As you will see, we use `Realm.open(config);` to open a realm that is synchronized with Atlas. The whole process between your client app and back end, as you may have guessed, is bridged by Device Sync.
Once Realm is opened with the configuration we discussed above, any changes to the coffee objects in the local realm are _automatically_ synchronized with Atlas. Likewise, changes made in Atlas are synchronized back to the local realm, keeping the data up-to-date across devices and the server. What’s even better is that the process of data synchronization happens seamlessly in the background without any user action involved.
Back end:
* MongoDB Atlas, as the cloud storage
* Data (in this article, we will use dummy data)
* MongoDB App Services app, as the web app’s business logic
These components briefly describe the building blocks of a web app powered by MongoDB App Services. The coffee app is just an example to showcase how Device Sync works and the possibilities for developers to build more complicated apps.
## Building your own React web app
In this section, I will provide step-by-step instructions on how to build your own copy of Coffee App. By the end, you will be able to interact with Realm and Device Sync on your own.
### Step 1. Setting up the back end
MongoDB Atlas is used as the backend server of the web app. Essentially, Atlas has different tiers, from M0 to M700, which represent the difference in storage size, cloud server performance, and limitations from low to high. For more details on this topic, kindly refer to our documentation.
In this tutorial, we will use the free tier (M0), as it is just powerful enough for learning purposes.
To set up an M0 cluster, you will first need to create an account with MongoDB.
Once the account is ready, we can proceed to “Create a Project.”
We will not go into the details of cluster configuration, as this will not be in the scope of this article.
### Step 2. Creating an App Services app
App Services (previously named Realm) is a suite of cloud-based tools (i.e., serverless functions, Device Sync, user management, rules) designed to streamline app development with Atlas. In other words, Atlas works as the datasource for App Services.
Our coffee app will utilize App Services in such a way that the back end will provide data sync among client apps.
For this tutorial, we just need to create an empty app. You can easily do so by skipping any template recommendations.
The App Services documentation gives a very good explanation of why schema is a mandatory and important component of Device Sync:
_To use Atlas Device Sync, you must define your data model in two formats:_
* _**App Services schema**: This is a server-side schema that defines your data in BSON. Device Sync uses the App Services schema to convert your data to MongoDB documents, enforce validation, and synchronize data between client devices and Atlas._
* _**Realm object schema**: This is client-side schema of data defined using the Realm SDKs. Each Realm SDK defines the Realm object schema in its own language-specific way. The Realm SDKs use this schema to store data in the Realm database and synchronize data with Device Sync._
> Note: As you can see, Development Mode allows your client app to define a schema and have it automatically reflected server-side. (Simply speaking, schema on your server will be modified by the client app.)
As you probably already guessed, this has the potential to mess with your app’s schema and cause serious issues (i.e., stopping Device Sync) in the production environment.
We only use Development Mode for learning purposes and a development environment, hence the name.
By now, we have created an App Services app and configured it to be ready for our coffee app project.
### Step 3. Getting ready for Device Sync
We are now ready to implement Device Sync in the coffee app. Sync happens when the following requirements are satisfied.
* Client devices are connected to the network and have an established connection to the server.
* The client has data to sync with the server and it initiates a sync session.
* The client sends IDENT messages to the server. You can see IDENT messages as an identifier that the client uses to tell the server exactly what Realm file it needs to sync and the status of the client realm (i.e., whether the current version is the client realm’s most recently synced server version).
The roadmap below shows the workflow of a web app with the Device Sync feature.
We will use the MongoDB Atlas Device SDK for the coffee app in this article.
Despite the differences in programming languages and functionalities, SDKs share the following common points:
* Providing a core database API for creating and working with local databases
* Providing an API that you need to connect to an Atlas App Services server, and therefore, server-side features like Device Sync, functions, triggers, and authentication will be available at your disposal
We will be using Atlas Device SDK for web later.
### Step 5. Let the data flow
**Implementation**:
Without further ado, I will walk you through the process of creating the coffee app.
Our work here is concentrated on the following parts:
* App.css — adjusts everything about UI style, color
* App.js — authentication, data model, business logic, and Sync
* Footer.js. — add optional information about the developer
* index.css. — add fonts and web page styling
As mentioned previously, React will be used as the library for our web app. Below are some options you can follow to create the project.
**Option 1 (the old-fashioned way)**: Create React App (CRA) has always been an “official” way to start a React project. However, it is no longer recommended in the React documents. The coffee app was originally developed using CRA. However, if you are coming from the older set-up or just wish to see how Device Sync is implemented within a React app, this will still be fine to follow.
**Option 2**: Vite addresses a major issue with CRA, the cumbersome dependency size, by introducing dependency pre-bundling. It provides lightning-fast, cold-starting performance.
If you already have your project built using CRA, there is also a fast way to make it Vite-compatible by using the code below.
`npx nx@latest init`
The line above will automatically detect your project’s dependency and structure and make it compatible with Vite. Your application will therefore also enjoy the high performance brought by Vite.
Our simple example app has most of its functionality within the `App.js` file. Therefore, let’s focus on this one and dive into the details.
(1)
Dependency-wise, below are the necessary `imports`.
```
import React, { useEffect, useState } from 'react';
import Realm, { App } from 'realm';
import './App.css';
import Footer from './Footer';
```
Notice `realm` is being imported above as we need to do this to the source files where we need to interact with the database.
(Consider using the `@realm/react` package to leverage hooks and providers for opening realms and managing the data. Refer to MongoDB’s other Web Sync Preview example app for how to integrate @realm/react.)
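As a side note, if you go the `@realm/react` route, the same setup is expressed with providers and hooks instead of a manual `Realm.open()` call. A rough sketch, where `CoffeeSchema`, `LoginScreen`, and `CoffeeCounter` are assumed names for illustration:

```
import { AppProvider, UserProvider, RealmProvider } from '@realm/react';

// AppProvider connects to App Services, UserProvider handles auth,
// and RealmProvider opens a flexible-sync realm for the wrapped components.
const AppWrapper = () => (
  <AppProvider id={REALM_APP_ID}>
    <UserProvider fallback={LoginScreen}>
      <RealmProvider
        schema={[CoffeeSchema]}
        sync={{
          flexible: true,
          initialSubscriptions: {
            update: (subs, realm) => subs.add(realm.objects('Coffee')),
          },
        }}
      >
        <CoffeeCounter />
      </RealmProvider>
    </UserProvider>
  </AppProvider>
);
```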
(2)
```
const REALM_APP_ID = 'mycoffeemenu-hflty'; // Input APP ID here.
const app = new App({ id: REALM_APP_ID });
```
To link your client app to an App Services app, you will need to supply the App ID within the code. The App ID is a unique identifier for each App Services app, and it will be needed as a reference while working with many MongoDB products.
Note: The client app refers to your actual web app whilst the App Services app refers to the app we create on the cloud, which resides on the Atlas App Services server.
You can easily copy your App ID from the App Services UI.
* Sync `config`: Within the `sync` block, we supply the information shown below.
  - `user`: Passing in the user’s login credentials
  - `flexible`: Defining what Sync mode the app will use
  - `initialSubscriptions`: Defining the queries for the data that needs to be synced; the two parameters `subs` and `realm` refer to the sync’s subscriptions and local database instance.
We now have built a crucial part that manages the data model used for Sync, authentication, sync mode, and subscription. This part customizes the initial data sync process and tailors it to fit the business logic.
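As a rough sketch, the pieces described above fit together like this; the `CoffeeSchema` name and the `Coffee`/`user_id` identifiers are assumptions for illustration, and the real data model lives in `App.js`:

```
// Anonymous login, then open a flexible-sync realm subscribed to this user's coffee data.
const user = await app.logIn(Realm.Credentials.anonymous());

const config = {
  schema: [CoffeeSchema], // assumed Realm object schema for the coffee counts
  sync: {
    user,
    flexible: true,
    initialSubscriptions: {
      update: (subs, realm) => {
        // Sync only the documents that belong to the logged-in user.
        subs.add(realm.objects('Coffee').filtered('user_id == $0', user.id));
      },
    },
  },
};

const realm = await Realm.open(config);
```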
(5)
Our coffee app calculates the cups of coffee we consume during the day. The simple app relies on inputs from the user. In this case, the data flowing in and out of the app is the number of different coffees the user consumes.
The coffee app without Device Sync updates the consumed values directly on the server, as shown by the code snippet below.
```
await coffeeCollection.updateOne(
{ user_id: user.id },
{ $set: { consumed: total } },
{ upsert: true }
);
```
Here, we use `upsert` to update and insert the changed values of specific coffee drinks. As you can see, this code snippet works directly with documents stored in the back end. Instead of opening up a realm with the Device Sync feature, the coffee app without Device Sync still uses Web SDK.
However, the above-described method is also known as “MongoDB Atlas Client.” The name itself is quite self-explanatory as it allows the client application to take advantage of Atlas features and access data from your app directly.
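For context, this is roughly how the non-sync app gets hold of `coffeeCollection` in the first place: The Web SDK exposes the linked Atlas cluster through the authenticated user. `mongodb-atlas` is the default service name, and the database and collection names here are assumptions:

```
// Atlas Client: query the linked cluster directly through the authenticated user.
const mongo = app.currentUser.mongoClient('mongodb-atlas');
const coffeeCollection = mongo.db('coffee').collection('consumption');

// Read the current totals for this user; nothing is synced automatically in the background.
const myTotals = await coffeeCollection.findOne({ user_id: app.currentUser.id });
```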
### 2: Which one should we choose?
Essentially, whether you should use the Device Sync feature from the Web SDK or follow the more traditional Atlas Client depends on your use cases, working environments, and existing codebase. We talked about two different ways to keep data updated between the client apps and the back end. Although both sample apps don’t look very different due to their simple functionality, this will be quite different in more complicated applications.
Look at the UI of both implementations of the web apps:
* By making use of server-side features (e.g., functions), we can keep a heavy workload on the App Services server while making sure our web app remains responsive.
* No encryption at rest: You can understand this limitation as Realm JS Web SDK only encrypts data in transit between the browser and server over HTTPS. Anything that’s saved in the device’s memory will be stored in a non-encrypted format.
However, there’s no need to panic. As previously mentioned, Device Sync uses roles and rules to strictly control users’ access permissions to different data.
A limitation of Atlas Client is the way data is updated/downloaded between the client and server. Compared to Device Sync, Atlas Client does not have the ability to keep data synced automatically. This can also be seen as a feature, in some use cases, where data should only be synced manually.
## Conclusion
In this article, we:
* Talked about the usage of the App Services Web SDK in a React web app.
* Compared Web SDK’s Device Sync feature against Atlas Client.
* Discussed which method we should choose.
The completed code examples are available in the appendix below. You can use them as live examples of MongoDB’s App Services Web SDK. As previously mentioned, the coffee apps are designed to be simple and straightforward when it comes to demoing the basic functionality of the Web SDK and its sync feature. It is also easy to add extra features and tailor the app’s source code according to your specific needs. For example:
1. Instead of anonymous authentication, further configure `credentials` to use other more secure auth methods, such as email/password.
2. Modify the data model to fit your app’s theme. For now, our coffee app keeps track of coffee consumption. However, the app can be quickly rebuilt into a recipe app or something similar without complicated modifications and refactoring.
Alternatively, the example apps can also serve as starting points for your own web app project.
App Services’ Web SDK is MongoDB’s answer to developing modern web apps that take advantage of Realm (a local database) and Atlas (a cloud storage solution). Web SDK now supports Device Sync (in preview) whilst before the preview release, Atlas Client allowed web apps to modify and manipulate data on the server. Both of the solutions have the use cases where they are the best fit, and there is no “right answer” that you need to follow.
As a developer, a better choice can be made by first figuring out the purpose of the app and how you would like it to interact with users. If you already have been working on an existing project, it is beneficial to check whether you indeed need the background auto-syncing feature (Device Sync), compared to using queries to perform CRUD operations (Atlas Client). Take a look at our example app and notice the `App.js` file contains the basic components that are needed for Device Sync to work. Therefore, you will be able to decide whether it is a good idea to integrate Device Sync into your project.
### Appendix (Useful links)
* App Services
* Atlas Device SDK for the web
* Realm Web and Atlas Device Sync (preview)
* Realm SDK references
* The coffee apps source code:
- Coffee app with Device Sync
- Coffee app without Device Sync
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt6af95a5a3f41ac6d/664500a901b7992a8fd19134/device-sync-between-client-device-atlas.png
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltaca56a39c1fbdbd2/6645015652b746f9042818d7/create-project.png
[3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltcc5cf094e67020f1/66450181acadaf4f23726805/deploy-database.png
[4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltebeb83b68b03a525/664501d366b81d2b3033f241/database-deployments.png
[5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltcc5e7b6718067804/6645021499f5a835bfc369c4/create-app.png
[6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltbe5631204b1c717c/664502448c5cd134d503a6e6/app-id-code.png
[7]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt8b6e7bbe222c51bc/66450296a3f9dfd191c0eeb5/define-schema.png
[8]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt763b14b569b2b372/664502be5c24836146bc18f2/configure-schema.png
[9]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt3c56ca7b13ca10fd/6645033fefc97a60764befe9/device-sync-roadmap.png
[10]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt56fd820a250bd117/664503915c2483382cbc1901/configure-access.png
[11]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blte63e871b8c2ece1d/664503b699f5a89764c369dc/server-side-schema.png
[12]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt6cf1ae1ff25656f0/6645057a4df3f52f6aee7df4/development-mode-switch.png
[13]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt25990d0bdcb9dccd/6645059da0104b10b7c6459d/auto-generated-data-model.png
[14]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltf415f5c37901fc1d/6645154ba0104bde23c6465c/side-panel.png
[15]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt478c0ad63be16e4b/664515915c24835ebebc19b0/auto-generated-data-model.png
[16]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blte1c7f47382325f02/664515bb8c5cd1758403a7ac/switching-on-development-mode.png
[17]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltda38b0c77bf82622/664516a466b81d234f33f33e/coffee-drinks-quantity-tracker-UI.png
[18]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltc0a269aa5d954891/66451902a3f9df91c3c0efe0/coffee-drinks-quantity-tracker-UI.png
[19]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt5dc1e20f474f4c6b/6645191abfbef587de5f695f/web-app-features-atlas-client.png | md | {
"tags": [
"Atlas"
],
"pageDescription": "In this tutorial, we demonstrate how a developer can easily enable Device Sync to a coffee consumption web app./practical-exercise-atlas-device-sdk-web-sync",
"contentType": "Tutorial"
} | A Practical Exercise of Atlas Device SDK for Web With Sync (Preview) | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/products/atlas/iot-mongodb-powering-time-series-analysis-household-power-consumption | created | # IoT and MongoDB: Powering Time Series Analysis of Household Power Consumption
IoT (Internet of Things) systems are increasingly becoming a part of our daily lives, offering smart solutions for homes and businesses.
This article will explore a practical case study on household power consumption, showcasing how MongoDB's time series collections can be leveraged to store, manage, and analyze data generated by IoT devices efficiently.
## Time series collections
Time series collections in MongoDB effectively store time series data — a sequence of data points analyzed to observe changes over time.
Time series collections provide the following benefits:
- Reduced complexity for working with time series data
- Improved query efficiency
- Reduced disk usage
- Reduced I/O for read operations
- Increased WiredTiger cache usage
Generally, time series data is composed of the following elements:
- The timestamp of each data point
- Metadata (also known as the source), which is a label or tag that uniquely identifies a series and rarely changes
- Measurements (also known as metrics or values), representing the data points tracked at increments in time — generally key-value pairs that change over time
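At the database level, a time series collection is created by passing a `timeseries` option to `createCollection`. Here is a minimal sketch using the MongoDB Shell; the collection and field names are illustrative and match the schema we will define later in this article:

```javascript
// mongosh: create a time series collection with a one-minute sampling granularity.
db.createCollection("powerconsumptions", {
  timeseries: {
    timeField: "timestamp",  // required: the field that stores the date of each measurement
    granularity: "minutes"   // matches the dataset's one-minute sampling rate
  }
});
```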
## Case study: household electric power consumption
This case study focuses on analyzing the data set with over two million data points of household electric power consumption, with a one-minute sampling rate over almost four years.
The dataset includes the following information:
- **date**: Date in format dd/mm/yyyy
- **time**: Time in format hh:mm:ss
- **global_active_power**: Household global minute-averaged active power (in kilowatt)
- **global_reactive_power**: Household global minute-averaged reactive power (in kilowatt)
- **voltage**: Minute-averaged voltage (in volt)
- **global_intensity**: Household global minute-averaged current intensity (in ampere)
- **sub_metering_1**: Energy sub-metering No. 1 (in watt-hour of active energy); corresponds to the kitchen, containing mainly a dishwasher, an oven, and a microwave (hot plates are not electric but gas-powered)
- **sub_metering_2**: Energy sub-metering No. 2 (in watt-hour of active energy); corresponds to the laundry room, containing a washing machine, a tumble drier, a refrigerator, and a light.
- **sub_metering_3**: Energy sub-metering No. 3 (in watt-hour of active energy); corresponds to an electric water heater and an air conditioner
## Schema modeling
To define and model our time series collection, we will use the Mongoose library. Mongoose, an Object Data Modeling (ODM) library for MongoDB, is widely used in the Node.js ecosystem for its ability to provide a straightforward way to model our application data.
The schema will include:
- **timestamp:** A combination of the “date” and “time” fields from the dataset.
- **global_active_power**: A numerical representation from the dataset.
- **global_reactive_power**: A numerical representation from the dataset.
- **voltage**: A numerical representation from the dataset.
- **global_intensity**: A numerical representation from the dataset.
- **sub_metering_1**: A numerical representation from the dataset.
- **sub_metering_2**: A numerical representation from the dataset.
- **sub_metering_3**: A numerical representation from the dataset.
To configure the collection as a time series collection, an additional “**timeseries**” configuration with “**timeField**” and “**granularity**” properties is necessary. The “**timeField**” will use our schema’s “**timestamp**” property, and “**granularity**” will be set to “minutes” to match the dataset's sampling rate.
Additionally, an index on the “timestamp” field will be created to enhance query performance — note that you can query a time series collection the same way you query a standard MongoDB collection.
The resulting schema is structured as follows:
```javascript
const { Schema, model } = require('mongoose');
const powerConsumptionSchema = new Schema(
{
timestamp: { type: Date, index: true },
global_active_power: { type: Number },
global_reactive_power: { type: Number },
voltage: { type: Number },
global_intensity: { type: Number },
sub_metering_1: { type: Number },
sub_metering_2: { type: Number },
sub_metering_3: { type: Number },
},
{
timeseries: {
timeField: 'timestamp',
granularity: 'minutes',
},
}
);
const PowerConsumptions = model('PowerConsumptions', powerConsumptionSchema);
module.exports = PowerConsumptions;
```
For further details on creating time series collections, refer to MongoDB's official time series documentation.
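Because a time series collection is queried like any other collection, the stored measurements can also be rolled up with a regular aggregation. As a small sketch, assuming the `PowerConsumptions` model above, this computes the average household voltage per day:

```javascript
// Average voltage per calendar day, most recent days first.
const dailyVoltage = await PowerConsumptions.aggregate([
  {
    $group: {
      _id: { $dateTrunc: { date: '$timestamp', unit: 'day' } },
      avgVoltage: { $avg: '$voltage' },
    },
  },
  { $sort: { _id: -1 } },
]);
```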
## Inserting data to MongoDB
The dataset is provided as a .txt file, which is not directly usable with MongoDB. To import this data into our MongoDB database, we need to preprocess it so that it aligns with our database schema design.
This can be accomplished by performing the following steps:
1. Connect to MongoDB.
2. Load data from the .txt file.
3. Normalize the data and split the content into lines.
4. Parse the lines into structured objects.
5. Transform the data to match our MongoDB schema model.
6. Filter out invalid data.
7. Insert the final data into MongoDB in chunks.
Here is the Node.js script that automates these steps:
```javascript
// Load environment variables from .env file
require('dotenv').config();
// Import required modules
const fs = require('fs');
const mongoose = require('mongoose');
const PowerConsumptions = require('./models/power-consumption');
// Connect to MongoDB and process the data file
const processData = async () => {
try {
// Connect to MongoDB using the connection string from environment variables
await mongoose.connect(process.env.MONGODB_CONNECTION_STRING);
// Define the file path for the data source
const filePath = 'Household_Power_Consumption.txt';
// Read data file
const rawFileContent = fs.readFileSync(filePath, 'utf8');
// Normalize line endings and split the content into lines
const lines = rawFileContent.replace(/\r\n/g, '\n').replace(/\r/g, '\n').trim().split('\n');
// Extract column headers
const headers = lines[0].split(';').map((header) => header.trim());
// Parse the lines into structured objects
const parsedRecords = lines.slice(1).map((line) => {
const values = line.split(';').map((value) => value.trim());
return headers.reduce((object, header, index) => {
object[header] = values[index];
return object;
}, {});
});
// Transform and prepare data for insertion
const transformedRecords = parsedRecords.map((item) => {
const [day, month, year] = item.Date.split('/').map((num) => parseInt(num, 10));
const [hour, minute, second] = item.Time.split(':').map((num) => parseInt(num, 10));
const dateObject = new Date(year, month - 1, day, hour, minute, second);
return {
timestamp: dateObject.toISOString(),
global_active_power: parseFloat(item.Global_active_power),
global_reactive_power: parseFloat(item.Global_reactive_power),
voltage: parseFloat(item.Voltage),
global_intensity: parseFloat(item.Global_intensity),
sub_metering_1: parseFloat(item.Sub_metering_1),
sub_metering_2: parseFloat(item.Sub_metering_2),
sub_metering_3: parseFloat(item.Sub_metering_3),
};
});
// Filter out invalid data
const finalData = transformedRecords.filter(
(item) =>
item.timestamp !== 'Invalid Date' &&
!isNaN(item.global_active_power) &&
!isNaN(item.global_reactive_power) &&
!isNaN(item.voltage) &&
!isNaN(item.global_intensity) &&
!isNaN(item.sub_metering_1) &&
!isNaN(item.sub_metering_2) &&
!isNaN(item.sub_metering_3)
);
// Insert final data into the database in chunks of 1000
const chunkSize = 1000;
for (let i = 0; i < finalData.length; i += chunkSize) {
const chunk = finalData.slice(i, i + chunkSize);
await PowerConsumptions.insertMany(chunk);
}
console.log('Data processing and insertion completed.');
} catch (error) {
console.error('An error occurred:', error);
}
};
// Call the processData function
processData();
```
Before you start the script, you need to make sure that your environment variables are set up correctly. To do this, create a file named “.env” in the root folder, and add a line for “MONGODB_CONNECTION_STRING”, which is your link to the MongoDB database.
The content of the .env file should look like this:
```javascript
MONGODB_CONNECTION_STRING = 'mongodb+srv://{{username}}:{{password}}@{{your_cluster_url}}/{{your_database}}?retryWrites=true&w=majority'
```
For more details on constructing your connection string, refer to the [official MongoDB documentation.
## Visualization with MongoDB Atlas Charts
Once the data has been inserted into our MongoDB time series collection, MongoDB Atlas Charts can be used to effortlessly connect to and visualize the data.
In order to connect and use MongoDB Atlas Charts, we should:
1. Establish a connection to the time series collection as a data source.
2. Associate the desired fields with the appropriate X and Y axes.
3. Implement filters as necessary to refine the data displayed.
4. Explore the visualizations provided by Atlas Charts to gain insights.
Join the MongoDB Developer Community to share your experiences, ask questions, and collaborate with fellow enthusiasts. Whether you are seeking advice, sharing your latest project, or exploring innovative uses of MongoDB, the community is a great place to continue the conversation.
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt50c50df158186ba2/65f8b9fad467d26c5f0bbf14/image1.png | md | {
"tags": [
"Atlas",
"JavaScript"
],
"pageDescription": "",
"contentType": "Tutorial"
} | IoT and MongoDB: Powering Time Series Analysis of Household Power Consumption | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/languages/java/quarkus-eclipse-jnosql | created | # Create a Java REST API with Quarkus and Eclipse JNoSQL for MongoDB
## Introduction
In this tutorial, you will learn how to create a RESTful API using Quarkus, a framework for building Java applications,
and integrate it with Eclipse JNoSQL to work with MongoDB. We will create a simple API to manage developer records.
Combining Quarkus with Eclipse JNoSQL allows you to work with NoSQL databases using a unified API, making switching
between different NoSQL database systems easier.
## Prerequisites
For this tutorial, you’ll need:
- Java 17.
- Maven.
- A MongoDB cluster.
- Docker (Option 1)
- MongoDB Atlas (Option 2)
You can use the following Docker command to start a standalone MongoDB instance:
```shell
docker run --rm -d --name mongodb-instance -p 27017:27017 mongo
```
Or you can use MongoDB Atlas and try the M0 free tier to deploy your cluster.
## Create a Quarkus project
- Visit the Quarkus Code Generator.
- Configure your project by selecting the desired options, such as the group and artifact ID.
- Add the necessary dependencies to your project. For this tutorial, we will add:
- JNoSQL Document MongoDB [quarkus-jnosql-document-mongodb]
- RESTEasy Reactive [quarkus-resteasy-reactive]
- RESTEasy Reactive Jackson [quarkus-resteasy-reactive-jackson]
- OpenAPI [quarkus-smallrye-openapi]
- Generate the project, download the ZIP file, and extract it to your preferred location. Remember that the file
structure may vary with different Quarkus versions, but this should be fine for the tutorial. The core focus will be
modifying the `pom.xml` file and source code, which remains relatively consistent across versions. Any minor
structural differences should not hinder your progress, and you can refer to version-specific documentation if needed
for a seamless learning experience.
At this point, your `pom.xml` file should look like this:
```xml
<dependencies>
    <dependency>
        <groupId>io.quarkus</groupId>
        <artifactId>quarkus-resteasy-reactive-jackson</artifactId>
    </dependency>
    <dependency>
        <groupId>io.quarkiverse.jnosql</groupId>
        <artifactId>quarkus-jnosql-document-mongodb</artifactId>
        <version>1.0.5</version>
    </dependency>
    <dependency>
        <groupId>io.quarkus</groupId>
        <artifactId>quarkus-smallrye-openapi</artifactId>
    </dependency>
    <dependency>
        <groupId>io.quarkus</groupId>
        <artifactId>quarkus-resteasy-reactive</artifactId>
    </dependency>
    <dependency>
        <groupId>io.quarkus</groupId>
        <artifactId>quarkus-arc</artifactId>
    </dependency>
    <dependency>
        <groupId>io.quarkus</groupId>
        <artifactId>quarkus-junit5</artifactId>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>io.rest-assured</groupId>
        <artifactId>rest-assured</artifactId>
        <scope>test</scope>
    </dependency>
</dependencies>
```
By default, `quarkus-jnosql-document-mongodb` is in version `1.0.5`, but the latest release is `3.2.2.1`. You should update your `pom.xml` to use the latest version:
```xml
<dependency>
    <groupId>io.quarkiverse.jnosql</groupId>
    <artifactId>quarkus-jnosql-document-mongodb</artifactId>
    <version>3.2.2.1</version>
</dependency>
```
## Database configuration
Before you dive into the implementation, it’s essential to configure your MongoDB database properly. In MongoDB, you
must often set up credentials and specific configurations to connect to your database instance. Eclipse JNoSQL provides
a flexible configuration mechanism that allows you to manage these settings efficiently.
You can find detailed configurations and setups for various databases, including MongoDB, in the Eclipse JNoSQL GitHub
repository.
To run your application locally, you can configure the database name and properties in your application’s
`application.properties` file. Open this file and add the following line to set the database name:
```properties
quarkus.mongodb.connection-string=mongodb://localhost:27017
jnosql.document.database=school
```
This configuration will enable your application to:
- Use the “school” database.
- Connect to the MongoDB cluster available at the provided connection string.
In production, make sure to enable access control and enforce authentication. See the security checklist for more
details.
It’s worth mentioning that Eclipse JNoSQL leverages Eclipse MicroProfile Configuration, which is designed to facilitate
the implementation of twelve-factor applications, especially in configuration management. It means you can override
properties through environment variables, allowing you to switch between different configurations for development,
testing, and production without modifying your code. This flexibility is a valuable aspect of building robust and easily
deployable applications.
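For example, following the standard MicroProfile Config mapping (uppercase the property name and replace non-alphanumeric characters with underscores), the two properties above could be overridden at deployment time like this; the connection string value is only a placeholder:
```shell
# Overrides quarkus.mongodb.connection-string and jnosql.document.database
# without touching application.properties (placeholder values).
export QUARKUS_MONGODB_CONNECTION_STRING="mongodb+srv://user:password@cluster0.example.mongodb.net"
export JNOSQL_DOCUMENT_DATABASE="school"
```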
Now that your database is configured, you can proceed with the tutorial and create your RESTful API with Quarkus and
Eclipse JNoSQL for MongoDB.
## Create a developer entity
In this step, we will create a simple `Developer` entity using Java records. Create a new record in the `src/main/java`
directory named `Developer`.
```java
import jakarta.nosql.Column;
import jakarta.nosql.Entity;
import jakarta.nosql.Id;
import java.time.LocalDate;
import java.util.Objects;
import java.util.UUID;
@Entity
public record Developer(
@Id String id,
@Column String name,
@Column LocalDate birthday
) {
public static Developer newDeveloper(String name, LocalDate birthday) {
Objects.requireNonNull(name, "name is required");
Objects.requireNonNull(birthday, "birthday is required");
return new Developer(
UUID.randomUUID().toString(),
name,
birthday);
}
public Developer update(String name, LocalDate birthday) {
Objects.requireNonNull(name, "name is required");
Objects.requireNonNull(birthday, "birthday is required");
return new Developer(
this.id(),
name,
birthday);
}
}
```
## Create a REST API
Now, let’s create a RESTful API to manage developer records. Create a new class in `src/main/java`
named `DevelopersResource`.
```java
import jakarta.inject.Inject;
import jakarta.nosql.document.DocumentTemplate;
import jakarta.ws.rs.*;
import jakarta.ws.rs.core.MediaType;
import jakarta.ws.rs.core.Response;
import java.time.LocalDate;
import java.util.List;
@Path("developers")
@Consumes({MediaType.APPLICATION_JSON})
@Produces({MediaType.APPLICATION_JSON})
public class DevelopersResource {
@Inject
DocumentTemplate template;
@GET
public List<Developer> listAll(@QueryParam("name") String name) {
if (name == null) {
return template.select(Developer.class).result();
}
return template.select(Developer.class)
.where("name")
.like(name)
.result();
}
public record NewDeveloperRequest(String name, LocalDate birthday) {
}
@POST
public Developer add(NewDeveloperRequest request) {
var newDeveloper = Developer.newDeveloper(request.name(), request.birthday());
return template.insert(newDeveloper);
}
@Path("{id}")
@GET
public Developer get(@PathParam("id") String id) {
return template.find(Developer.class, id)
.orElseThrow(() -> new WebApplicationException(Response.Status.NOT_FOUND));
}
public record UpdateDeveloperRequest(String name, LocalDate birthday) {
}
@Path("{id}")
@PUT
public Developer update(@PathParam("id") String id, UpdateDeveloperRequest request) {
var developer = template.find(Developer.class, id)
.orElseThrow(() -> new WebApplicationException(Response.Status.NOT_FOUND));
var updatedDeveloper = developer.update(request.name(), request.birthday());
return template.update(updatedDeveloper);
}
@Path("{id}")
@DELETE
public void delete(@PathParam("id") String id) {
template.delete(Developer.class, id);
}
}
```
## Test the REST API
Now that we've created our RESTful API for managing developer records, it's time to put it to the test. We'll
demonstrate how to interact with the API using various HTTP requests and command-line tools.
### Start the project:
```shell
./mvnw compile quarkus:dev
```
### Create a new developer with POST
You can use the `POST` request to create a new developer record. We'll use `curl` for this demonstration:
```shell
curl -X POST "http://localhost:8080/developers" -H 'Content-Type: application/json' -d '{"name": "Max", "birthday": "2022-05-01"}'
```
This `POST` request sends a JSON payload with the developer’s name and birthday to the API endpoint. You’ll receive a
response with the details of the newly created developer.
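The exact response depends on the id generated on your machine, but it should look similar to this (the id below is just an example):
```json
{"id": "a6905449-4523-48b6-bcd8-426128014582", "name": "Max", "birthday": "2022-05-01"}
```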
### Read the developers with GET
To retrieve a list of developers, you can use the `GET` request:
```shell
curl http://localhost:8080/developers
```
This `GET` request returns a list of all developers stored in the database.
To fetch details of a specific developer, provide their unique id in the URL:
```shell
curl http://localhost:8080/developers/a6905449-4523-48b6-bcd8-426128014582
```
This request will return the developer’s information associated with the provided id.
### Update a developer with PUT
You can update a developer’s information using the `PUT` request:
```shell
curl -X PUT "http://localhost:8080/developers/a6905449-4523-48b6-bcd8-426128014582" -H 'Content-Type: application/json' -d '{"name": "Owen", "birthday": "2022-05-01"}'
```
In this example, we update the developer with the given id by providing a new name and birthday in the JSON payload.
### Delete a developer with DELETE
Finally, to delete a developer record, use the DELETE request:
```shell
curl -X DELETE "http://localhost:8080/developers/a6905449-4523-48b6-bcd8-426128014582"
```
This request removes the developer entry associated with the provided `id` from the database.
Following these simple steps, you can interact with your RESTful API to manage developer records effectively. These HTTP
requests allow you to create, read, update, and delete developer entries, providing full control and functionality for
your API.
Explore and adapt these commands to suit your specific use cases and requirements.
## Using OpenAPI to test and explore your API
OpenAPI is a powerful tool that allows you to test and explore your API visually. You can access the OpenAPI
documentation for your Quarkus project at the following URL:
```html
http://localhost:8080/q/swagger-ui/
```
OpenAPI provides a user-friendly interface that displays all the available endpoints and their descriptions and allows
you to make API requests directly from the browser. It’s an essential tool for API development because it:
1. Facilitates API testing: You can send requests and receive responses directly from the OpenAPI interface, making it easy
to verify the functionality of your API.
2. Generates documentation: This is crucial for developers who need to understand how to use your API effectively.
3. Allows for exploration: You can explore all the available endpoints, their input parameters, and expected responses,
which helps you understand the API’s capabilities.
4. Assists in debugging: It shows request and response details, making identifying and resolving issues easier.
In conclusion, using OpenAPI alongside your RESTful API simplifies the testing and exploration process, improves
documentation, and enhances the overall developer experience when working with your API. It’s an essential tool in
modern API development practices.
## Conclusion
In this tutorial, you’ve gained valuable insights into building a REST API using Quarkus and seamlessly integrating it
with Eclipse JNoSQL for MongoDB. You now can efficiently manage developer records through a unified API, streamlining
your NoSQL database operations. However, to take your MongoDB experience even further and leverage the full power of
MongoDB Atlas, consider migrating your application to MongoDB Atlas.
MongoDB Atlas offers a powerful document model, enabling you to store data as JSON-like objects that closely resemble
your application code. With MongoDB Atlas, you can harness your preferred tools and programming languages. Whether you
manage your clusters through the MongoDB CLI for Atlas or embrace infrastructure-as-code (IaC) tools like Terraform or
CloudFormation, MongoDB Atlas provides a seamless and scalable solution for your database needs.
Ready to explore the benefits of MongoDB Atlas? Get started now by trying MongoDB Atlas.
Access the source code used in this tutorial.
Any questions? Come chat with us in the MongoDB Community Forum.
| md | {
"tags": [
"Java",
"MongoDB",
"Quarkus"
],
"pageDescription": "Learn to create a REST API with Quarkus and Eclipse JNoSQL for MongoDB",
"contentType": "Tutorial"
} | Create a Java REST API with Quarkus and Eclipse JNoSQL for MongoDB | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/cluster-to-cluster | created | # Efficient Sync Solutions: Cluster-to-Cluster Sync and Live Migration to Atlas
The challenges that are raised in modern business contexts are increasingly complex. These challenges range from the ability to minimize downtime during migrations to adopting efficient tools for transitioning from relational to non-relational databases, and from implementing resilient architectures that ensure high availability to the ability to scale horizontally, allowing large amounts of data to be efficiently managed and queried.
Two of the main challenges, which will be covered in this article, are:
- The need to create resilient IT infrastructures that can ensure business continuity or minimal downtime even in critical situations, such as the loss of a data center.
- Conducting migrations from one infrastructure to another without compromising operations.
It is in this context that MongoDB stands out by offering innovative solutions such as MongoSync and live migrate.
Ensuring business continuity with MongoSync: an approach to disaster recovery
-----------------------------------------------------------------------------
MongoDB Atlas, with its capabilities and remarkable flexibility, offers two distinct approaches to implementing business continuity strategies. These two strategies are:
- Creating a cluster with a geographic distribution of nodes.
- The implementation of two clusters in different regions synchronized via MongoSync.
In this section, we will explore the second point (i.e., the implementation of two clusters in different regions synchronized via MongoSync) in more detail.
What exactly is MongoSync? For a correct definition, we can refer to the official documentation:
"The `mongosync` binary is the primary process used in Cluster-to-Cluster Sync. `mongosync` migrates data from one cluster to another and can keep the clusters in continuous sync."
This tool performs the following operations:
- It migrates data from one cluster to another.
- It keeps the clusters in continuous sync.
Let's make this more concrete with an example:
- Initially, the situation looks like this for the production cluster and the disaster recovery cluster:
![Image of the production cluster and the disaster recovery cluster, located in different data centers.][1]

The commands described below have been tested on the CentOS 7 operating system.
Let's proceed with the configuration of `mongosync` by defining a configuration file and a service:
```
vi /etc/mongosync.conf
```
You can copy and paste the following configuration into this file, using your own connection strings. You can also test with two Atlas clusters, which must be M10 tier or higher. For more details on how to get the connection strings from your Atlas cluster, you can consult the documentation.
```
cluster0: "mongodb+srv://test_u:test_p@cluster0.*****.mongodb.net/?retryWrites=true&w=majority"
cluster1: "mongodb+srv://test_u:test_p@cluster1.*****.mongodb.net/?retryWrites=true&w=majority"
logPath: "/data/log/mongosync"
verbosity: "INFO"
```
>Generally, this step is performed on a Linux machine by system administrators. Although the step is optional, it is recommended to implement it in a production environment.
Next, you will be able to create a service named mongosync.service.
```
vi /usr/lib/systemd/system/mongosync.service
```
This is what your service file should look like.
```
[Unit]
Description=Cluster-to-Cluster Sync
Documentation=https://www.mongodb.com/docs/cluster-to-cluster-sync/
[Service]
User=root
Group=root
ExecStart=/usr/local/bin/mongosync --config /etc/mongosync.conf
[Install]
WantedBy=multi-user.target
```
Reload all unit files:
```
systemctl daemon-reload
```
Now, we can start the service:
```
systemctl start mongosync
```
We can also check whether the service has been started correctly:
```
systemctl status mongosync
```
Output:
```
mongosync.service - Cluster-to-Cluster Sync
Loaded: loaded (/usr/lib/systemd/system/mongosync.service; disabled; vendor preset: disabled)
Active: active (running) since dom 2024-04-14 21:45:45 CEST; 4s ago
Docs: https://www.mongodb.com/docs/cluster-to-cluster-sync/
Main PID: 1573 (mongosync)
CGroup: /system.slice/mongosync.service
└─1573 /usr/local/bin/mongosync --config /etc/mongosync.conf
apr 14 21:45:45 mongosync.mongodb.int systemd[1]: Started Cluster-to-Cluster Sync.
```
> If a service is not created and executed, in a more general way, you can start the process in the following way:
> `mongosync --config mongosync.conf `
After starting the service, verify that it is in the idle state:
```
curl localhost:27182/api/v1/progress -XGET | jq
```
Output:
```
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 191 100 191 0 0 14384 0 --:--:-- --:--:-- --:--:-- 14692
{
"progress": {
"state": "IDLE",
"canCommit": false,
"canWrite": false,
"info": null,
"lagTimeSeconds": null,
"collectionCopy": null,
"directionMapping": null,
"mongosyncID": "coordinator",
"coordinatorID": ""
}
}
```
We can run the synchronization:
```
curl localhost:27182/api/v1/start -XPOST \
--data '
{
"source": "cluster0",
"destination": "cluster1",
"reversible": true,
"enableUserWriteBlocking": true
} '
```
Output:
```
{"success":true}
```
We can also keep track of the synchronization status:
```
curl localhost:27182/api/v1/progress -XGET | jq
```
Output:
```
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 502 100 502 0 0 36001 0 --:--:-- --:--:-- --:--:-- 38615
{
"progress": {
"state": "RUNNING",
"canCommit": false,
"canWrite": false,
"info": "collection copy",
"lagTimeSeconds": 54,
"collectionCopy": {
"estimatedTotalBytes": 390696597,
"estimatedCopiedBytes": 390696597
},
"directionMapping": {
"Source": "cluster0: cluster0.*****.mongodb.net",
"Destination": "cluster1: cluster1.*****.mongodb.net"
},
"mongosyncID": "coordinator",
"coordinatorID": "coordinator"
}
}
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 510 100 510 0 0 44270 0 --:--:-- --:--:-- --:--:-- 46363
{
"progress": {
"state": "RUNNING",
"canCommit": true,
"canWrite": false,
"info": "change event application",
"lagTimeSeconds": 64,
"collectionCopy": {
"estimatedTotalBytes": 390696597,
"estimatedCopiedBytes": 390696597
},
"directionMapping": {
"Source": "cluster0: cluster0.*****.mongodb.net",
"Destination": "cluster1: cluster1.*****.mongodb.net"
},
"mongosyncID": "coordinator",
"coordinatorID": "coordinator"
}
}
```
At this time, the DR environment is aligned with the production environment and will also maintain synchronization for the next operations:
![Image of two clusters located in different datacenters, aligned and remained synchronized via mongosync. Mongosync runs on an on-premises server.][2]
```
Atlas atlas-qsd40w-shard-0 [primary] test> show dbs
admin 140.00 KiB
config 276.00 KiB
local 524.00 KiB
sample_airbnb 52.09 MiB
sample_analytics 9.44 MiB
sample_geospatial 1.02 MiB
sample_guides 40.00 KiB
sample_mflix 109.01 MiB
sample_restaurants 5.73 MiB
sample_supplies 976.00 KiB
sample_training 41.20 MiB
sample_weatherdata 2.39 MiB
```
And our second cluster is now in sync with the following data.
```
Atlas atlas-lcu71y-shard-0 [primary] test> show dbs
admin 172.00 KiB
config 380.00 KiB
local 427.22 MiB
mongosync_reserved_for_internal_use 420.00 KiB
sample_airbnb 53.06 MiB
sample_analytics 9.55 MiB
sample_geospatial 1.40 MiB
sample_guides 40.00 KiB
sample_mflix 128.38 MiB
sample_restaurants 6.47 MiB
sample_supplies 1.03 MiB
sample_training 47.21 MiB
sample_weatherdata 2.61 MiB
```
Armed with what we've discussed so far, we could ask a last question like:
*Is it possible to take advantage of the disaster recovery environment in some way, or should we just let it synchronize?*
By making the appropriate `mongosync` configurations --- for example, by setting the "buildIndexes" option to false and omitting the "enableUserWriteBlocking" parameter (which is set to false by default) --- we can take advantage of the limitation regarding non-synchronization of users and roles to create read-only users on the destination cluster. Because no writes can be performed through those users, consistency between the origin and destination clusters is preserved, and we can use the disaster recovery environment to build the indexes needed to optimize slow queries identified in the production environment.
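As a side note, for a disaster recovery scenario you normally leave the synchronization running. If you ever need to perform a planned cutover to the destination cluster, `mongosync` also exposes a commit endpoint; a minimal sketch, to be called only once `canCommit` is `true` in the progress output:
```
curl localhost:27182/api/v1/commit -XPOST --data '{ }'
```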
Live migrate to Atlas: minimizing downtime
------------------------------------------
Live migrate is a tool that allows users to perform migrations to MongoDB Atlas and more specifically, as mentioned by the official documentation, is a process that uses `mongosync` as the underlying data migration tool, enabling faster live migrations with less downtime if both the source and destination clusters are running MongoDB 6.0.8 or later.
So, what is the added value of this tool compared to `mongosync`?
It brings two advantages:
- You can avoid the need to provision and configure a server to host `mongosync`.
- You have the ability to migrate from previous versions, as indicated in the migration path.
![Live migrate to Atlas migration path.][3]
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltf8bd745448a43713/663e5cdea2616e0474ff1789/Screenshot_2024-05-10_at_1.40.54_PM.png
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltd0c6cb1fbf15ed87/663e5d0fa2616e5e82ff178f/Screenshot_2024-05-10_at_1.41.09_PM.png
[3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt9ace7dcef1c2e7e9/663e67322ff97d34907049ac/Screenshot_2024-05-10_at_2.24.24_PM.png | md | {
"tags": [
"Atlas"
],
"pageDescription": "Learn about how to enable cluster-to-cluster sync",
"contentType": "Tutorial"
} | Efficient Sync Solutions: Cluster-to-Cluster Sync and Live Migration to Atlas | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/languages/java/quarkus-pagination | created | # Introduction to Data Pagination With Quarkus and MongoDB: A Comprehensive Tutorial
## Introduction
In modern web development, managing large datasets efficiently through APIs is crucial for enhancing application
performance and user experience. This tutorial explores pagination techniques using Quarkus and MongoDB, a robust
combination for scalable data delivery. Through a live coding session, we'll delve into different pagination methods and
demonstrate how to implement these in a Quarkus-connected MongoDB environment. This guide empowers developers to
optimize REST APIs for effective data handling.
You can find all the code presented in this tutorial in
the GitHub repository:
```bash
git clone git@github.com:mongodb-developer/quarkus-pagination-sample.git
```
## Prerequisites
For this tutorial, you'll need:
- Java 21.
- Maven.
- A MongoDB cluster.
- MongoDB Atlas (Option 1)
- Docker (Option 2)
You can use the following Docker command to start a standalone MongoDB instance:
```bash
docker run --rm -d --name mongodb-instance -p 27017:27017 mongo
```
Or you can use MongoDB Atlas and try the M0 free tier to deploy your cluster.
## Create a Quarkus project
- Visit the Quarkus Code Generator.
- Configure your project by selecting the desired options, such as the group and artifact ID.
- Add the necessary dependencies to your project. For this tutorial, we will add:
- JNoSQL Document MongoDB [quarkus-jnosql-document-mongodb].
- RESTEasy Reactive [quarkus-resteasy-reactive].
- RESTEasy Reactive Jackson [quarkus-resteasy-reactive-jackson].
- OpenAPI [quarkus-smallrye-openapi].
> Note: If you cannot find some dependencies, you can add them manually in the `pom.xml`. See the file below.
- Generate the project, download the ZIP file, and extract it to your preferred location. Keep in mind that the file
structure may vary between Quarkus versions, but that is fine for this tutorial. The core focus will be modifying
the `pom.xml` file and source code, which remain relatively consistent across versions. Any minor structural
differences should not affect your progress, and you can refer to version-specific documentation if needed for a
seamless learning experience.
At this point, your pom.xml file should look like this:
```xml
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-smallrye-openapi</artifactId>
</dependency>
<dependency>
    <groupId>io.quarkiverse.jnosql</groupId>
    <artifactId>quarkus-jnosql-document-mongodb</artifactId>
    <version>3.3.0</version>
</dependency>
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-resteasy</artifactId>
</dependency>
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-resteasy-jackson</artifactId>
</dependency>
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-arc</artifactId>
</dependency>
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-junit5</artifactId>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>io.rest-assured</groupId>
    <artifactId>rest-assured</artifactId>
    <scope>test</scope>
</dependency>
```
We will work with the latest version of Quarkus alongside Eclipse JNoSQL Lite, a streamlined integration that notably
does not rely on reflection. This approach enhances performance and simplifies the configuration process, making it an
optimal choice for developers looking to maximize efficiency in their applications.
## Database configuration
Before you dive into the implementation, it's essential to configure your MongoDB database properly. In MongoDB, you
must often set up credentials and specific configurations to connect to your database instance. Eclipse JNoSQL provides
a flexible configuration mechanism that allows you to manage these settings efficiently.
You can find detailed configurations and setups for various databases, including MongoDB, in the [Eclipse JNoSQL GitHub
repository.
To run your application locally, you can configure the database name and properties in your application's
`application.properties` file. Open this file and add the following line to set the database name:
```properties
quarkus.mongodb.connection-string = mongodb://localhost
jnosql.document.database = fruits
```
This configuration will enable your application to:
- Use the "fruits" database.
- Connect to the MongoDB cluster available at the provided connection string.
In production, make sure to enable access control and enforce authentication. See the security checklist for more
details.
It's worth mentioning that Eclipse JNoSQL leverages Eclipse MicroProfile Configuration, which is designed to facilitate
the implementation of twelve-factor applications, especially in configuration management. It means you can override
properties through environment variables, allowing you to switch between different configurations for development,
testing, and production without modifying your code. This flexibility is a valuable aspect of building robust and easily
deployable applications.
Now that your database is configured, you can proceed with the tutorial and create your RESTful API with Quarkus and
Eclipse JNoSQL for MongoDB.
## Create a fruit entity
In this step, we will create a simple `Fruit` entity using Java records. Create a new class in the `src/main/java`
directory named `Fruit`.
```java
import jakarta.nosql.Column;
import jakarta.nosql.Convert;
import jakarta.nosql.Entity;
import jakarta.nosql.Id;
import org.eclipse.jnosql.databases.mongodb.mapping.ObjectIdConverter;
@Entity
public class Fruit {
@Id
@Convert(ObjectIdConverter.class)
private String id;
@Column
private String name;
public String getId() {
return id;
}
public void setId(String id) {
this.id = id;
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
@Override
public String toString() {
return "Fruit{" +
"id='" + id + '\'' +
", name='" + name + '\'' +
'}';
}
public static Fruit of(String name) {
Fruit fruit = new Fruit();
fruit.setName(name);
return fruit;
}
}
```
## Create a fruit repository
We will simplify the integration between Java and MongoDB using the Jakarta Data repository by creating an interface
that extends NoSQLRepository. The framework automatically implements this interface, enabling us to define methods for
data retrieval that integrate seamlessly with MongoDB. We will focus on implementing two types of pagination: offset
pagination represented by `Page` and keyset (cursor) pagination represented by `CursoredPage`.
Here's how we define the FruitRepository interface to include methods for both pagination strategies:
```java
import jakarta.data.Sort;
import jakarta.data.page.CursoredPage;
import jakarta.data.page.Page;
import jakarta.data.page.PageRequest;
import jakarta.data.repository.BasicRepository;
import jakarta.data.repository.Find;
import jakarta.data.repository.OrderBy;
import jakarta.data.repository.Repository;
@Repository
public interface FruitRepository extends BasicRepository<Fruit, String> {
@Find
CursoredPage<Fruit> cursor(PageRequest pageRequest, Sort<Fruit> order);
@Find
@OrderBy("name")
Page<Fruit> offSet(PageRequest pageRequest);
long countBy();
}
```
## Create setup
We'll demonstrate how to populate and manage the MongoDB database with a collection of fruit entries at the start of the
application using Quarkus. We'll ensure our database is initialized with predefined data, and we'll also handle cleanup
on application shutdown. Here's how we can structure the SetupDatabase class:
```java
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.enterprise.event.Observes;
import io.quarkus.runtime.ShutdownEvent;
import io.quarkus.runtime.StartupEvent;
import org.jboss.logging.Logger;
import java.util.List;
@ApplicationScoped
public class SetupDatabase {
private static final Logger LOGGER = Logger.getLogger(SetupDatabase.class.getName());
private final FruitRepository fruitRepository;
public SetupDatabase(FruitRepository fruitRepository) {
this.fruitRepository = fruitRepository;
}
void onStart(@Observes StartupEvent ev) {
LOGGER.info("The application is starting...");
long count = fruitRepository.countBy();
if (count > 0) {
LOGGER.info("Database already populated");
return;
}
List<Fruit> fruits = List.of(
Fruit.of("apple"),
Fruit.of("banana"),
Fruit.of("cherry"),
Fruit.of("date"),
Fruit.of("elderberry"),
Fruit.of("fig"),
Fruit.of("grape"),
Fruit.of("honeydew"),
Fruit.of("kiwi"),
Fruit.of("lemon")
);
fruitRepository.saveAll(fruits);
}
void onStop(@Observes ShutdownEvent ev) {
LOGGER.info("The application is stopping...");
fruitRepository.deleteAll(fruitRepository.findAll().toList());
}
}
```
## Create a REST API
Now, let's create a RESTful API to manage fruit records. Create a new class in `src/main/java`
named `FruitResource`.
```java
import jakarta.data.Sort;
import jakarta.data.page.PageRequest;
import jakarta.ws.rs.DefaultValue;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.QueryParam;
import jakarta.ws.rs.core.MediaType;
@Path("/fruits")
public class FruitResource {
private final FruitRepository fruitRepository;
private static final Sort<Fruit> ASC = Sort.asc("name");
private static final Sort<Fruit> DESC = Sort.desc("name");
public FruitResource(FruitRepository fruitRepository) {
this.fruitRepository = fruitRepository;
}
@Path("/offset")
@GET
@Produces(MediaType.APPLICATION_JSON)
public Iterable<Fruit> hello(@QueryParam("page") @DefaultValue("1") long page,
@QueryParam("size") @DefaultValue("2") int size) {
var pageRequest = PageRequest.ofPage(page).size(size);
return fruitRepository.offSet(pageRequest).content();
}
@Path("/cursor")
@GET
@Produces(MediaType.APPLICATION_JSON)
public Iterable<Fruit> cursor(@QueryParam("after") @DefaultValue("") String after,
@QueryParam("before") @DefaultValue("") String before,
@QueryParam("size") @DefaultValue("2") int size) {
if (!after.isBlank()) {
var pageRequest = PageRequest.ofSize(size).afterCursor(PageRequest.Cursor.forKey(after));
return fruitRepository.cursor(pageRequest, ASC).content();
} else if (!before.isBlank()) {
var pageRequest = PageRequest.ofSize(size).beforeCursor(PageRequest.Cursor.forKey(before));
return fruitRepository.cursor(pageRequest, DESC).stream().toList();
}
var pageRequest = PageRequest.ofSize(size);
return fruitRepository.cursor(pageRequest, ASC).content();
}
}
```
## Test the REST API
Now that we've created our RESTful API for managing fruit records, it's time to put it to the test. We'll
demonstrate how to interact with the API using various HTTP requests and command-line tools.
### Start the project
```bash
./mvnw compile quarkus:dev
```
### Exploring pagination with offset
We will use `curl` to learn more about pagination using the URLs provided. It is a command-line tool that is often used
to send HTTP requests. The URLs you have been given are used to access a REST API endpoint fetching fruit pages using
offset pagination. Each URL requests a different page, enabling us to observe how pagination functions via the API.
Below is how you can interact with these endpoints using the `curl` tool.
#### Fetching the first page
This command requests the first page of fruits from the server.
```bash
curl --location http://localhost:8080/fruits/offset?page=1
```
#### Fetching the second page
This command gets the next set of fruits, which is the second page.
```bash
curl --location http://localhost:8080/fruits/offset?page=2
```
#### Fetching the fifth page
By requesting the fifth page, you can see how the API responds when you request a page that might be beyond the range of
existing data.
```bash
curl --location http://localhost:8080/fruits/offset?page=5
```
### Exploring pagination with a cursor
To continue exploring cursor-based pagination with your API, using both `after` and `before` parameters provides a way
to navigate through your dataset forward and backward respectively. This method allows for flexible data retrieval,
which can be particularly useful for interfaces that allow users to move to the next or previous set of results. Here's
how you can structure your `curl` commands to use these parameters effectively:
#### Fetching the initial set of fruits
This command gets the first batch of fruits without specifying a cursor, starting from the beginning.
```bash
curl --location http://localhost:8080/fruits/cursor
```
#### Fetching fruits after "banana"
This command fetches the list of fruits that appear after "banana" in your dataset. This is useful for moving forward in
the list.
```bash
curl --location http://localhost:8080/fruits/cursor?after=banana
```
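With the sample data seeded at startup and the default page size of 2, this request should return the two fruits that come after "banana" alphabetically; the `id` values below are illustrative:
```json
[
  {"id": "662fa7c3a1b2c3d4e5f60001", "name": "cherry"},
  {"id": "662fa7c3a1b2c3d4e5f60002", "name": "date"}
]
```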
#### Fetching fruits before "date"
This command is used to go back to the set of fruits that precede "date" in the dataset. This is particularly useful for
implementing "Previous" page functionality.
```bash
curl --location http://localhost:8080/fruits/cursor?before=date
```
## Conclusion
This tutorial explored the fundamentals and implementation of pagination using Quarkus and MongoDB, demonstrating how to
manage large datasets in web applications effectively. By integrating the Jakarta Data repository with Quarkus, we
designed interfaces that streamline the interaction between Java and MongoDB, supporting offset and cursor-based
pagination techniques. We started by setting up a basic Quarkus application and configuring MongoDB connections. Then,
we demonstrated how to populate the database with initial data and ensure clean shutdown behavior.
Throughout this tutorial, we've engaged in live coding sessions, implementing and testing various pagination methods.
We've used the `curl` command to interact with the API, fetching data with no parameters, and using `after` and `before`
parameters to navigate through the dataset forward and backward. The use of cursor-based pagination, in particular,
has showcased its benefits in scenarios where datasets are frequently updated or when precise data retrieval control is
needed. This approach not only boosts performance by avoiding the common issues of offset pagination but also provides a
user-friendly way to navigate through data.
Ready to explore the benefits of MongoDB Atlas? Get started now by trying MongoDB Atlas.
Access the source code used in this tutorial.
Any questions? Come chat with us in the MongoDB Community Forum.
| md | {
"tags": [
"Java",
"MongoDB",
"Quarkus"
],
"pageDescription": "In this blog post, you'll learn how to create a RESTful API with Quarkus that supports MongoDB queries with pagination.",
"contentType": "Tutorial"
} | Introduction to Data Pagination With Quarkus and MongoDB: A Comprehensive Tutorial | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/products/atlas/getting-started-mongodb-atlas-serverless-aws-cdk-serverless-computing | created | # Getting Started With MongoDB Atlas Serverless, AWS CDK, and AWS Serverless Computing
Serverless development is a cloud computing execution model where cloud and SaaS providers dynamically manage the allocation and provisioning of servers on your behalf, dropping all the way to $0 cost when not in use. This approach allows developers to build and run applications and services without worrying about the underlying infrastructure, focusing primarily on writing code for their core product and associated business logic. Developers opt for serverless architectures to benefit from reduced operational overhead, cost efficiency through pay-per-use billing, and the ability to easily scale applications in response to real-time demand without manual intervention.
MongoDB Atlas serverless instances eliminate the cognitive load of sizing infrastructure and allow you to get started with minimal configuration, so you can focus on building your app. Simply choose a cloud region and then start building with documents that map directly to objects in your code. Your serverless database will automatically scale with your app's growth, charging only for the resources utilized. Whether you’re just getting started or already have users all over the world, Atlas provides the capabilities to power today's most innovative applications while meeting the most demanding requirements for resilience, scale, and data privacy.
In this tutorial, we will walk you through getting started to build and deploy a simple serverless app that aggregates sales data stored in a MongoDB Atlas serverless instance using AWS Lambda as our compute engine and Amazon API Gateway as our fully managed service to create a RESTful API interface. Lastly, we will show you how easy this is using our recently published AWS CDK Level 3 constructs to better incorporate infrastructure as code (IaC) and DevOps best practices into your software development life cycle (SDLC).
In this step-by-step guide, we will walk you through the entire process. We will be starting from an empty directory in an Ubuntu 20.04 LTS environment, but feel free to follow along in any supported OS that you prefer.
Let's get started!
## Setup
1. Create a MongoDB Atlas account. Already have an AWS account? Atlas supports paying for usage via the AWS Marketplace (AWS MP) without any upfront commitment — simply sign up for MongoDB Atlas via the AWS Marketplace.
2. Create a MongoDB Atlas programmatic API key (PAK)
3. Install and configure the AWS CLI and Atlas CLI in your terminal if you don’t have them already.
4. Install the latest versions of Node.js and npm.
5. Lastly, for the playground code running on the Lambda function, we will be using Python, so you will also need Python 3 and pip installed in your terminal.
## Step 1: install AWS CDK, Bootstrap, and Initialize
The AWS CDK is an open-source framework that lets you define and provision cloud infrastructure using code via AWS CloudFormation. It offers preconfigured components for easy cloud application development without the need for expertise. For more details, see the AWS CDK Getting Started guide.
You can install CDK using npm:
```
sudo npm install -g aws-cdk
```
Next, we need to “bootstrap” our AWS environment to create the necessary resources to manage the CDK apps (see AWS docs for full details). Bootstrapping is the process of preparing an environment for deployment. Bootstrapping is a one-time action that you must perform for every environment that you deploy resources into.
The `cdk bootstrap` command creates an Amazon S3 bucket for storing files, AWS IAM roles, and a CloudFormation stack to manage these scaffolding resources:
```
cdk bootstrap aws://ACCOUNT_NUMBER/REGION
```
Now, we can initialize a new CDK app using TypeScript. This is done using the cdk init command:
```
cdk init -l typescript
```
This command initializes a new CDK app in TypeScript language. It creates a new directory with the necessary files and directories for a CDK app. When you initialize a new AWS CDK app, the CDK CLI sets up a project structure that organizes your application's code into a conventional layout. This layout includes bin and lib directories, among others, each serving a specific purpose in the context of a CDK app. Here's what each of these directories is for:
- The **bin directory** contains the entry point of your CDK application. It's where you define which stacks from your application should be synthesized and deployed. Typically, this directory will have a .ts file (with the same name as your project or another meaningful name you choose) that imports stacks from the lib directory and initializes them.
The bin directory's script is the starting point that the CDK CLI executes to synthesize CloudFormation templates from your definitions. It acts as the orchestrator, telling the CDK which stacks to include in the synthesis process.
- The **lib directory** is where the core of your application's cloud infrastructure code lives. It's intended for defining CDK stacks and constructs, which are the building blocks of your AWS infrastructure. Typically, this directory will have a .ts file (with the same name as your project or another meaningful name you choose).
The lib directory contains the actual definitions of those stacks — what resources they include, how those resources are configured, and how they interact. You can define multiple stacks in the lib directory and selectively instantiate them in the bin directory as needed.
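For example, assuming the project directory is named `cloudshell-user` (the name used in the code samples below), the generated layout looks roughly like this, with minor variations between CDK versions:
```
cloudshell-user/
├── bin/
│   └── cloudshell-user.ts
├── lib/
│   └── cloudshell-user-stack.ts
├── test/
├── cdk.json
├── package.json
└── tsconfig.json
```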
## Step 2: create and deploy the MongoDB Atlas Bootstrap Stack
The `atlas-cdk-bootstrap` CDK construct was designed to facilitate the smooth configuration and setup of the MongoDB Atlas CDK framework. This construct simplifies the process of preparing your environment to run the Atlas CDK by automating essential configurations and resource provisioning.
Key features:
- User provisioning: The atlas-cdk-bootstrap construct creates a dedicated execution role within AWS Identity and Access Management (IAM) for executing CloudFormation Extension resources. This helps maintain security and isolation for Atlas CDK operations.
- Programmatic API key management: It sets up an AWS Secrets Manager to securely store and manage programmatic API Keys required for interacting with the Atlas services. This ensures sensitive credentials are protected and can be easily rotated.
- CloudFormation Extensions activation: This construct streamlines the activation of CloudFormation public extensions essential for the MongoDB Atlas CDK. It provides a seamless interface for users to specify the specific CloudFormation resources that need to be deployed and configured.
With `atlas-cdk-bootstrap`, you can accelerate the onboarding process for Atlas CDK and reduce the complexity of environment setup. By automating user provisioning, credential management, and resource activation, this CDK construct empowers developers to focus on building and deploying applications using the MongoDB Atlas CDK without getting bogged down by manual configuration tasks.
To use the atlas-cdk-bootstrap, we will first need a specific CDK package called `awscdk-resources-mongodbatlas` (see more details on this package on our
Construct Hub page). Let's install it:
```
npm install awscdk-resources-mongodbatlas
```
To confirm that this package was installed correctly and to find its version number, see the package.json file.
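For instance, the `dependencies` section of `package.json` should now contain entries similar to the following; the exact version numbers will differ:
```
"dependencies": {
  "aws-cdk-lib": "2.133.0",
  "awscdk-resources-mongodbatlas": "^3.2.0",
  "constructs": "^10.0.0",
  "source-map-support": "^0.5.21"
}
```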
Next, in the .ts file in the **bin directory** (typically the same name as your project, i.e., `cloudshell-user.ts`), delete the entire contents and update with:
```javascript
#!/usr/bin/env node
import 'source-map-support/register';
import * as cdk from 'aws-cdk-lib';
import { AtlasBootstrapExample } from '../lib/cloudshell-user-stack'; //replace "cloudshell-user" with name of the .ts file in the lib directory
const app = new cdk.App();
const env = { region: process.env.CDK_DEFAULT_REGION, account: process.env.CDK_DEFAULT_ACCOUNT };
new AtlasBootstrapExample(app, 'mongodb-atlas-bootstrap-stack', { env });
```
Next, in the .ts file in the **lib directory** (typically the same name as your project concatenated with “-stack”, i.e., `cloudshell-user-stack.ts`), delete the entire contents and update with:
```javascript
import * as cdk from 'aws-cdk-lib'
import { Construct } from 'constructs'
import {
MongoAtlasBootstrap,
MongoAtlasBootstrapProps,
AtlasBasicResources
} from 'awscdk-resources-mongodbatlas'
export class AtlasBootstrapExample extends cdk.Stack {
constructor (scope: Construct, id: string, props?: cdk.StackProps) {
super(scope, id, props)
const roleName = 'MongoDB-Atlas-CDK-Excecution'
const mongoDBProfile = 'development'
const bootstrapProperties: MongoAtlasBootstrapProps = {
roleName, secretProfile: mongoDBProfile,
typesToActivate: ['ServerlessInstance', ...AtlasBasicResources]
}
new MongoAtlasBootstrap(this, 'mongodb-atlas-bootstrap', bootstrapProperties)
}
}
```
Lastly, you can check and deploy the atlas-cdk-bootstrap CDK construct with:
```
npx cdk diff mongodb-atlas-bootstrap-stack
npx cdk deploy mongodb-atlas-bootstrap-stack
```
## Step 3: store MongoDB Atlas PAK as env variables and update AWS Secrets Manager
Now that the atlas-cdk-bootstrap CDK construct has been provisioned, we then store our previously created [MongoDB Atlas programmatic API keys in AWS Secrets Manager. For more information on how to create MongoDB Atas PAK, refer to Step 2 from our prerequisites setup.
This will allow the CloudFormation Extension execution role to provision key components including: MongoDB Atlas serverless instance, Atlas project, Atlas project IP access list, and database user.
First, we must store these secrets as environment variables:
```
export MONGO_ATLAS_PUBLIC_KEY='INPUT_YOUR_PUBLIC_KEY'
export MONGO_ATLAS_PRIVATE_KEY='INPUT_YOUR_PRIVATE_KEY'
```
Then, we can update AWS Secrets Manager with the following AWS CLI command:
```
aws secretsmanager update-secret --secret-id cfn/atlas/profile/development --secret-string "{\"PublicKey\":\"${MONGO_ATLAS_PUBLIC_KEY}\",\"PrivateKey\":\"${MONGO_ATLAS_PRIVATE_KEY}\"}"
```
## Step 4: create and deploy the atlas-serverless-basic resource CDK L3 construct
The AWS CDK Level 3 (L3) constructs are high-level abstractions that encapsulate a set of related AWS resources and configuration logic into reusable components, allowing developers to define cloud infrastructure using familiar programming languages with less code. Developers use L3 constructs to streamline the process of setting up complex AWS and MongoDB Atlas services, ensuring best practices, reducing boilerplate code, and enhancing productivity through simplified syntax.
The MongoDB Atlas AWS CDK L3 construct for Atlas Serverless Basic provides developers with an easy and idiomatic way to deploy MongoDB Atlas serverless instances within AWS environments. Under the hood, this construct abstracts away the intricacies of configuring and deploying MongoDB Atlas serverless instances and related infrastructure on your behalf.
Next, we then update our .ts file in the **bin directory** to:
- Add the AtlasServerlessBasicStack to the import statement.
- Add the Atlas Organization ID.
- Add the IP address of NAT gateway which we suggest to be the only IP address on your Atlas serverless instance access whitelist.
```javascript
#!/usr/bin/env node
import 'source-map-support/register';
import * as cdk from 'aws-cdk-lib';
import { AtlasBootstrapExample, AtlasServerlessBasicStack } from '../lib/cloudshell-user-stack'; //update "cloudshell-user" with your stack name
const app = new cdk.App();
const env = { region: process.env.CDK_DEFAULT_REGION, account: process.env.CDK_DEFAULT_ACCOUNT };
// the bootstrap stack
new AtlasBootstrapExample(app, 'mongodb-atlas-bootstrap-stack', { env });
type AccountConfig = {
readonly orgId: string;
readonly projectId?: string;
}
const MyAccount: AccountConfig = {
orgId: '63234d3234ec0946eedcd7da', //update with your Atlas Org ID
};
const MONGODB_PROFILE_NAME = 'development';
// the serverless stack with mongodb atlas serverless instance
const serverlessStack = new AtlasServerlessBasicStack(app, 'atlas-serverless-basic-stack', {
env,
ipAccessList: '46.137.146.59', //input your static IP Address from NAT Gateway
profile: MONGODB_PROFILE_NAME,
...MyAccount,
});
```
To leverage this, we can update our .ts file in the **lib directory** to:
- Update import blocks for newly used resources.
- Activate underlying CloudFormation resources on the third-party CloudFormation registry.
- Create a database username and password and store them in AWS Secrets Manager.
- Update output blocks to display the Atlas serverless instance connection string and project name.
```javascript
import * as path from 'path';
import {
App, Stack, StackProps,
Duration,
CfnOutput,
SecretValue,
aws_secretsmanager as secretsmanager,
} from 'aws-cdk-lib';
import * as cdk from 'aws-cdk-lib';
import { SubnetType } from 'aws-cdk-lib/aws-ec2';
import {
MongoAtlasBootstrap,
MongoAtlasBootstrapProps,
AtlasBasicResources,
AtlasServerlessBasic,
ServerlessInstanceProviderSettingsProviderName,
} from 'awscdk-resources-mongodbatlas';
import { Construct } from 'constructs';
export class AtlasBootstrapExample extends cdk.Stack {
constructor (scope: Construct, id: string, props?: cdk.StackProps) {
super(scope, id, props)
const roleName = 'MongoDB-Atlas-CDK-Excecution'
const mongoDBProfile = 'development'
const bootstrapProperties: MongoAtlasBootstrapProps = {
roleName: roleName,
secretProfile: mongoDBProfile,
typesToActivate: ['ServerlessInstance', ...AtlasBasicResources]
}
new MongoAtlasBootstrap(this, 'mongodb-atlascdk-bootstrap', bootstrapProperties)
}
}
export interface AtlasServerlessBasicStackProps extends StackProps {
readonly profile: string;
readonly orgId: string;
readonly ipAccessList: string;
}
export class AtlasServerlessBasicStack extends Stack {
readonly dbUserSecret: secretsmanager.ISecret;
readonly connectionString: string;
constructor(scope: Construct, id: string, props: AtlasServerlessBasicStackProps) {
super(scope, id, props);
const stack = Stack.of(this);
const projectName = `${stack.stackName}-proj`;
const dbuserSecret = new secretsmanager.Secret(this, 'DatabaseUserSecret', {
generateSecretString: {
secretStringTemplate: JSON.stringify({ username: 'serverless-user' }),
generateStringKey: 'password',
excludeCharacters: '%+~`#$&*()|[]{}:;<>?!\'/@"\\=-.,',
},
});
this.dbUserSecret = dbuserSecret;
const ipAccessList = props.ipAccessList;
// see https://github.com/mongodb/awscdk-resources-mongodbatlas/blob/main/examples/l3-resources/atlas-serverless-basic.ts#L22
const basic = new AtlasServerlessBasic(this, 'serverless-basic', {
serverlessProps: {
profile: props.profile,
providerSettings: {
providerName: ServerlessInstanceProviderSettingsProviderName.SERVERLESS,
regionName: 'EU_WEST_1',
},
},
projectProps: {
orgId: props.orgId,
name: projectName,
},
dbUserProps: {
username: 'serverless-user',
},
ipAccessListProps: {
accessList: [
{ ipAddress: ipAccessList, comment: 'My first IP address' },
],
},
profile: props.profile,
});
this.connectionString = basic.mserverless.getAtt('ConnectionStrings.StandardSrv').toString();
new CfnOutput(this, 'ProjectName', { value: projectName });
new CfnOutput(this, 'ConnectionString', { value: this.connectionString });
}
}
```
Lastly, you can check and deploy the atlas-serverless-basic CDK construct with:
```
npx cdk diff atlas-serverless-basic-stack
npx cdk deploy atlas-serverless-basic-stack
```
Verify in the Atlas UI, as well as the AWS Management Console, that all underlying MongoDB Atlas resources have been created. Note that the database username and password are stored as a new secret in AWS Secrets Manager (in the AWS region of your choosing).
## Step 5: copy the auto-generated database username and password created in AWS Secrets Manager secret into Atlas
When we initially created the Atlas database user credentials, we created a random password, and we can’t simply copy that into AWS Secrets Manager because this would expose our database password in our CloudFormation template.
To avoid this, we need to manually update the MongoDB Atlas database user password from the secret stored in AWS Secrets Manager so they will be in sync. The AWS Lambda function will then pick this password from AWS Secrets Manager to successfully authenticate to the Atlas serverless instance.
We can do this programmatically via the [Atlas CLI. To get started, we first need to make sure we have configured with the correct PAK that we created as part of our initial setup:
```
atlas config init
```
We then input the correct PAK and select the correct project ID, and update the password of the `serverless-user` database user so that it matches the value stored in the AWS Secrets Manager secret.
## Step 6: create the AWS Lambda playground code
Next, we write the playground code for an AWS Lambda function that interacts with the MongoDB Atlas serverless instance via a public endpoint. It fetches database credentials from AWS Secrets Manager, constructs a MongoDB Atlas connection string using these credentials, and connects to the MongoDB Atlas serverless instance.
The function then generates and inserts 20 sample sales records with random data into a sales collection within the database. It also aggregates sales data for the year 2023, counting the number of sales and summing the total sales amount by item. Finally, it prints the count of sales in 2023 and the aggregation results, returning this information as a JSON response.
Hence, we populate `lambda/playground/index.py` with:
```python
from datetime import datetime, timedelta
from pymongo.mongo_client import MongoClient
from pymongo.server_api import ServerApi
import random, json, os, re, boto3
# Function to generate a random datetime between two dates
def random_date(start_date, end_date):
time_delta = end_date - start_date
random_days = random.randint(0, time_delta.days)
return start_date + timedelta(days=random_days)
def get_private_endpoint_srv(mongodb_uri, username, password):
"""
Get the private endpoint SRV address from the given MongoDB URI.
e.g. `mongodb+srv://my-cluster.mzvjf.mongodb.net` will be converted to
`mongodb+srv://<username>:<password>@my-cluster-pl-0.mzvjf.mongodb.net/?retryWrites=true&w=majority`
"""
match = re.match(r"mongodb\+srv://(.+)\.(.+).mongodb.net", mongodb_uri)
if match:
return "mongodb+srv://{}:{}@{}-pl-0.{}.mongodb.net/?retryWrites=true&w=majority".format(username, password, match.group(1), match.group(2))
else:
raise ValueError("Invalid MongoDB URI: {}".format(mongodb_uri))
def get_public_endpoint_srv(mongodb_uri, username, password):
"""
Get the public endpoint SRV address from the given MongoDB URI.
e.g. `mongodb+srv://my-cluster.mzvjf.mongodb.net` will be converted to
`mongodb+srv://<username>:<password>@my-cluster.mzvjf.mongodb.net/?retryWrites=true&w=majority`
"""
match = re.match(r"mongodb\+srv://(.+)\.(.+).mongodb.net", mongodb_uri)
if match:
return "mongodb+srv://{}:{}@{}.{}.mongodb.net/?retryWrites=true&w=majority".format(username, password, match.group(1), match.group(2))
else:
raise ValueError("Invalid MongoDB URI: {}".format(mongodb_uri))
client = boto3.client('secretsmanager')
conn_string_srv = os.environ.get('CONN_STRING_STANDARD')
secretId = os.environ.get('DB_USER_SECRET_ARN')
json_secret = json.loads(client.get_secret_value(SecretId=secretId).get('SecretString'))
username = json_secret.get('username')
password = json_secret.get('password')
def handler(event, context):
# conn_string_private = get_private_endpoint_srv(conn_string_srv, username, password)
conn_string = get_public_endpoint_srv(conn_string_srv, username, password)
print('conn_string=', conn_string)
client = MongoClient(conn_string, server_api=ServerApi('1'))
# Select the database to use.
db = client['mongodbVSCodePlaygroundDB']
# Create 20 sample entries with dates spread between 2021 and 2023.
entries = []
for _ in range(20):
item = random.choice(['abc', 'jkl', 'xyz', 'def'])
price = random.randint(5, 30)
quantity = random.randint(1, 20)
date = random_date(datetime(2021, 1, 1), datetime(2023, 12, 31))
entries.append({
'item': item,
'price': price,
'quantity': quantity,
'date': date
})
# Insert a few documents into the sales collection.
sales_collection = db['sales']
sales_collection.insert_many(entries)
# Run a find command to view items sold in 2023.
sales_2023 = sales_collection.count_documents({
'date': {
'$gte': datetime(2023, 1, 1),
'$lt': datetime(2024, 1, 1)
}
})
# Print a message to the output window.
print(f"{sales_2023} sales occurred in 2023.")
pipeline = [
# Find all of the sales that occurred in 2023.
{ '$match': { 'date': { '$gte': datetime(2023, 1, 1), '$lt': datetime(2024, 1, 1) } } },
# Group the total sales for each product.
{ '$group': { '_id': '$item', 'totalSaleAmount': { '$sum': { '$multiply': [ '$price', '$quantity' ] } } } }
]
cursor = sales_collection.aggregate(pipeline)
results = list(cursor)
print(results)
response = {
'statusCode': 200,
'headers': {
'Content-Type': 'application/json'
},
'body': json.dumps({
'sales_2023': sales_2023,
'results': results
})
}
return response
```
Lastly, we need to create one last file that will store our requirements for the Python playground application with:
```
touch lambda/playground/requirements.txt
```
In this file, we populate with:
```
pymongo
requests
boto3
testresources
urllib3==1.26
```
To then install these dependencies used in requirements.txt:
```
cd lambda/playground
pip install -r requirements.txt -t .
```
This installs all required Python packages in the playground directory and AWS CDK would bundle into a zip file which we can see from AWS Lambda console after deployment.
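The full stack definition is available in the GitHub repo linked at the end of this article. As a rough sketch, and not the exact code used there, the Lambda function for this playground could be wired up in the serverless stack along these lines; the construct name is illustrative and `vpc` refers to the networking created in the next step:
```typescript
import { aws_lambda as lambda, aws_ec2 as ec2, Duration } from 'aws-cdk-lib';

// Inside AtlasServerlessBasicStack, after the AtlasServerlessBasic construct is created:
const playgroundFn = new lambda.Function(this, 'PlaygroundFunction', {
  runtime: lambda.Runtime.PYTHON_3_11,
  handler: 'index.handler',
  code: lambda.Code.fromAsset(path.join(__dirname, '../lambda/playground')),
  timeout: Duration.seconds(30),
  vpc, // VPC with private subnets and a NAT gateway (see the next step)
  vpcSubnets: { subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS },
  environment: {
    // Names match what lambda/playground/index.py reads from os.environ.
    CONN_STRING_STANDARD: this.connectionString,
    DB_USER_SECRET_ARN: this.dbUserSecret.secretArn,
  },
});

// Allow the function to read the database user credentials at runtime.
this.dbUserSecret.grantRead(playgroundFn);
```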
## Step 7: create suggested AWS networking infrastructure
AWS Lambda functions placed in public subnets do not automatically have internet access because Lambda functions do not have public IP addresses, and a public subnet routes traffic through an internet gateway (IGW). To access the internet, a Lambda function can be associated with a private subnet with a route to a NAT gateway.
First, ensure that you have a NAT gateway created in your public subnet. Then, create a route from a private subnet (where your AWS Lambda resource will live) to the NAT gateway, and route the public subnet to the IGW. The benefit of this networking approach is that we can associate a static IP with our NAT gateway, so it becomes our one and only Atlas project IP access list entry. This means that all traffic still goes out to the public internet through the NAT gateway and is TLS-encrypted. The whitelist allows only the NAT gateway's static public IP and nothing else.
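If you manage the networking with the CDK as well, a minimal sketch of such a VPC could look like this; construct names and the number of AZs are illustrative:
```typescript
import { aws_ec2 as ec2 } from 'aws-cdk-lib';

// One public subnet (routed to the internet gateway and hosting the NAT gateway)
// and one private subnet per AZ; the Lambda function goes into the private subnets.
const vpc = new ec2.Vpc(this, 'PlaygroundVpc', {
  maxAzs: 2,
  natGateways: 1,
  subnetConfiguration: [
    { name: 'public', subnetType: ec2.SubnetType.PUBLIC },
    { name: 'private', subnetType: ec2.SubnetType.PRIVATE_WITH_EGRESS },
  ],
});
// The NAT gateway is assigned an Elastic IP; that static address is the one
// to add to the Atlas project IP access list.
```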
Alternatively, you can choose to build with [AWS PrivateLink which does carry additional costs but will dramatically simplify networking management by directly connecting AWS Lambda to a MongoDB Atlas severless instance without the need to maintain subnets, IGWs, or NAT gateways. Also, AWS PrivateLink creates a private connection to AWS services, reducing the risk of exposing data to the public internet.
Select whichever networking approach best suits your organization’s needs.
You can see a full demo and walkthrough of this application on a recent episode of MongoDB TV Cloud Connect (aired 15 Feb 2024). Also, see the GitHub repo with the full open-source code of materials used in this demo serverless application.
The MongoDB Atlas CDK resources are open-sourced under the Apache-2.0 license and we welcome community contributions. To learn more, see our contributing guidelines.
Get started quickly by creating a MongoDB Atlas account through the AWS Marketplace and start building with MongoDB Atlas and the AWS CDK today!
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt7e6c094f4e095c73/65e61b2572b3874d4222d572/1.png
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltd0fe3f8f42f0b0ef/65e61b4a51368b8d36844989/2.png
[3]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt262d82355ecaefdd/65e61b6caca1713e9fa00cbb/3.png
[4]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt6c9d68ba0093af02/65e61badffa94a03503d58ca/4.png
[5]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt76d88352feb37251/65e61bcf0f1d3518c7ca6612/5.png
[6]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt0c090ebd5066daf8/65e61beceef4e3c3891e7f5f/6.png
[7]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blte6c19ea4fdea4edf/65e61c10c7f05b2df68697fd/7.png
[8]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt904c5fdc6ec9b544/65e61c3111cd1d29f1a1b696/8.png | md | {
"tags": [
"Atlas",
"JavaScript",
"Python",
"Serverless",
"AWS"
],
"pageDescription": "",
"contentType": "Tutorial"
} | Getting Started With MongoDB Atlas Serverless, AWS CDK, and AWS Serverless Computing | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/languages/csharp/polymorphism-with-mongodb-csharp | created | # Using Polymorphism with MongoDB and C#
In comparison to relational database management systems (RDBMS), MongoDB's flexible schema is a huge step forward when handling object-oriented data. These structures often make use of polymorphism where common base classes contain the shared fields that are available for all classes in the hierarchy; derived classes add the fields that are relevant only to the specific objects. An example might be to have several types of vehicles, like cars and motorcycles, that have some fields in common, but each type also adds some fields that make only sense if used for a type:
For RDBMS, storing an object hierarchy is a challenge. One way is to store the data in a table that contains all fields of all classes, though for each row, only a subset of fields is needed. Another approach is to create a table for the base class that contains the shared fields and add a table for each derived class that stores the columns for the specific type and references the base table. Neither of these approaches is optimal in terms of storage and when it comes to querying the data.
However, with MongoDB's flexible schema, one can easily store documents in the same collection that share only some, but not all, fields. This article shows how the MongoDB C# driver makes it easy to use this for storing class hierarchies in a very natural way.
Example use cases include storing metadata for various types of documents, e.g., offers, invoices, or other documents related to business partners in a collection. Common fields could be a document title, a summary, the date, a vector embedding, and the reference to the business partner, whereas an invoice would add fields for the line items and totals but would not add the fields for a project report.
Another possible use case is to serve both an overview and a detail view from the same collection. We will have a closer look at how to implement this in the summary of this article.
# Basics
When accessing a collection from C#, we use an object that implements the `IMongoCollection<T>` interface. This object can be created like this:
```csharp
var vehiclesColl = db.GetCollection<Vehicle>("vehicles");
```
When serializing or deserializing documents, the type parameter `T` and the actual type of the object provide the MongoDB C# driver with a hint on how to map the BSON representation to a C# class and vice versa. If only documents of the same type reside in the collection, the driver uses the class map of the type.
However, to be able to handle class hierarchies correctly, the driver needs more information. This is where the *type discriminator* comes in. When storing a document of a derived type in the collection, the driver adds a field named `_t` to the document that contains the name of the class, e.g.:
```csharp
await vehiclesColl.InsertOneAsync(new Car());
```
leads to the following document structure:
```JSON
{
"_id": ObjectId("660d7d43e042f8f6f2726f6a"),
"_t": "Car",
// ... fields for vehicle
// ... fields specific to car
}
```
When deserializing the document, the value of the `_t` field is used to identify the type of the object that is to be created.
Though this works out of the box without specific configuration, it is advised to support the driver by specifying the class hierarchy explicitly by using the `BsonKnownTypes` attribute, if you are using declarative mapping:
```csharp
[BsonKnownTypes(typeof(Car), typeof(Motorcycle))]
public abstract class Vehicle
{
// ...
}
```
If you configure the class maps imperatively, just add a class map for each type in the hierarchy to achieve the same effect.
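For example, an imperative registration could look like this sketch, using the `Vehicle`, `Car`, and `Motorcycle` classes from above:

```csharp
using MongoDB.Bson.Serialization;

// Registering a class map for each type in the hierarchy has the same effect
// as decorating the base class with the BsonKnownTypes attribute.
BsonClassMap.RegisterClassMap<Vehicle>();
BsonClassMap.RegisterClassMap<Car>();
BsonClassMap.RegisterClassMap<Motorcycle>();
```

Make sure the registration runs once at application startup, before the first document of the hierarchy is serialized or deserialized.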
By default, only the name of the class is used as value for the type discriminator. Especially if the hierarchy spans several levels and you want to query for any level in the hierarchy, you should store the hierarchy as an array in the type discriminator by using the `BsonDiscriminator` attribute:
```csharp
[BsonDiscriminator(RootClass = true)]
[BsonKnownTypes(typeof(Car), typeof(Motorcycle))]
public abstract class Vehicle
{
// ...
}
```
This applies a different discriminator convention to the documents and stores the hierarchy as an array:
```JSON
{
"_id": ObjectId("660d81e5825f1c064024a591"),
"_t": [
"Vehicle",
"Car"
],
// ...
}
```
For additional details on how to configure the class maps for polymorphic objects, see the documentation of the driver.
# Querying collections with polymorphic documents
When reading objects from a collection, the MongoDB C# driver uses the type discriminator to identify the matching type and creates a C# object of the corresponding class. The following query might yield both `Car` and `Motorcycle` objects:
```csharp
var vehiclesColl = db.GetCollection<Vehicle>("vehicles");
var vehicles = (await vehiclesColl.FindAsync(FilterDefinition<Vehicle>.Empty))
.ToEnumerable();
```
If you are only interested in documents of a specific type, you can create another instance of `IMongoCollection<T>` that returns only these:
```csharp
var carsColl = vehiclesColl.OfType<Car>();
var cars = (await carsColl.FindAsync(FilterDefinition<Car>.Empty))
.ToEnumerable();
```
This new collection instance respects the corresponding type discriminator whenever an operation is performed. The following statement removes only `Car` documents from the collection but keeps the `Motorcycle` documents as they are:
```csharp
await carsColl.DeleteManyAsync(FilterDefinition<Car>.Empty);
```
If you are using the LINQ provider included in the MongoDB C# driver, you can also use the LINQ `OfType` extension method to only retrieve the `Car` objects:
```csharp
var cars = vehiclesColl.AsQueryable().OfType<Car>();
```
# Serving multiple views from a single collection
As promised before, we now take a closer look at a use case for polymorphism: Let's suppose we are building a system that supports monitoring sensors that are distributed over several sites. The system should provide an overview that lists all sites with their name and the last value that was reported for the site along with a timestamp. When selecting a site, the system shows detailed information for the site that consists of all the data on the overview and also lists the sensors that are located at the specific site with their last value and its timestamp.
This can be depicted by creating a base class for the documents that contains the id of the site, a name to identify the document, and the last measurement, if available. A derived class for the site overview adds the site address; another one for the sensor detail contains the location of the sensor:
```csharp
using MongoDB.Bson;
using MongoDB.Bson.Serialization.Attributes;
public abstract class BaseDocument
{
[BsonRepresentation(BsonType.ObjectId)]
public string Id { get; set; } = ObjectId.GenerateNewId().ToString();
[BsonRepresentation(BsonType.ObjectId)]
public string SiteId { get; set; } = ObjectId.GenerateNewId().ToString();
public string Name { get; set; } = string.Empty;
public Measurement? Last { get; set; }
}
public class Measurement
{
public int Value { get; set; }
public DateTime Timestamp { get; set; }
}
public class Address
{
// ...
}
public class SiteOverview : BaseDocument
{
public Address Address { get; set; } = new();
}
public class SensorDetail : BaseDocument
{
public string Location { get; set; } = string.Empty;
}
```
When ingesting new measurements, both the site overview and the sensor detail are updated (for simplicity, we do not use a multi-document transaction):
```csharp
async Task IngestMeasurementAsync(
IMongoCollection<BaseDocument> overviewsColl,
string sensorId,
int value)
{
var measurement = new Measurement()
{
Value = value,
Timestamp = DateTime.UtcNow
};
var sensorUpdate = Builders<SensorDetail>
.Update
.Set(x => x.Last, measurement);
var sensorDetail = await overviewsColl
.OfType<SensorDetail>()
.FindOneAndUpdateAsync(
x => x.Id == sensorId,
sensorUpdate,
new() { ReturnDocument = ReturnDocument.After });
if (sensorDetail != null)
{
var siteUpdate = Builders<SiteOverview>
.Update
.Set(x => x.Last, measurement);
var siteId = sensorDetail.SiteId;
await overviewsColl
.OfType<SiteOverview>()
.UpdateOneAsync(x => x.SiteId == siteId, siteUpdate);
}
}
```
The above sample uses `FindOneAndUpdateAsync` to both update the sensor detail document and retrieve the resulting document so that the site id can be determined. If the site id is known beforehand, a simple update can be used instead.
When retrieving the documents for the site overview, the following code returns all the relevant documents:
```csharp
var siteOverviews = (await overviewsColl
.OfType<SiteOverview>()
.FindAsync(FilterDefinition<SiteOverview>.Empty))
.ToEnumerable();
```
When displaying detailed data for a specific site, the following query retrieves all documents for the site by its id in a single request:
```csharp
var siteDetails = await (await overviewsColl
.FindAsync(x => x.SiteId == siteId))
.ToListAsync();
```
The result of the query can contain objects of different types; you can use the LINQ `OfType` extension method on the list to discern between the types, e.g., when building a view model.
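For instance, a small sketch of splitting the result by type could look like this (assuming there is at most one `SiteOverview` document per site):

```csharp
using System.Linq;

var overview = siteDetails.OfType<SiteOverview>().SingleOrDefault();
var sensors = siteDetails.OfType<SensorDetail>().ToList();
```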
This approach allows for efficient querying from different perspectives so that central views of the application can be served with minimum load on the server.
# Summary
Polymorphism is an important feature of object-oriented languages and there is a wide range of use cases for it. As you can see, the MongoDB C# driver provides a solid bridge between object orientation and the MongoDB flexible document schema. If you want to dig deeper into the subject from a data modeling perspective, be sure to check out the polymorphic pattern part of the excellent series "Building With Patterns" on the MongoDB Developer Center. | md | {
"tags": [
"C#"
],
"pageDescription": "An article discussing when and how to use polymorphism in a C# application using the MongoDB C# Driver.",
"contentType": "Tutorial"
} | Using Polymorphism with MongoDB and C# | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/languages/cpp/me-and-the-devil-bluez-2 | created | # Me and the Devil BlueZ: Reading BLE sensors from C++
In our last article, I shared how to interact with Bluetooth Low Energy devices from a Raspberry Pi with Linux, using DBus and BlueZ. I did a step-by-step walkthrough on how to talk to a BLE device using a command line tool, so we had a clear picture of the sequence of operations that had to be performed to interact with the device. Then, I repeated the process but focused on the DBus messages that have to be exchanged to achieve that interaction.
Now, it is time to put that knowledge into practice and implement an application that connects to the RP2 BLE sensor that we created in our second article and reads the value of the… temperature. (Yep, we will switch to noise sometime soon. Please, bear with me.)
Ready to start? Let's get cracking!
## Setup
The application that we will be developing in this article is going to run on a Raspberry Pi 4B, our collecting station. You can use most other models, but I strongly recommend you connect it to your network using an ethernet cable and disable your WiFi. Otherwise, it might interfere with the Bluetooth communications.
I will do all my development using Visual Studio Code on my MacBook Pro and connect via SSH to the Raspberry Pi (RPi). The whole project will be held in the RPi, and I will compile it and run it there. You will need the Remote - SSH extension installed in Visual Studio Code for this to work, and the first time you connect to the RPi, it will take some time to set it up. If you use Emacs, TRAMP is available out of the box.
We also need some software installed on the RPi. At the very least, we will need `git` and `CMake`, because that is the build system that I will be using for the project. The C++ compiler (g++) is installed by default in Raspberry Pi OS, but you can install `Clang` if you prefer to use LLVM.
```sh
sudo apt-get install git git-flow cmake
```
In any case, we will need to install `sdbus-c++`. That is the library that allows us to interact with DBus using C++ bindings. There are several alternatives, but sdbus-c++ is properly maintained and has good documentation.
```sh
sudo apt-get install libsdbus-c++-{bin,dev,doc}
```
## Initial project
I am going to write this project from scratch, so I want to be sure that you and I start with the same set of files. I am going to begin with a trivial `main.cpp` file, and then I will create the seed for the build instructions that we will use to produce the executable throughout this episode.
### Initial main.cpp
Our initial `main.cpp` file is just going to print a message:
```cpp
#include <iostream>
int main(int argc, char *argv[])
{
std::cout << "Noise Collector BLE" << std::endl;
return 0;
}
```
### Basic project
And now we should create a `CMakeLists.txt` file with the minimal build instructions for this project:
```cmake
cmake_minimum_required(VERSION 3.5)
project(NoiseCollectorBLE CXX)
add_executable(${PROJECT_NAME} main.cpp)
```
Before we move forward, we are going to check that it all works fine:
```sh
mkdir build
cmake -S . -B build
cmake --build build
./build/NoiseCollectorBLE
```
## Talk to DBus from C++
### Send the first message
Now that we have set the foundations of the project, we can send our first message to DBus. A good one to start with is the one we use to query if the Bluetooth radio is on or off.
1. Let's start by adding the library to the project using CMake's `find_package` command:
```cmake
find_package(sdbus-c++ REQUIRED)
```
2. The library must be linked to our binary:
```cmake
target_link_libraries(${PROJECT_NAME} PRIVATE SDBusCpp::sdbus-c++)
```
3. And we enforce the usage of the C++17 standard because it is required by the library:
```cmake
set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
```
4. With the library in place, let's create the skeleton to implement our BLE sensor. We first create the `BleSensor.h` file:
```cpp
#ifndef BLE_SENSOR_H
#define BLE_SENSOR_H
class BleSensor
{
};
#endif // BLE_SENSOR_H
```
5. We add a constructor and a method that will take care of all the steps required to scan for and connect to the sensor:
```cpp
public:
BleSensor();
void scanAndConnect();
```
6. In order to talk to BlueZ, we should create a proxy object. A proxy is a local object that allows us to interact with the remote DBus object. Creating the proxy instance without passing a connection to it means that the proxy will create its own connection automatically, and it will be a system bus connection.
```cpp
private:
std::unique_ptr<sdbus::IProxy> bluezProxy;
```
7. And we need to include the library:
```cpp
#include <sdbus-c++/sdbus-c++.h>
```
8. Let's create a `BleSensor.cpp` file for the implementation and include the header file that we have just created:
```cpp
#include "BleSensor.h"
```
9. That proxy requires the name of the service and a path to the instance that we want to talk to, so let's define both as constants inside of the constructor:
```cpp
BleSensor::BleSensor()
{
const std::string SERVICE_BLUEZ { "org.bluez" };
const std::string OBJECT_PATH { "/org/bluez/hci0" };
bluezProxy = sdbus::createProxy(SERVICE_BLUEZ, OBJECT_PATH);
}
```
10. Let's add the first step to our scanAndConnect method using a private function that we declare in the header:
```cpp
bool getBluetoothStatus();
```
11. Following this, we write the implementation, where we use the proxy that we created before to send a message. We define a message to a method on an interface using the required parameters, which we learned using the introspectable interface and the DBus traces. The result is a *variant* that can be casted to the proper type using the overloaded `operator()`:
```cpp
bool BleSensor::getBluetoothStatus()
{
const std::string METHOD_GET { "Get" };
const std::string INTERFACE_PROPERTIES { "org.freedesktop.DBus.Properties" };
const std::string INTERFACE_ADAPTER { "org.bluez.Adapter1" };
const std::string PROPERTY_POWERED { "Powered" };
sdbus::Variant variant;
// Invoke a method that gets a property as a variant
bluezProxy->callMethod(METHOD_GET)
.onInterface(INTERFACE_PROPERTIES)
.withArguments(INTERFACE_ADAPTER, PROPERTY_POWERED)
.storeResultsTo(variant);
return (bool)variant;
}
```
12. We use this private method from our public one:
```cpp
void BleSensor::scanAndConnect()
{
try
{
// Enable Bluetooth if not yet enabled
if (getBluetoothStatus())
{
std::cout << "Bluetooth powered ON\n";
} else
{
std::cout << "Powering bluetooth ON\n";
}
}
catch(sdbus::Error& error)
{
std::cerr << "ERR: on scanAndConnect(): " << error.getName() << " with message " << error.getMessage() << std::endl;
}
}
```
13. And include the iostream header:
```cpp
#include <iostream>
```
14. We need to add the source files to the project:
```cmake
file(GLOB SOURCES "*.cpp")
add_executable(${PROJECT_NAME} ${SOURCES})
```
15. Finally, we import the header that we have defined in the `main.cpp`, create an instance of the object, and invoke the method:
```cpp
#include "BleSensor.h"
int main(int argc, char *argv[])
{
std::cout << "Noise Collector BLE" << std::endl;
BleSensor bleSensor;
bleSensor.scanAndConnect();
```
16. We compile it with CMake and run it.
### Send a second message
Our first message queried the status of a property. We can also change things using messages, like the status of the Bluetooth radio:
1. We declare a second private method in the header:
```cpp
void setBluetoothStatus(bool enable);
```
2. And we also add it to the implementation file –in this case, only the message without the constants:
```cpp
void BleSensor::setBluetoothStatus(bool enable)
{
// Invoke a method that sets a property as a variant
bluezProxy->callMethod(METHOD_SET)
.onInterface(INTERFACE_PROPERTIES)
.withArguments(INTERFACE_ADAPTER, PROPERTY_POWERED, sdbus::Variant(enable))
// .dontExpectReply();
.storeResultsTo();
}
```
3. As you can see, the calls to create and send the message use most of the same constants. The only new one is the `METHOD_SET`, used instead of `METHOD_GET`. We set that one inside of the method:
```cpp
const std::string METHOD_SET { "Set" };
```
4. And we make the other three static constants of the class. Prior to C++17, we would have had to declare them in the header and initialize them in the implementation, but since then, we can use `inline` to initialize them in place. That helps readability:
```cpp
inline static const std::string INTERFACE_ADAPTER { "org.bluez.Adapter1" };
inline static const std::string PROPERTY_POWERED { "Powered" };
inline static const std::string INTERFACE_PROPERTIES { "org.freedesktop.DBus.Properties" };
```
5. With the private method complete, we use it from the public one:
```cpp
if (getBluetoothStatus())
{
std::cout << "Bluetooth powered ON\n";
} else
{
std::cout << "Powering bluetooth ON\n";
setBluetoothStatus(true);
}
```
6. The second message is ready, and we can build and run the program. You can verify its effects using `bluetoothctl`, as shown below.
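For instance, a quick check from the shell looks like this (the exact output format may vary with your BlueZ version):

```sh
bluetoothctl show | grep Powered
# Expected once the program has run:
#         Powered: yes
```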
## Deal with signals
The next thing we would like to do is to enable scanning for BLE devices, find the sensor that we care about, connect to it, and disable scanning. Obviously, when we start scanning, we don't get to know the available BLE devices right away. Some reply almost instantaneously, and some will answer a little later. DBus will send signals, asynchronous messages that are pushed to a given object, that we will listen to.
### Use messages that have a delayed response
1. We are going to use a private method to enable and disable the scanning. The first thing to do is to have it declared in our header:
```cpp
void enableScanning(bool enable);
```
2. In the implementation file, the method is going to be similar to the ones we have defined before. Here, we don't have to worry about the reply because we have to wait for our sensor to show up:
```cpp
void BleSensor::enableScanning(bool enable)
{
const std::string METHOD_START_DISCOVERY { "StartDiscovery" };
const std::string METHOD_STOP_DISCOVERY { "StopDiscovery" };
std::cout << (enable?"Start":"Stop") << " scanning\n";
bluezProxy->callMethod(enable?METHOD_START_DISCOVERY:METHOD_STOP_DISCOVERY)
.onInterface(INTERFACE_ADAPTER)
.dontExpectReply();
}
```
3. We can then use that method in our public one to enable and disable scanning:
```cpp
enableScanning(true);
// Wait to be connected to the sensor
enableScanning(false);
```
4. We need to wait for the devices to answer, so let's add some delay between both calls:
```cpp
// Wait to be connected to the sensor
std::this_thread::sleep_for(std::chrono::seconds(10));
```
5. And we add the headers for this new code:
```cpp
#include <thread>
#include <chrono>
```
6. If we build and run, we will see no errors but no results of our scanning, either. Yet.
### Subscribe to signals
In order to get the data of the devices that scanning for devices produces, we need to be listening to the signals sent that are broadcasted through the bus.
1. We need to interact with a different DBus object so we need another proxy. Let's declare it in the header:
```cpp
std::unique_ptr<sdbus::IProxy> rootProxy;
```
2. And instantiate it in the constructor:
```cpp
rootProxy = sdbus::createProxy(SERVICE_BLUEZ, "/");
```
3. Next, we define the private method that will take care of the subscription:
```cpp
void subscribeToInterfacesAdded();
```
4. The implementation is simple: We provide a closure to be called on a different thread every time we receive a signal that matches our parameters:
```cpp
void BleSensor::subscribeToInterfacesAdded()
{
const std::string INTERFACE_OBJ_MGR { "org.freedesktop.DBus.ObjectManager" };
const std::string MEMBER_IFACE_ADDED { "InterfacesAdded" };
// Let's subscribe for the interfaces added signals (AddMatch)
rootProxy->uponSignal(MEMBER_IFACE_ADDED).onInterface(INTERFACE_OBJ_MGR).call(interfaceAddedCallback);
rootProxy->finishRegistration();
}
```
5. The closure has to take as arguments the data that comes with a signal: a string for the path that points to an object in DBus and a dictionary of key/values, where the keys are strings and the values are dictionaries of strings and values:
```cpp
auto interfaceAddedCallback = [this](sdbus::ObjectPath path,
                                     std::map<std::string, std::map<std::string, sdbus::Variant>> dictionary)
{
};
```
6. We will be doing more with the data later, but right now, displaying the thread id, the object path, and the device name, if it exists, will suffice. We use a regular expression to restrict our attention to the Bluetooth devices:
```cpp
const std::regex DEVICE_INSTANCE_RE{"^/org/bluez/hci[0-9]/dev(_[0-9A-F]{2}){6}$"};
std::smatch match;
std::cout << "(TID: " << std::this_thread::get_id() << ") ";
if (std::regex_match(path, match, DEVICE_INSTANCE_RE)) {
std::cout << "Device iface ";
if (dictionary["org.bluez.Device1"].count("Name") == 1)
{
auto name = (std::string)(dictionary["org.bluez.Device1"].at("Name"));
std::cout << name << " @ " << path << std::endl;
} else
{
std::cout << " @ " << path << std::endl;
}
} else {
std::cout << "*** UNEXPECTED SIGNAL ***";
}
```
7. And we add the header for regular expressions:
```cpp
#include <regex>
```
8. We use the private method **before** we start scanning:
```cpp
subscribeToInterfacesAdded();
```
9. And we print the thread id in that same method:
```cpp
std::cout << "(TID: " << std::this_thread::get_id() << ") ";
```
10. If you build and run this code, it should display information about the BLE devices that you have around you. You can show it to your friends and tell them that you are searching for spy microphones.
## Communicate with the sensor
Well, that looks like progress to me, but we are still missing the most important features: connecting to the BLE device and reading values from it.
We should connect to the device, if we find it, from the closure that we use in `subscribeToInterfacesAdded()`, and then, we should stop scanning. However, that closure and the method `scanAndConnect()` are running in different threads concurrently. When the closure connects to the device, it should *inform* the main thread, so it stops scanning. We are going to use a mutex to protect concurrent access to the data that is shared between those two threads and a conditional variable to let the other thread know when it has changed.
### Connect to the BLE device
1. First, we are going to declare a private method to connect to a device by name:
```cpp
void connectToDevice(sdbus::ObjectPath path);
```
2. We will obtain that object path from the signals that tell us about the devices discovered while scanning. We will compare the name in the dictionary of properties of the signal with the name of the sensor that we are looking for. We'll receive that name through the constructor, so we need to change its declaration:
```cpp
BleSensor(const std::string &sensor_name);
```
3. And declare a field that will be used to hold the value:
```cpp
const std::string deviceName;
```
4. If we find the device, we will create a proxy to the object that represents it:
```cpp
std::unique_ptr<sdbus::IProxy> deviceProxy;
```
5. We move to the implementation and start by adapting the constructor to initialize the new values using the preamble:
```cpp
BleSensor::BleSensor(const std::string &sensor_name)
: deviceProxy{nullptr}, deviceName{sensor_name}
```
6. We then create the method:
```cpp
void BleSensor::connectToDevice(sdbus::ObjectPath path)
{
}
```
7. We create a proxy for the device that we have selected using the name:
```cpp
deviceProxy = sdbus::createProxy(SERVICE_BLUEZ, path);
```
8. And move the declaration of the service constant, which is now used in two places, to the header:
```cpp
inline static const std::string SERVICE_BLUEZ{"org.bluez"};
```
9. And send a message to connect to it:
```cpp
deviceProxy->callMethodAsync(METHOD_CONNECT).onInterface(INTERFACE_DEVICE).uponReplyInvoke(connectionCallback);
std::cout << "Connection method started" << std::endl;
```
10. We define the constants that we are using:
```cpp
const std::string INTERFACE_DEVICE{"org.bluez.Device1"};
const std::string METHOD_CONNECT{"Connect"};
```
11. And the closure that will be invoked. The use of `this` in the capture specification allows access to the object instance. The code in the closure will be added below.
```cpp
auto connectionCallback = [this](const sdbus::Error* error)
{
};
```
12. The private method can now be used to connect from the method `BleSensor::subscribeToInterfacesAdded()`. We were already extracting the name of the device, so now we use it to connect to it:
```cpp
if (name == deviceName)
{
std::cout << "Connecting to " << name << std::endl;
connectToDevice(path);
}
```
13. We would like to stop scanning once we are connected to the device. This happens in two different threads, so we are going to use the producer-consumer concurrency design pattern to achieve the expected behavior. We define a few new fields –one for the mutex, one for the conditional variable, and one for a boolean flag:
```cpp
std::mutex mtx;
std::condition_variable cv;
bool connected;
```
14. And we include the required headers:
```cpp
#include <mutex>
#include <condition_variable>
```
15. They are initialized in the constructor preamble:
```cpp
BleSensor::BleSensor(const std::string &sensor_name)
: deviceProxy{nullptr}, deviceName{sensor_name},
cv{}, mtx{}, connected{false}
```
16. We can then use these new fields in the `BleSensor::scanAndConnect()` method. First, we get a unique lock on the mutex before subscribing to notifications:
```cpp
std::unique_lock lock(mtx);
```
17. Then, between the start and the stop of the scanning process, we wait for the conditional variable to be signaled. This is a more robust and reliable implementation than using the delay:
```cpp
enableScanning(true);
// Wait to be connected to the sensor
cv.wait(lock, [this] { return connected; });
enableScanning(false);
```
18. In the `connectionCallback`, we first deal with errors, in case they happen:
```cpp
if (error != nullptr)
{
std::cerr << "Got connection error "
<< error->getName() << " with message "
<< error->getMessage() << std::endl;
return;
}
```
19. Then, we get a lock on the same mutex, change the flag, release the lock, and signal the other thread through the connection variable:
```cpp
std::unique_lock lock(mtx);
std::cout << "Connected!!!" << std::endl;
connected = true;
lock.unlock();
cv.notify_one();
std::cout << "Finished connection method call" << std::endl;
```
20. Finally, we change the initialization of the BleSensor in the main file to pass the sensor name:
```cpp
BleSensor bleSensor { "RP2-SENSOR" };
```
21. If we compile and run what we have so far, we should be able to connect to the sensor. But if the sensor isn't there, it will wait indefinitely. If you have problems connecting to your device and get "le-connection-abort-by-local," use an ethernet cable instead of WiFi and disable it with `sudo ip link set wlan0 down`.
### Read from the sensor
Now that we have a connection to the BLE device, we will receive signals about other interfaces added. These are going to be the services, characteristics, and descriptors. If we want to read data from a characteristic, we have to find it –using its UUID for example– and use DBus's "Read" method to get its value. We already have a closure that is invoked every time a signal is received because an interface is added, but in this closure, we verify that the object path corresponds to a device, instead of to a Bluetooth attribute.
1. We want to match the object path against the structure of a BLE attribute, but we want to do that only when the device is already connected. So, we surround the existing regular expression match:
```cpp
if (!connected)
{
// Current code with regex goes here.
}
else
{
}
```
2. In the *else* part, we add a different match:
```cpp
if (std::regex_match(path, match, DEVICE_ATTRS_RE))
{
}
else
{
std::cout << "Not a characteristic" << std::endl;
}
```
3. That code requires the regular expression declared in the method:
```cpp
const std::regex DEVICE_ATTRS_RE{"^/org/bluez/hci\\d/dev(_[0-9A-F]{2}){6}/service\\d{4}/char\\d{4}"};
```
4. If the path matches the expression, we check if it has the UUID of the characteristic that we want to read:
```cpp
std::cout << "Characteristic " << path << std::endl;
if ((dictionary.count("org.bluez.GattCharacteristic1") == 1) &&
(dictionary["org.bluez.GattCharacteristic1"].count("UUID") == 1))
{
auto name = (std::string)(dictionary["org.bluez.GattCharacteristic1"].at("UUID"));
if (name == "00002a1c-0000-1000-8000-00805f9b34fb")
{
}
}
```
5. When we find the desired characteristic, we need to create (yes, you guessed it) a proxy to send messages to it.
```cpp
tempAttrProxy = sdbus::createProxy(SERVICE_BLUEZ, path);
std::cout << "<<>> " << path << std::endl;
```
6. That proxy is stored in a field that we haven't declared yet. Let's do so in the header file:
```cpp
std::unique_ptr<sdbus::IProxy> tempAttrProxy;
```
7. And we do an explicit initialization in the constructor preamble:
```cpp
BleSensor::BleSensor(const std::string &sensor_name)
: deviceProxy{nullptr}, tempAttrProxy{nullptr},
cv{}, mtx{}, connected{false}, deviceName{sensor_name}
```
8. Everything is ready to read, so let's declare a public method to do the reading:
```cpp
void getValue();
```
9. And a private method to send the DBus messages:
```cpp
void readTemperature();
```
10. We implement the public method, just using the private method:
```cpp
void BleSensor::getValue()
{
readTemperature();
}
```
11. And we do the implementation on the private method:
```cpp
void BleSensor::readTemperature()
{
tempAttrProxy->callMethod(METHOD_READ)
.onInterface(INTERFACE_CHAR)
.withArguments(args)
.storeResultsTo(result);
}
```
12. We define the constants that we used:
```cpp
const std::string INTERFACE_CHAR{"org.bluez.GattCharacteristic1"};
const std::string METHOD_READ{"ReadValue"};
```
13. And the variable that will be used to qualify the query to have a zero offset as well as the one to store the response of the method:
```cpp
std::map<std::string, sdbus::Variant> args{{{"offset", sdbus::Variant{std::uint16_t{0}}}}};
std::vector<std::uint8_t> result;
```
14. The temperature starts on the second byte of the result (offset 1) and ends on the fifth, which in this case is the last one of the array of bytes. We can extract it:
```cpp
std::cout << "READ: ";
for (auto value : result)
{
std::cout << +value << " ";
}
std::vector<std::uint8_t> number(result.begin() + 1, result.end());
```
15. Those bytes in ieee11073 format have to be transformed into a regular float, and we use a private method for that:
```cpp
float valueFromIeee11073(std::vector<std::uint8_t> binary);
```
16. That method is implemented by reversing the transformation that we did on [the second article of this series:
```cpp
float BleSensor::valueFromIeee11073(std::vector<std::uint8_t> binary)
{
    float value = static_cast<float>(binary[0]) + static_cast<float>(binary[1]) * 256.f + static_cast<float>(binary[2]) * 256.f * 256.f;
    float exponent;
    if (binary[3] > 127)
    {
        exponent = static_cast<float>(binary[3]) - 256.f;
    }
    else
    {
        exponent = static_cast<float>(binary[3]);
    }
    return value * pow(10, exponent);
}
```
17. That implementation requires including the math declaration:
```cpp
#include <cmath>
```
18. We use the transformation after reading the value:
```cpp
std::cout << "\nTemp: " << valueFromIeee11073(number);
std::cout << std::endl;
```
19. And we use the public method in the main function. We should use the producer-consumer pattern here again to know when the proxy to the temperature characteristic is ready, but I have cut corners again for this initial implementation using a couple of delays to ensure that everything works fine.
```cpp
std::this_thread::sleep_for(std::chrono::seconds(5));
bleSensor.getValue();
std::this_thread::sleep_for(std::chrono::seconds(5));
```
20. In order for this to work, the thread header must be included:
```cpp
#include <thread>
```
21. We build and run to check that a value can be read.
### Disconnect from the BLE sensor
Finally, we should disconnect from this device to leave things as we found them. If we don't, re-running the program won't work because the sensor will still be connected and busy.
1. We declare a public method in the header to handle disconnections:
```cpp
void disconnect();
```
2. And a private one to send the corresponding DBus message:
```cpp
void disconnectFromDevice();
```
3. In the implementation, the private method sends the required message and creates a closure that gets invoked when the device gets disconnected:
```cpp
void BleSensor::disconnectFromDevice()
{
const std::string INTERFACE_DEVICE{"org.bluez.Device1"};
const std::string METHOD_DISCONNECT{"Disconnect"};
auto disconnectionCallback = [this](const sdbus::Error* error)
{
};
{
deviceProxy->callMethodAsync(METHOD_DISCONNECT).onInterface(INTERFACE_DEVICE).uponReplyInvoke(disconnectionCallback);
std::cout << "Disconnection method started" << std::endl;
}
}
```
4. And that closure has to change the connected flag using exclusive access:
```cpp
if (error != nullptr)
{
std::cerr << "Got disconnection error " << error->getName() << " with message " << error->getMessage() << std::endl;
return;
}
std::unique_lock lock(mtx);
std::cout << "Disconnected!!!" << std::endl;
connected = false;
deviceProxy = nullptr;
lock.unlock();
std::cout << "Finished connection method call" << std::endl;
```
5. The private method is used from the public method:
```cpp
void BleSensor::disconnect()
{
std::cout << "Disconnecting from device" << std::endl;
disconnectFromDevice();
}
```
6. And the public method is used from the main function:
```cpp
bleSensor.disconnect();
```
7. Build and run to see the final result.
## Recap and future work
In this article, I have used C++ to write an application that reads data from a Bluetooth Low Energy sensor. I have realized that writing C++ is **not** like riding a bike. Many things have changed since I wrote my last C++ code that went into production, but I hope I did a decent job at using it for this task.
Along the way, I kept hitting an error, "le-connection-abort-by-local," caused by a "Connection Failed to be Established (0x3e)," when attempting to connect to the Bluetooth sensor. It happened often but not always. In the beginning, I didn't know if it was my code to blame, the library, or what. After catching exceptions everywhere, printing every message, capturing Bluetooth traces with `btmon`, and not finding much (although I did learn a few new things from Unix & Linux StackExchange, Stack Overflow and the Raspberry Pi forums), I suddenly realized that the culprit was the Raspberry Pi WiFi/Bluetooth chip. The symptom was an unreliable Bluetooth connection, but my sensor and the RPi were very close to each other and without any relevant interference from the environment. The root cause was sharing the radio frequency (RF) in the same chip (Broadcom BCM43438) with a relatively small antenna. I switched from the RPi3A+ to an RPi4B with an ethernet cable and WiFi disabled and, all of a sudden, things started to work.
Even though the implementation wasn't too complex and the proof of concept was passed, the hardware issue raised some concerns. It would only get worse if I talked to several sensors instead of just one. And that is exactly what we will do in future episodes to collect the data from the sensor and send it to a MongoDB Cluster with time series. I could still use a USB Bluetooth dongle and ignore the internal hardware. But before I take that road, I would like to work on the MQTT alternative and make a better informed decision. And that will be our next episode.
Stay curious, hack your code, and see you next time!
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltdc6a10ecba495cf2/658195252f46f765e48223bf/yogendra-singh-BxHnbYyNfTg-unsplash.jpg | md | {
"tags": [
"C++",
"RaspberryPi"
],
"pageDescription": "This article is a step-by-step description of the process of writing a C++ application from scratch that reads from a Bluetooth Low Energy sensor using DBus and BlueZ. The resulting app will run in a Raspberry Pi and might be the seed for the collecting station that will upload data to a MongoDB cluster in the Cloud.",
"contentType": "Tutorial"
} | Me and the Devil BlueZ: Reading BLE sensors from C++ | 2024-05-20T17:32:23.501Z |
devcenter | https://www.mongodb.com/developer/products/atlas/rag-workflow-with-atlas-amazon-bedrock | created | # Launch a Fully Managed RAG Workflow With MongoDB Atlas and Amazon Bedrock
## Introduction
MongoDB Atlas is now natively integrated with Amazon Bedrock Knowledge Base, making it even easier to build generative AI applications backed by enterprise data.
Amazon Bedrock, Amazon Web Services’ (AWS) managed cloud service for generative AI, empowers developers to build applications on top of powerful foundation models like Anthropic's Claude, Cohere Embed, and Amazon Titan. By integrating with Atlas Vector Search, Amazon Bedrock enables customers to leverage the vector database capabilities of Atlas to bring up-to-date context to Foundational Model outputs using proprietary data.
With the click of a button (see below), Amazon Bedrock now integrates MongoDB Atlas as a vector database into its fully managed, end-to-end retrieval-augmented generation (RAG) workflow, negating the need to build custom integrations to data sources or manage data flows.
Companies using MongoDB Atlas and Amazon Bedrock can now rapidly deploy and scale generative AI apps grounded in the latest up-to-date and accurate enterprise data. For enterprises with the most demanding privacy requirements, this capability is also available via AWS PrivateLink (more details at the bottom of this article).
## What is retrieval-augmented generation?
One of the biggest challenges when working with generative AI is trying to avoid hallucinations, or erroneous results returned by the foundation model (FM) being used. The FMs are trained on public information that gets outdated quickly and the models cannot take advantage of the proprietary information that enterprises possess.
One way to tackle hallucinating FMs is to supplement a query with your own data using a workflow known as retrieval-augmented generation, or RAG. In a RAG workflow, the FM will seek specific data — for instance, a customer's previous purchase history — from a designated database that acts as a “source of truth” to augment the results returned by the FM. For a generative AI FM to search for, locate, and augment its responses, the relevant data needs to be turned into a vector and stored in a vector database.
## How does the Knowledge Base integration work?
Within Amazon Bedrock, developers can now “click to add” MongoDB Atlas as a knowledge base for their vector data store to power RAG.
In the workflow, a customer chooses two different models: an embedding model and a generative model. These models are then orchestrated and used by Bedrock Agents during the interaction with the knowledge base — in this case, MongoDB Atlas.
Bedrock reads your text data from an S3 bucket, chunks the data, and then uses the embedding model chosen by the user to create the vector embeddings, storing these text chunks, embeddings, and related metadata in MongoDB Atlas’ vector database. An Atlas vector search index is also created as part of the setup for querying the vector embeddings.
MongoDB Atlas combines operational data, vectors, and metadata in a single platform, making it an ideal knowledge base for Amazon Bedrock users who want to augment their generative AI experiences while also simplifying their generative AI stack.
In addition, MongoDB Atlas gives developers the ability to set up dedicated infrastructure for search and vector search workloads, optimizing compute resources to scale search and database independently.
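Once the knowledge base has been created and synced, applications can query it programmatically through the Bedrock Agent Runtime. The following Python sketch is not part of the original walkthrough; the region, knowledge base ID, and model ARN are placeholders you would replace with your own values:

```python
import boto3

# Placeholders: use your own knowledge base ID and a model you have access to.
KNOWLEDGE_BASE_ID = "YOUR_KB_ID"
MODEL_ARN = "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-v2"

client = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

# Retrieve relevant chunks from MongoDB Atlas and generate a grounded answer.
response = client.retrieve_and_generate(
    input={"text": "What does the white paper say about embedding models?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": KNOWLEDGE_BASE_ID,
            "modelArn": MODEL_ARN,
        },
    },
)

print(response["output"]["text"])
```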
## Solution architecture
We will use a MongoDB white paper to populate our knowledge base. Please download the PDF (by clicking on “Read Whitepaper” or “Email me the PDF”). Alternatively, you can download it from the GitHub repository. Once you have the PDF, upload it into an S3 bucket for hosting. (Note the bucket name, as we will use it later in the article.)
## Prerequisites
* MongoDB Atlas account
* AWS account
## Implementation steps
### Atlas Cluster and Database Setup
* [Login or Signup][3] to MongoDB Atlas
* [Setup][4] the MongoDB Atlas cluster with a M10 or greater configuration. *Note M0 or free cluster will not support this setup.*
* Setup the [database user][5] and [Network access][6].
* Copy the [connection string][7].
* [Create][8] a database and collection
![The screenshot shows the navigation of creating a database in MongoDB Atlas.][9]
### Atlas Vector Search index
Before we create an Amazon Bedrock knowledge base (using MongoDB Atlas), we need to create an Atlas Vector Search index.
* In the MongoDB Atlas Console, navigate to your cluster and select the _Atlas Search_ tab.
![Atlas console navigation to create the search index][10]
* Select _Create Search Index_, select _Atlas Vector Search_, and select _Next_.
![The screenshot shows the MongoDB Atlas Search Index navigation.][11]
* Select the database and the collection where the embeddings are stored.
![MongoDB Atlas Search Index navigation][12]
* Supply the following JSON in the index definition and click _Next_, confirming and creating the index on the next page.
```
{
"fields": [
{
"numDimensions": 1536,
"path": "bedrock_embedding",
"similarity": "cosine",
"type": "vector"
},
{
"path": "bedrock_metadata",
"type": "filter"
},
{
"path": "bedrock_text_chunk",
"type": "filter"
}
]
}
```
![The screenshot shows the MongoDB Atlas Search Index navigation][13]
Note: The fields in the JSON are customizable but should match the fields we configure in the Amazon Bedrock AWS console. If your source content contains filter metadata, the fields need to be included in the JSON array above in the same format: `{"path": "<attribute_name>","type":"filter"}`.
### Amazon Bedrock Knowledge Base
* In the AWS console, navigate to Amazon Bedrock, and then click _Get started_.
Agents in Amazon Bedrock orchestrate interactions between foundation models, data sources, software applications, and user conversations. In addition, agents automatically call APIs to take actions and invoke knowledge bases to supplement information for these actions.
* In the AWS Bedrock console, create an Agent.
* AWS docs about MongoDB Bedrock integration
* MongoDB Vector Search
* Bedrock User Guide
* MongoDB Atlas on AWS Marketplace
[1]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt511ed709f8f6d72c/66323fa5ba17b0c937cb77a0/1_image.png
[2]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt33b9187a37c516d8/66323fa65319a05c071f59a6/2_image.png
[3]: https://www.mongodb.com/docs/guides/atlas/account/
[4]: https://www.mongodb.com/docs/guides/atlas/cluster/
[5]: https://www.mongodb.com/docs/guides/atlas/db-user/
[6]: https://www.mongodb.com/docs/guides/atlas/network-connections/
[7]: https://www.mongodb.com/docs/guides/atlas/connection-string/
[8]: https://www.mongodb.com/basics/create-database#using-the-mongodb-atlas-ui
[9]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt6fe6d13267c2898e/663bb48445868a5510839ee6/27_bedrock.png
[10]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt81882c0004f72351/66323fa5368bfca5faff012c/3_image.png
[11]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt7f47b6bde3a5c6e0/66323fa6714a1b552cb74ee7/4_image12.png
[12]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt2a5dcee8c6491baf/66323fa63c98e044b720dd9f/5_image.png
[13]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt9fec5112de72ed56/663bb5552ff97d53f17030ad/28_bedrock.png
[14]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt1fd932610875a23b/66323fa65b8ef39b7025bd85/7_image.png
[15]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltbc8d532d6dcd3cc5/66323fa6ba17b06b7ecb77a8/8_image.png
[16]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt0a24ae8725a51d07/66323fa6e664765138d445ee/9_image.png
[17]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt88d2e77e437da43f/66323fa6e664767b99d445ea/10_image.png
[18]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltad5b66cb713d14f7/66323fa63c98e0b22420dd97/11_image.png
[19]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt0da5374bed3cdfc4/66323fa6e66476284ed445e6/12_image.png
[20]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt1a5f530f0d77cb25/66323fa686ffea3e4a8e4d1e/13_image.png
[21]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt3c5a3bbfd93fc797/66323fa6ba17b003bccb77a4/14_image.png
[22]: https://github.com/mongodb-partners/mongodb_atlas_as_aws_bedrock_knowledge_base
[23]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt17ef52eaa92859b7/66323fa6f5bf2dff3c36e840/15_image.png
[24]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt039df3ec479a51b3/66323fa6dafc457afab1d9ca/16_image18.png
[25]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltff47aa531f588800/66323fa65319a08f491f59aa/17_image.png
[26]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltb886edc28380eaad/66323fa63c98e0f4ca20dda1/18_image.png
[27]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt94659b32f1eedb41/66323fa657623318c954d39d/19_image.png
[28]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltce5243d13db956ac/66323fa6599d112fcc850538/20_image.png
[29]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt5cac340ac53e5630/66323fa6d63d2215d9b8ce1e/21_image.png
[30]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltecb09d22b3b99731/66323fa586ffea4e788e4d1a/22_image.png
[31]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/bltcc59300cd4b46845/66323fa6deafa962708fcb0c/23_image.png
[32]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blt381ef4a7a68c7b40/66323fa54124a57222a6c45d/24_image.png
[33]: https://images.contentstack.io/v3/assets/blt39790b633ee0d5a7/blte3d2b90f86472bf1/66323fa5ba17b0c29fcb779c/25_image.png | md | {
"tags": [
"Atlas",
"AWS"
],
"pageDescription": "Atlas Vector Search and Amazon Bedrock enable the vector database capabilities of Atlas to bring up-to-date context to Foundational Model outputs.",
"contentType": "Tutorial"
} | Launch a Fully Managed RAG Workflow With MongoDB Atlas and Amazon Bedrock | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/products/atlas/atlas-online-archival | created | # Atlas Online Archive: Efficiently Manage the Data Lifecycle
## Problem statement
In a production environment, a collection in a MongoDB Atlas database can store massive amounts of data, including both aged and current data. However, the aged data is rarely accessed by applications, while data keeps piling up in the collection daily, leading to performance degradation and growing costs. As a result, you need to upgrade the cluster tier size to maintain sufficient resources for the workload, because it becomes difficult to continue with the existing tier size.
Overall, this negatively impacts application performance and equates to higher resource utilization and increased costs for business.
## Resolution
To avoid overpaying, you can offload aged data to a cheaper storage area based on the date criteria, which is called _archival storage_ in MongoDB. Later, you can access those infrequently archived data by using MongoDB federated databases. Hence, cluster size, performance, and resource utilization are optimized.
To better manage data in the Atlas cluster, MongoDB introduced the Online Archive feature from MongoDB Atlas 4.4 version onward.
### Advantages
* It archives data based on the date criteria in the archival rule, and the job runs every five minutes by default.
* Query the data through a federated database connection, which is available in the Data Federation tab.
* Infrequent data access through federated connections apart from the main cluster improves performance and reduces traffic on the main cluster.
* Archived data can be queried by downstream environments and consumed in read-only mode.
### Limitations
* Archived data is available for reading purposes, but it does not support writing or modification.
* Capped collections do not support online archival.
* Atlas serverless clusters do not support online archival.
* Separate federated connection strings connect archived data.
### Pre-requisites
* Online Archive is supported by cluster tier M10 and above.
* Indexes offer better performance during archival.
* To create or delete an online archive, you must have one of the following roles:
Project Data Access Admin, Project Cluster Manager, or Project Owner.
## Online archival configuration setup
The cluster DemoCluster has a collection called movies in the database sample_mflix. As per the business rule, you are storing aged and the latest data in the main cluster, but day by day, data keeps piling up, as expected. Therefore, right-sizing your cluster resources by upgrading tier size leads to increased costs.
To overcome this issue and maintain the cluster efficiently, you have to offload the infrequently accessed or aged data to lower-cost storage using the Online Archive feature and access it through a federated database connection. This keeps the cluster size, performance, and resource utilization optimized, and you can manage online archival at any point in time as per business requirements through managing archives.
In your case, you have loaded a sample dataset from the MongoDB Atlas cluster setup — one of the databases is sample_mflix — and there is a collection called movies that contains both aged and recent data. As per the business requirement, the last 10 years of data are frequently used by customers. Therefore, the plan is to archive documents older than 10 years from the collection based on a date field.
To implement the Online Archive feature, you need a basic M10 cluster or above:
### Define archiving rules
Once business requirements are finalized, define the rules on which data fields will be archived based on criteria like age, size, and other conditions. We can set up Online Archive rules through the Atlas UI or using the Atlas API.
The movies collection in the sample_mflix database has a date field called released. To make online archival perform better, you need to create an index on the released field using the below command.
use sample_mflix
db.movies.createIndex({"released":1})
After creating the index, you can choose this field as a date-based archive and move the data that is older than 10 years (3652 days) to cold storage. This means the cluster will store documents less than 10 years old, and all other documents move to archival storage which is cheaper to maintain.
Before implementing the archival rule, the movies collection's total document count was 21,349, as seen in the below image.
## Implementation steps
Step 1: Go to Browse Collections on Cluster Overview and select the Online Archive tab.
Step 2: You have to supply a namespace for the collection, storage region, date match field, and age limit to archive. In your case:
* Namespace: sample_mflix.movies
* Chosen Region: AWS / Mumbai (cloud providers AWS, Azure, GCP)
* Date Field: released (Indexed field required)
* Age Limit: 3652 days (10 years from the date)
For instance, today is February 28, 2024, so that means that 3652 days before today would be Feb 28, 2014.
Step 3: Here are a couple of features you can add as optional.
Delete age limit: This allows the purging of data from archival storage based on the required criteria. It's an optional feature you can use as per your organization's decision.
In this example, we are not purging any data as per business rules.
Schedule archiving window: This feature enables you to customize schedules. For example, you can run archive jobs during non-business hours or downtime windows to make sure it has a low impact on applications.
")
Step 4: You can add any further partition fields required.
Step 5: Once the rule configuration is completed, the wizard prompts a detailed review of your archival rule. You can observe Namespace, service provider (AWS), Storage Region (Mumbai), Archive Field, Age Limit, etc.
Step 6: Once the steps are reviewed, click on Begin Archiving to create the data federation instances in the Data Federation tab. Then, it will start archiving data based on the validation rule and move it to AWS S3 storage. One of the best features is that you can modify, pause, and delete online archival rules at any time; for instance, when your archival criteria change.
Step 7: Once the Online Archive is set, there will be an archive job run every five minutes by default. This validates criteria based on the date field and moves the data to archival storage. Apart from that, you can set up this job as per your custom range instead of the default schedule. You can view this archival job in the cluster main section as seen in the below image, with the actual status Archiving/IDLE.
The Atlas Online Archive feature will create two federated database instances in the Data Federation tab for the cluster to access data apart from the regular connection string:
* A federated database instance to query data on your archive only
* A federated database instance to query both your cluster and archived data
When the archival job runs as per the schedule, it moves documents to archival storage. As a result, the document count of the collection in the main cluster is reduced, keeping only the latest (hot) data.
Therefore, as per the above scenario, the movies collection now contains fresh/the latest data.
Movies collection document count: 2186 (it excludes documents more than 10 years old).
Every day, it validates 3652 days later to find documents to move to archival storage.
You can observe the collection document count in the below image:
## How to connect and access
You can access archived or read-only data through the Data Federation wizard. Simply connect with connection strings for both:
* Archived only (specific database collection for which we set up archive rule)
* Cluster archive (all the databases in it)
*You can point these connection strings to downstream environments to read the data or consume it via end-user applications.*
## Atlas Data Federation
Data Federation provides the capability to federate queries across data stored in various supported storage formats, including Atlas clusters, Atlas online archives, Data Lake datasets, AWS S3 buckets, and HTTP stores. You can derive insights or move data between any of the supported storage formats of the service.
1. DemoCluster archive: This is a federated database instance for your archive that allows you to query data on your archive only. By connecting with this string, you will see only archived collections, as shown in the below screen. For more details, visit the docs.
Here, the cluster name DemoCluster has archived collection data that you can retrieve only by using the below connection string, as shown in the image.
Connection string: "mongodb://Username:Password@archived-atlas-online-archive-65df00164668c44159eb65c8-abcd6.a.query.mongodb.net/?ssl=true&authSource=admin"
As shown in the image, you can view only those archived collections data in the form of READ-ONLY mode, which means you cannot modify these documents in the future.
2. DemoCluster cluster archive:
This federated database instance for your cluster and archive allows you to query both your cluster and archived data. Here, you can access all the databases in the cluster, including non-archived collections, as shown in the below image.
Connection string:
```bash
mongodb://Username:Password@atlas-online-archive-65df00164668c44159eb65c8-abcd6.a.query.mongodb.net/?ssl=true&authSource=admin
```
Note: Using this connection string, you can view all the databases inside the cluster and the archived collection’s total document count. It also allows READ-ONLY mode.
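As a quick sanity check, after connecting with `mongosh` using either federated connection string above, you can run read-only queries against the archived data. The date below matches the 3652-day cutoff used in this example:

```javascript
use sample_mflix
// Only archived (aged) movie documents are visible on the archive-only instance
db.movies.find({ released: { $lt: new Date("2014-02-28") } }).limit(5)
db.movies.countDocuments()
// Any write, e.g., db.movies.insertOne({...}), fails because the connection is read-only
```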
## Project cluster overview
As discussed earlier, the main cluster DemoCluster contains the latest data as per the business requirements — i.e., frequently consumed data. You can access data and perform read and write operations at any time by pointing to live application changes.
Note: In your case, the latest data refers to anything less than 10 years old.
Connection string:
```bash
mongodb+srv://Username:Password@democluster.abcd6.mongodb.net/
```
In this scenario, after the aged data has been archived, the movies collection on the cluster holds only the 2186 documents that are less than 10 years old.
You can use the MongoDB Shell (mongosh), an application, or third-party tools such as MongoDB Compass to access both the archived data and the main cluster data.
Alternatively, you can fetch all three of these connection strings from the cluster's Connect wizard, shown below:
1. Connect to cluster and Online Archive (read-only archived instance connection string)
2. Connect to cluster (direct cluster connection to perform CRUD operations)
3. Connect to Online Archive (read-only specific to an archived database connection string)
MongoDB Shell prompt: When you connect to the two federated database instances from the Data Federation tab, you can compare the two views of the archived data; both are exposed in READ-ONLY mode.
MongoDB Shell prompt: On the main cluster, by contrast, the cluster connection string lets you list all databases and read and write the frequently accessed data.
## Conclusion
Overall, MongoDB Atlas's Online Archive feature empowers organizations to optimize storage costs, improve performance, adhere to data retention policies by securely retaining data for the long term, and manage data efficiently throughout its lifecycle.
We’d love to hear your thoughts on everything you’ve learned! Join us in the Developer Community to continue the conversation and see what other people are building with MongoDB. | md | {
"tags": [
"Atlas"
],
"pageDescription": "This article explains the MongoDB's Online Archival feature and its advantages.",
"contentType": "Article"
} | Atlas Online Archive: Efficiently Manage the Data Lifecycle | 2024-05-20T17:32:23.502Z |
devcenter | https://www.mongodb.com/developer/products/atlas/jina-ai-semantic-search | created | # Semantic search with Jina Embeddings v2 and MongoDB Atlas
Semantic search is a great ally for AI embeddings.
Using vectors to identify and rank matches has been a part of search for longer than AI has. The venerable tf/idf algorithm, which dates back to the 1960s, uses the counts of words, and sometimes parts of words and short combinations of words, to create representative vectors for text documents. It then uses the distance between vectors to find and rank potential query matches and compare documents to each other. It forms the basis of many information retrieval systems.
We call this “semantic search” because these vectors already have information about the meaning of documents built into them. Searching with semantic embeddings works the same way, but instead, the vectors come from AI models that do a much better job of making sense of the documents.
Because vector-based retrieval is such a time-honored technique, there are database platforms that already have all the mechanics to do it. All you have to do is plug in your AI embeddings model.
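To make the idea concrete, here is a tiny, self-contained Python sketch of vector-based ranking. The vectors are made-up, four-dimensional toy values; real embeddings from a model like Jina's have hundreds of dimensions, but the ranking logic is the same:
```
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot product divided by the product of the vector norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "document" vectors and a "query" vector (made-up numbers for illustration).
docs = {
    "doc_a": [0.9, 0.1, 0.0, 0.2],
    "doc_b": [0.1, 0.8, 0.3, 0.0],
}
query = [0.85, 0.15, 0.05, 0.1]

# Rank documents by similarity to the query, highest first.
ranked = sorted(docs.items(), key=lambda kv: cosine_similarity(query, kv[1]), reverse=True)
print(ranked)
```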
This article will show you how to enhance MongoDB Atlas — an out-of-the-box, cloud-based solution for document retrieval — with Jina Embeddings’ top-of-the-line AI to produce your own killer search solution.
## Setting up
You will first need a MongoDB Atlas account. Register for a new account or sign in using your Google account directly on the website.
### Create a project
Once logged in, you should see your **Projects** page. If not, use the navigation menu on the left to get to it.
Create a new project by clicking the **New Project** button on the right.
You can add new members as you like, but you shouldn’t need to for this tutorial.
### Create a deployment
This should return you to the **Overview** page where you can now create a deployment. Click the **+Create** button to do so.
Select the **M0 Free** tier for this project and the provider of your choice, and then click the **Create** button at the bottom of the screen.
On the next screen, you will need to create a user with a username and secure password for this deployment. Do not lose this password and username! They are the only way you will be able to access your work.
Then, select access options. We recommend for this tutorial selecting **My Local Environment**, and clicking the **Add My Current IP Address** button.
If you have a VPN or a more complex security topology, you may have to consult your system administrator to find out what IP number you should insert here instead of your current one.
After that, click **Finish and Deploy** at the bottom of the page. After a brief pause, you will now have an empty MongoDB database deployed on Atlas for you to use.
Note: If you have difficulty accessing your database from outside, you can get rid of the IP Access List and accept connections from all IP addresses. Normally, this would be very poor security practice, but because this is a tutorial that uses publicly available sample data, there is little real risk.
To do this, click the **Network Access** tab under **Security** on the left side of the page:
Then, click **ADD IP ADDRESS** from the right side of the page:
You will get a modal window. Click the button marked **ALLOW ACCESS FROM ANYWHERE**, and then click **Confirm**.
Your Network Access tab should now have an entry labeled `0.0.0.0/0`.
This will allow any IP address to access your database if it has the right username and password.
## Adding Data
In this tutorial, we will be using a sample database of Airbnb reviews. You can add this to your database from the Database tab under Deployments in the menu on the left side of the screen. Once you are on the “Database Deployments” page, find your cluster (on the free tier, you are only allowed one, so it should be easy). Then, click the “three dots” button and choose **Load Sample Data**. It may take several minutes to load the data.
This will add a collection of free data sources to your MongoDB instance for you to experiment with, including a database of Airbnb reviews.
## Using PyMongo to access your data
For the rest of this tutorial, we will use Python and PyMongo to access your new MongoDB Atlas database.
Make sure PyMongo is installed in your Python environment. You can do this with the following command:
```
pip install pymongo
```
You will also need to know:
1. The username and password you set when you set up the database.
2. The URL to access your database deployment.
If you have lost your username and password, click on the **Database Access** tab under **Security** on the left side of the page. That page will enable you to reset your password.
To get the URL to access your database, return to the **Database** tab under **Deployment** on the left side of the screen. Find your cluster, and look for the button labeled **Connect**. Click it.
You will see a modal pop-up window like the one below:
Click **Drivers** under **Connect to your application**. You will see a modal window like the one below. Under number three, you will see the URL you need but without your password. You will need to add your password when using this URL.
## Connecting to your database
Create a file for a new Python script. You can call it `test_mongo_connection.py`.
Write into this file the following code, which uses PyMongo to create a client connection to your database:
```
from pymongo.mongo_client import MongoClient
client = MongoClient("")  # paste your connection URL here, including username and password
```
Remember to insert the URL to connect to your database, including the correct username and password.
Next, add code to connect to the Airbnb review dataset that was installed as sample data:
```
db = client.sample_airbnb
collection = db.listingsAndReviews
```
The variable `collection` is an iterable that will return the entire dataset item by item. To test that it works, add the following line and run `test_mongo_connection.py`:
```
print(collection.find_one())
```
This will print JSON formatted text that contains the information in one database entry, whichever one it happened to find first. It should look something like this:
```
{'_id': '10006546',
'listing_url': 'https://www.airbnb.com/rooms/10006546',
'name': 'Ribeira Charming Duplex',
'summary': 'Fantastic duplex apartment with three bedrooms, located in the historic
area of Porto, Ribeira (Cube) - UNESCO World Heritage Site. Centenary
building fully rehabilitated, without losing their original character.',
'space': 'Privileged views of the Douro River and Ribeira square, our apartment offers
the perfect conditions to discover the history and the charm of Porto.
Apartment comfortable, charming, romantic and cozy in the heart of Ribeira.
Within walking distance of all the most emblematic places of the city of Porto.
The apartment is fully equipped to host 8 people, with cooker, oven, washing
machine, dishwasher, microwave, coffee machine (Nespresso) and kettle. The
apartment is located in a very typical area of the city that allows to cross
with the most picturesque population of the city, welcoming, genuine and happy
people that fills the streets with his outspoken speech and contagious with
your sincere generosity, wrapped in a only parochial spirit.',
'description': 'Fantastic duplex apartment with three bedrooms, located in the historic
area of Porto, Ribeira (Cube) - UNESCO World Heritage Site. Centenary
building fully rehabilitated, without losing their original character.
Privileged views of the Douro River and Ribeira square, our apartment
offers the perfect conditions to discover the history and the charm of
Porto. Apartment comfortable, charming, romantic and cozy in the heart of
Ribeira. Within walking distance of all the most emblematic places of the
city of Porto. The apartment is fully equipped to host 8 people, with
cooker, oven, washing machine, dishwasher, microwave, coffee machine
(Nespresso) and kettle. The apartment is located in a very typical area
of the city that allows to cross with the most picturesque population of
the city, welcoming, genuine and happy people that fills the streets with
his outspoken speech and contagious with your sincere generosity, wrapped
in a only parochial spirit. We are always available to help guests',
...
}
```
Getting a text response like this will show that you can connect to your MongoDB Atlas database.
## Accessing Jina Embeddings v2
Go to the Jina AI embeddings website, and you will see a page like this:
Copy the API key from this page. It provides you with 10,000 tokens of free embedding using Jina Embeddings models. Due to this limitation on the number of tokens allowed to be used in the free tier, we will only embed a small part of the Airbnb reviews collection. You can buy additional quota by clicking the “Top up” tab on the Jina Embeddings web page if you want to either embed the entire collection on MongoDB Atlas or apply these steps to another dataset.
Test your API key by creating a new script called `test_jina_ai_connection.py`, and put the following code into it, inserting your API key where marked:
```
import requests
url = 'https://api.jina.ai/v1/embeddings'
headers = {
'Content-Type': 'application/json',
    'Authorization': 'Bearer <your Jina API key>'
}
data = {
    'input': ["Your text string goes here"],
'model': 'jina-embeddings-v2-base-en'
}
response = requests.post(url, headers=headers, json=data)
print(response.content)
```
Run the script `test_jina_ai_connection.py`. You should get something like this:
```
b'{"model":"jina-embeddings-v2-base-en","object":"list","usage":{"total_tokens":14,
"prompt_tokens":14},"data":[{"object":"embedding","index":0,"embedding":[-0.14528547,
-1.0152762,1.3449358,0.48228237,-0.6381836,0.25765118,0.1794826,-0.5094953,0.5967494,
...,
-0.30768695,0.34024483,-0.5897042,0.058436804,0.38593403,-0.7729841,-0.6259417]}]}'
```
This indicates you have access to Jina Embeddings via its API.
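If you want to sanity-check the response before wiring it into MongoDB, you can parse it with `response.json()` and confirm the vector length; `jina-embeddings-v2-base-en` produces 768-dimensional embeddings, which is why the search index we create later declares 768 dimensions. A small optional addition to the end of `test_jina_ai_connection.py`:
```
# Continues from test_jina_ai_connection.py: inspect the parsed response.
result = response.json()
embedding = result["data"][0]["embedding"]
print("Embedding dimensions:", len(embedding))   # expected: 768
print("Tokens used:", result["usage"]["total_tokens"])
```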
## Indexing your MongoDB collection
Now, we’re going to put all these pieces together with some Python functions to use Jina Embeddings to assign embedding vectors to descriptions in the Airbnb dataset.
Create a new Python script, call it `index_embeddings.py`, and insert some code to import libraries and declare some variables:
```
import requests
from pymongo.mongo_client import MongoClient
jinaai_token = ""  # your Jina Embeddings API key
mongo_url = ""  # your MongoDB Atlas connection URL, including username and password
embedding_url = "https://api.jina.ai/v1/embeddings"
```
Then, add code to set up a MongoDB client and connect to the Airbnb dataset:
```
client = MongoClient(mongo_url)
db = client.sample_airbnb
```
Now, we will add to the script a function to convert lists of texts into embeddings using the `jina-embeddings-v2-base-en` AI model:
```
def generate_embeddings(texts):
payload = {"input": texts,
"model": "jina-embeddings-v2-base-en"}
try:
response = requests.post(
embedding_url,
headers={"Authorization": f"Bearer {jinaai_token}"},
json=payload
)
except Exception as e:
raise ValueError(f"Error in calling embedding API: {e}/nInput: {texts}")
if response.status_code != 200:
raise ValueError(f"Error in embedding service {response.status_code}: {response.text}, {texts}")
embeddings = [d["embedding"] for d in response.json()["data"]]
return embeddings
```
And we will create a function that iterates over up to 30 documents in the listings database, creating embeddings for the descriptions and summaries, and adding them to each entry in the database:
```
def index():
collection = db.listingsAndReviews
docs_to_encode = collection.find({ "embedding_summary" : { "$exists" : False } }).limit(30)
for i, doc in enumerate(docs_to_encode):
if i and i%5==0:
print("Finished embedding", i, "documents")
try:
embedding_summary, embedding_description = generate_embeddings([doc["summary"], doc["description"]])
except Exception as e:
print("Error in embedding", doc["_id"], e)
continue
doc["embedding_summary"] = embedding_summary
doc["embedding_description"] = embedding_description
collection.replace_one({'_id': doc['_id']}, doc)
```
With this in place, we can now index the collection:
```
index()
```
Run the script `index_embeddings.py`. This may take several minutes.
When this finishes, we will have added embeddings to 30 of the Airbnb items.
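To double-check how many documents were actually embedded (useful if some API calls failed or you ran out of free-tier tokens), you can count the documents that now carry an embedding field. A short optional addition to the end of `index_embeddings.py`:
```
# Continues from index_embeddings.py: verify how many listings now carry embeddings.
embedded = db.listingsAndReviews.count_documents({"embedding_summary": {"$exists": True}})
print("Documents with embeddings:", embedded)
```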
## Create the embedding index in MongoDB Atlas
Return to the MongoDB website, and click on **Database** under **Deployment** on the left side of the screen.
(Image: creating an index on MongoDB Atlas from the “Database Deployments” page.)
Click on the link for your cluster (**Cluster0** in the image above).
Find the **Search** tab in the cluster page and click it to get a page like this:
Click the button marked **Create Search Index**.
Now, click **JSON Editor** and then **Next**:
Now, perform the following steps:
1. Under **Database and Collection**, find **sample_airbnb**, and underneath it, check **listingsAndReviews**.
2. Under **Index Name**, fill in the name `listings_comments_semantic_search`.
3. Underneath that, in the numbered lines, add the following JSON text:
```
{
"mappings": {
"dynamic": true,
"fields": {
"embedding_description": {
"dimensions": 768,
"similarity": "dotProduct",
"type": "knnVector"
},
"embedding_summary": {
"dimensions": 768,
"similarity": "dotProduct",
"type": "knnVector"
}
}
}
}
```
Your screen should look like this:
Now click **Next** and then **Create Search Index** in the next screen:
This will schedule the indexing in MongoDB Atlas. You may have to wait several minutes for it to complete.
When completed, the following modal window will pop up:
Return to your Python client, and we will perform a search.
## Search with Embeddings
Now that our embeddings are indexed, we will perform a search.
We will write a search function that does the following:
1. Take a query string and convert it to an embedding using Jina Embeddings and our existing generate_embeddings function.
2. Query the index on MongoDB Atlas using the client connection we already set up.
3. Print names, summaries, and descriptions of the matches.
Define the search functions as follows:
```
def search(query):
    query_embedding = generate_embeddings([query])[0]
results = db.listingsAndReviews.aggregate([
{
'$search': {
"index": "listings_comments_semantic_search",
"knnBeta": {
"vector": query_embedding,
"k": 3,
"path": ["embedding_summary", "embedding_description"]
}
}
}
])
for document in results:
        print(f'Listing Name: {document["name"]}\nSummary: {document["summary"]}\nDescription: {document["description"]}\n\n')
```
And now, let’s run a search:
```
search("an amazing view and close to amenities")
```
Your results may vary because this tutorial did not index all the documents in the dataset, and which ones were indexed may vary dramatically. You should get a result like this:
```
Listing Name: Rented Room
Summary: Rented Room
Description: Beautiful room and with a great location in the city of Rio de Janeiro
Listing Name: Spacious and well located apartment
Summary: Spacious and well located apartment
Description: Enjoy Porto in a spacious, airy and bright apartment, fully equipped, in a
building with lift, located in a region full of cafes and restaurants, close to the subway
and close to the best places of the city. The apartment offers total comfort for those
who, besides wanting to enjoy the many attractions of the city, also like to relax and
feel at home, All airy and bright, with a large living room, fully equipped kitchen, and a
delightful balcony, which in the summer refreshes and in the winter protects from the cold
and rain, accommodating up to six people very well. It has 40-inch interactive TV, internet
and high-quality wi-fi, and for those who want to work a little, it offers a studio with a
good desk and an inspiring view. The apartment is all available to guests. I leave my guests
at ease, but I am available whenever they need me. It is a typical neighborhood of Porto,
where you have silence and tranquility, little traffic, no noise, but everything at hand:
good restaurants and c
Listing Name: Panoramic Ocean View Studio in Quiet Setting
Summary: Panoramic Ocean View Studio in Quiet Setting
Description: Luxury studio unit is located in a family-oriented neighborhood that lets you
experience Hawaii like a local! with tranquility and serenity, while in close proximity to
beaches and restaurants! The unit is surrounded by lush tropical vegetation! High-speed
Wi-Fi available in the unit!! A large, private patio (lanai) with fantastic ocean views is
completely under roof and is part of the studio unit. It's a great space for eating outdoors
or relaxing, while checking our the surfing action. This patio is like a living room
without walls, with only a roof with lots and lots of skylights!!! We provide Wi-Fi and
beach towels! The studio is detached from the main house, which has long-term tenants
upstairs and downstairs. The lower yard and the front yard are assigned to those tenants,
not the studio guests. The studio has exclusive use of its large (600 sqft) patio - under
roof! Check-in and check-out times other than the ones listed, are by request only and an
additional charges may apply;
Listing Name: GOLF ROYAL RESIDENCE SUİTES(2+1)-2
Summary: GOLF ROYAL RESIDENCE SUİTES(2+1)-2
Description: A BIG BED ROOM WITH A BIG SALOON INCLUDING A NICE BALAKON TO HAVE SOME FRESH
AIR . OUR RESIDENCE SITUATED AT THE CENTRE OF THE IMPORTANT MARKETS SUCH AS NİŞANTAŞİ,
OSMANBEY AND TAKSIM SQUARE,
Listing Name: DOUBLE ROOM for 1 or 2 ppl
Summary: DOUBLE ROOM for 1 or 2 ppl
Description: 10m2 with interior balkony kitchen, bathroom small but clean and modern metro
in front of the building 7min walk to Sagrada Familia, 2min walk TO amazing Gaudi Hospital
Sant Pau SAME PRICE FOR 1 OR 2 PPL-15E All flat for your use, terrace, huge TV.
```
Experiment with your own queries to see what you get.
## Next steps
You’ve now created the core of a MongoDB Atlas-based semantic search engine, powered by Jina AI’s state-of-the-art embedding technology. For any project, you will follow essentially the same steps outlined above:
1. Create an Atlas instance and fill it with your data.
2. Create embeddings for your data items using the Jina Embeddings API and store them in your Atlas instance.
3. Index the embeddings using MongoDB’s vector indexer.
4. Implement semantic search using embeddings.
This boilerplate Python code will integrate easily into your own projects, and you can create equivalent code in Java, JavaScript, or any other language or framework that supports HTTPS.
To see the full documentation of the MongoDB Atlas API, so you can integrate it into your own offerings, see the Atlas API section of the MongoDB website.
To learn more about Jina Embeddings and its subscription offerings, see the Embeddings page of the Jina AI website. You can find the latest news about Jina AI’s embedding models on the Jina AI website and X/Twitter, and you can contribute to discussions on Discord.
| md | {
"tags": [
"Atlas"
],
"pageDescription": "Follow along with this tutorial on using Jina Embeddings v2 with MongoDB Atlas for vector search.",
"contentType": "Tutorial"
} | Semantic search with Jina Embeddings v2 and MongoDB Atlas | 2024-05-20T17:32:23.501Z |