NURS 6411: Week 7: Database Normalization Assignment
NURS 6411: Information and Knowledge Management | Week 7
One of the many advantages of databases is the reduction of duplicate information. Consider a patient who visits a health care provider once for an initial check-up, and then returns to the same provider several more times for follow-up examinations and tests. For each visit, the patient’s data (home address, insurance information, health history, etc.) is re-entered as a separate entry in the patient’s paper-based medical file and then reshelved. This unnecessary expenditure of time and effort can be avoided through the use of an efficient online health care database system.
This week, you appraise data redundancy and data normalization in health care databases.
Learning Objectives
Students will:
- Analyze normalization of data
Learning Resources
Note: To access this week’s required library resources, please click on the link to the Course Readings List, found in the Course Materials section of your Syllabus.
Required Readings
Coronel, C. & Morris, S. (2017). Database systems: Design, implementation, and management (12th ed.). Boston, MA: Cengage Learning.
- Chapter 6, “Normalization of Database Tables” (pp. 201–234)
When designing a database, normalization can diminish the risk of redundant input. This chapter defines and describes how to perform normalization of a dataset.
Chute, C. G., Beck, S. A., Fisk, T. B., & Mohr, D. N. (2010). The Enterprise Data Trust at Mayo Clinic: A semantically integrated warehouse of biomedical data. Journal of the American Medical Informatics Association, 17(2), 131–135.
This article explores the Mayo Clinic’s data warehousing system: the Enterprise Data Trust (EDT). The authors explain the various components of the EDT and detail how it can enhance research productivity, quality improvement, and best-practice monitoring.
Westra, B. L., Subramanian, A., Hart, C. M., Matney, S. A., Wilson, P. S., Huff, S., Huber, D. L., & Delaney, C. W. (2010). Achieving “meaningful use” of electronic health records through the integration of the nursing management minimum data set. The Journal of Nursing Administration, 40(7/8), 336–343.
This article describes the process of updating Nursing Management Minimum Data Sets (NMMDS) to achieve meaningful use compliance. The authors explain the methods used to update three NMMDS data elements, in addition to the results of the updates.
Khan, R., & Saber, M. (2010). Design of a hospital-based database system: A case study of BIRDEM. International Journal on Computer Science and Engineering, 2(8), 2616–2621. Retrieved from http://www.enggjournals.com/ijcse/doc/IJCSE10-02-08-050.pdf
The authors present the events at BIRDEM hospital surrounding the introduction of a more technologically-based system. The article shows the adjustments made by the staff to move from partial to fully digitized data storage and exchange. It also covers the process by which the data was transferred and organized within the new system.
Discussion: Normalization of Data
The transition into digitized data storage and access systems in health care requires a general adjustment in data capturing techniques. In the days of paper records, redundancy was the standard—information was written and rewritten with each patient visit. Digitized data storage and the normalization of data can help reduce this redundancy. Ideally, a patient can enter a facility and information previously submitted is readily available. The provider can simply apply updates and make adjustments to the information as necessary. This saves the health care provider time and the organization’s information system space. Additionally, this decrease in data input can improve quality and patient safety, as diagnostic errors associated with navigating a patient’s information become less likely.
Consider the following scenario:
You have recently been hired by a small community hospital as a nurse informaticist. One of your first responsibilities is to help to convert records to an electronic format. You decide to address the process of ordering medicine for patients and you need to develop a database to address the issue. Currently all of the patient information is contained in an Excel spreadsheet, which contains the following categories: PATIENT ORDER, PATIENT medical record number (MRN), PATIENT Name, Order Number, MEDICATION Name, MEDICATION Description, Quantity, PATIENT Address, and Date Ordered. In designing your database, you need to normalize the data in order to remove redundancies and duplications. What approach will you take to normalize the data?
Note: A PATIENT can have multiple orders but an order can be for only one MEDICATION. Patient medical record number (MRN) and Order Number are the primary keys.
- Review the information in your course text, Database Systems: Design, Implementation, and Management on how to normalize data.
- What form of normalization would you use (1NF, 2NF, 3NF)?
- Illustrate the normalized form for PATIENT ORDER.
- Reflect on the issues you encountered as you attempted to normalize the list.
- Consider the consequences of not normalizing data.
By Day 3
Post a brief description of how you would normalize the data from the scenario and your rationale. Describe the challenges that you encountered in determining how to normalize the data. Explain the possible consequences of failing to normalize data.
Read a selection of your colleagues’ responses.
By Day 6
Respond to at least two of your colleagues on two different days using one or more of the following approaches:
- Ask a probing question, substantiated with additional background information, evidence, or research.
- Offer and support an alternative perspective using readings from the classroom or from your own research in the Walden Library.
- Validate an idea with your own experience and additional research.
ADDITIONAL INFO
Database Normalization
Introduction
Normalization is a way of organizing the data in a database so that it stays consistent and is easy to update. Tables should be normalized as part of the design process whenever practical. This article explains the main normal forms and why they matter.
Normalization is a technique which organizes tables so that they can be easily maintained.
Normalization organizes tables so that they are easy to maintain. It helps you avoid redundant data and store each fact in exactly one place, instead of scattering copies of the same fact across many rows.
Normalization also helps prevent errors when data is updated. For example, if a customer’s details are copied onto every one of that customer’s order rows, problems appear when the customer’s information changes:
- If you update the name on some rows but not others, the table now holds two conflicting versions of the same fact;
- Until a later transaction corrects every copy, queries may return either the old or the new value, depending on which rows they happen to read.
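The update anomaly above can be sketched in a few lines of Python—the field names here are illustrative, not from any particular schema:

```python
# Unnormalized: each order row repeats the customer's name.
orders = [
    {"order_id": 1, "customer": "Ann Lee", "item": "gauze"},
    {"order_id": 2, "customer": "Ann Lee", "item": "tape"},
]

# Updating the name on only one row leaves the data inconsistent.
orders[0]["customer"] = "Ann Lee-Park"
names = {row["customer"] for row in orders}
assert len(names) == 2  # two conflicting "facts" about the same customer

# Normalized: the name lives in one place, keyed by a customer id.
customers = {101: {"name": "Ann Lee"}}
orders_nf = [
    {"order_id": 1, "customer_id": 101, "item": "gauze"},
    {"order_id": 2, "customer_id": 101, "item": "tape"},
]
customers[101]["name"] = "Ann Lee-Park"  # one update fixes every order
assert all(customers[o["customer_id"]]["name"] == "Ann Lee-Park" for o in orders_nf)
```

In the normalized version no update can ever leave two versions of the name, because only one copy exists.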
Normalization seeks to avoid redundant data and maintain just one fact per item.
In a normalized design, each fact about an entity is stored in exactly one place. For example, a person’s home state (Ohio or California) is recorded once, in the row that describes that person, rather than being repeated on every record that mentions them.
When the same fact is duplicated in several places, queries can return different answers depending on which copy they read. The extra copies also inflate storage and slow down the JOIN operations needed to reconcile them.
A table is in 1NF when every column holds a single, atomic value.
A table is in first normal form (1NF) when every column holds a single, atomic value, there are no repeating groups of columns, and each row can be uniquely identified by a key. Splitting multi-valued cells into separate rows (or a separate table) is usually the first step of normalization, and it loses nothing: the same data is still there, just in a more regular shape.
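As a small sketch of the 1NF step—with made-up column names—a multi-valued cell becomes one row per value:

```python
# Not in 1NF: the "phones" cell holds a repeating group.
patient = {"mrn": "P001", "name": "J. Smith", "phones": ["555-0100", "555-0101"]}

# In 1NF: one atomic value per column, one row per phone number,
# linked back to the patient by the key (mrn).
patient_row = {"mrn": patient["mrn"], "name": patient["name"]}
phone_rows = [{"mrn": patient["mrn"], "phone": p} for p in patient["phones"]]

assert phone_rows == [
    {"mrn": "P001", "phone": "555-0100"},
    {"mrn": "P001", "phone": "555-0101"},
]
```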
A primary key uniquely identifies each row; it can be one column or several.
A primary key is a column, or a combination of columns, that uniquely identifies each row. Composite keys (two or more columns taken together) are perfectly valid—in fact, 2NF is defined in terms of them—but wide composite keys make joins and indexes more cumbersome, so designers often add a small surrogate key instead.
Whichever key you choose, its columns may not contain null values. It is also wise to avoid keys built from volatile free-text data (values containing spaces or special characters such as “!”), since such values tend to change over time and complicate lookups in later queries.
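As a minimal sketch of the point above—using an illustrative schema based on the chapter’s scenario, not a definitive design—a composite primary key is legal SQL and enforces uniqueness of the column pair:

```python
import sqlite3

# A composite primary key (mrn, order_no) uniquely identifies each row;
# the pair, not either column alone, must be unique.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE patient_order (
        mrn      TEXT NOT NULL,
        order_no INTEGER NOT NULL,
        med_name TEXT,
        PRIMARY KEY (mrn, order_no)
    )
""")
con.execute("INSERT INTO patient_order VALUES ('P001', 1, 'aspirin')")
con.execute("INSERT INTO patient_order VALUES ('P001', 2, 'heparin')")  # same mrn is fine

try:
    # Repeating the whole pair (mrn, order_no) violates the key.
    con.execute("INSERT INTO patient_order VALUES ('P001', 1, 'duplicate')")
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True

assert duplicate_rejected
```

Note that `NOT NULL` on both key columns reflects the rule that primary key columns may not hold nulls.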
In 2NF, every non-key attribute must depend on the entire primary key.
In 2NF, every non-key attribute must depend on the entire primary key. If an attribute depends on only part of a composite key, it cannot stay in that table; it belongs in a table of its own.
Here is an example:
- Suppose orders are identified by the composite key (patient MRN, order number), and each order row also carries the patient’s name. The name depends only on the MRN, not on the whole key—a partial dependency. Moving the name into a PATIENT table keyed by MRN, and leaving the order rows keyed by (MRN, order number), brings the design into 2NF.
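The 2NF decomposition can be sketched as follows—a minimal illustration with invented sample values, not a full solution to the assignment scenario:

```python
# Flat table with composite key (mrn, order_no): patient_name depends
# only on mrn, a partial dependency that violates 2NF.
flat = [
    {"mrn": "P001", "order_no": 1, "patient_name": "J. Smith", "med": "aspirin"},
    {"mrn": "P001", "order_no": 2, "patient_name": "J. Smith", "med": "heparin"},
]

# 2NF: attributes that depend on only part of the key move to their own table.
patients = {row["mrn"]: row["patient_name"] for row in flat}
orders = [{"mrn": r["mrn"], "order_no": r["order_no"], "med": r["med"]} for r in flat]

assert patients == {"P001": "J. Smith"}  # the name is now stored once
assert len(orders) == 2                  # each order keeps the full key
```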
The 3NF definition is a bit more complex though, and the easiest way to understand this is with an example.
Third normal form builds on 2NF: in addition to every non-key attribute depending on the whole key, no non-key attribute may depend on another non-key attribute (a transitive dependency). A table in 3NF lets you update any single fact in one place without touching unrelated rows.
The easiest way to understand this is with an example: suppose each order row carries a medication name and that medication’s description. The description is determined by the name, which is itself not the key, so the description depends on the key only transitively. Moving the name and description into a MEDICATION table of their own, and keeping just a medication reference on the order, removes the transitive dependency.
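The transitive dependency in the example above can be sketched the same way—again with invented sample values:

```python
# Orders keyed by order_no; med_desc depends on med_name, not on the key —
# a transitive dependency that violates 3NF.
flat_orders = [
    {"order_no": 1, "med_name": "aspirin", "med_desc": "analgesic"},
    {"order_no": 2, "med_name": "aspirin", "med_desc": "analgesic"},
]

# 3NF: non-key attributes that depend on another non-key attribute move out.
medications = {row["med_name"]: row["med_desc"] for row in flat_orders}
orders_3nf = [{"order_no": r["order_no"], "med_name": r["med_name"]} for r in flat_orders]

assert medications == {"aspirin": "analgesic"}  # the description is stored once
assert len(orders_3nf) == 2
```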
Removing redundant data from your database helps make sure that you don’t store multiple copies of the same piece of information.
Removing redundant data means the database never stores multiple copies of the same piece of information. Redundant copies cause two broad classes of problems:
- Data integrity issues – when the same fact exists in several rows, an update that reaches some copies but not others leaves the database inconsistent, and users and applications can no longer tell which value is correct.
- Performance issues – duplicated values waste storage, and every insert, update, and delete has to repeat work across all the copies, which slows the system down overall.
There are quite a few rules for when things in your database should match, and when things in your database should have their own table.
There are quite a few rules for deciding when pieces of data belong in the same table and when they need a table of their own.
First, the data in one table should describe the same kind of thing: every column of a customer row should be a fact about that customer. Facts about something else—a pet, an order, a department—do not belong there, even if they are loosely related.
Next, anything that can occur many times per row (a customer with several pets, an order with several line items) needs its own table, linked back by a key. Otherwise you either limit how many occurrences you can store or end up overwriting old information whenever new information arrives.
Normalization helps your database run more efficiently.
Normalization helps your database run more efficiently: it reduces the space your data occupies, makes changes faster and safer to apply, and helps avoid data loss and integrity problems.
The purpose of normalization is to simplify a schema so that data is stored more efficiently than it otherwise would be. The normal forms most commonly applied in practice are the three described above: 1NF, 2NF, and 3NF.
Conclusion
I hope you found this article helpful, and I wish you the best of luck in your quest to normalize your database.