Saturday, August 31, 2019

Privacy by default

The word ‘pervasive' is indicative of the intrusive nature of ICT, as evidenced by two of its features: data would be collected from various origins, and it would be done without the user's awareness. The paper highlights five objectives that must be achieved for ICT to avoid being misused and harming its users: the right of deletion, the right to be forgotten, data portability, privacy, and data protection principles must not be compromised or neglected. The following examples list just three common ways that these five objectives have not been met in many internet applications.

Firstly, one such risk is that a user's personal information, though not directly known, could be revealed through large-scale data collection from different sources, which may even uncover new information, thus violating the user's privacy. A person who does not wish to be recognized online can therefore be identified through such means. Secondly, the paper details that many applications consider the installation of "information security, privacy and data protection" only as an afterthought, rather than at the beginning, at the drawing board. This substantially compromises the application's security and the user's privacy, as the capabilities of these measures are reduced. Thirdly, with the advent of cloud computing and social networking services, the likelihood of users becoming "locked-in" to a particular ICT service provider increases, because it becomes more difficult to import and export their information to other service providers. The lack of data portability here also means that users do not have control over their own data.

The paper mentions that there are two general principles that should be followed in the policy making of ICT. Firstly, ICT should not "violate human identity, human integrity, human rights, privacy or individual or public liberties".
Secondly, individuals should have control of all their personal information created or processed within the ICT, unless such an action violates the first principle. With regard to this, the paper illustrates four methods for reaching the objectives:

1. Privacy, data protection and information security risk management
2. Privacy by design and privacy by default
3. Data protection legislation: harmonised/coherent application/enhanced enforcement
4. Standardization

Each of the above four options addresses various challenges related to ICT. The first option is not simply a technology-focused idea, as the paper states that it would also be important to look into other measures, such as legal, regulatory, procedural and organizational ones. The main idea of this option is to avoid leaving said protection measures until the end of the development process as an afterthought; they should be included at the planning stage, with adherence to best practices to avoid or reduce common risks.

The second option operates on the basis that it may not be the technology itself that puts privacy and security at risk, but the way that it is created and implemented. It claims that applications should not collect data unnecessary to their functions, and that users should be made aware of what information would be gathered from them as well as what said information would be used for. Users should also be informed of how to exercise their rights, and applications should adhere to data protection principles. The flashlight application example mentioned before was in clear violation of this, and this real-life example further reinforces the need for such an option.
From the technical standpoint, the paper states that personal data protection measures should be defined, such as built-in privacy options and mechanisms to inform and educate users on data processing, although the challenge would be to do so while operating within the limited processing power and/or memory of the applications.

The third option focuses on the legal aspect of data protection, such as strengthening, clarifying and harmonising the powers of data protection authorities in order to make sure that all legislation is enforced, rather than only selected laws. The paper also states that violations should be sanctioned significantly to deter people from making applications that neglect such issues. This is to ensure the transparency of applications and to give users control over their own data. The "concept of indirectly identifiable data" also has to be improved and elucidated to avoid uncertainty in legislation.

The last option, standardization, allows ease of conformity with legal requirements and certification due to the clarity it provides, thus being capable of educating users on how to exercise their rights and allowing them to make informed choices. One weakness of standardization is that "standards are voluntary and non-binding", and thus it might not be very effective; it would require measures which are more binding.

The impact of these options is the building of trust between consumers and applications. Trust is important in the online environment because without it, consumers are less likely to buy and use new applications, thus slowing the invention of new technologies and economic growth, and causing the public sector to take longer to benefit from digitizing its services. The paper concludes that having a binding law with stronger data protection enforcement is the best option to achieve the goals for ICT, ensuring that applications are trustworthy and compliant with user rights.
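The "privacy by default" idea described above can be illustrated with a small sketch. This is a hypothetical example, not any real application's API: all data-collection flags start off, collection requires explicit, purpose-labelled opt-in, and the consent record itself is exportable (a nod to data portability). The class and method names (`PrivacySettings`, `opt_in`, `may_collect`, `export`) are invented for illustration.

```python
# Minimal sketch of "privacy by default": a hypothetical application
# collects nothing unless the user explicitly opts in, and each opted-in
# category records the purpose that was shown to the user.
from dataclasses import dataclass, field


@dataclass
class PrivacySettings:
    # No categories are consented to by default (privacy by default).
    consents: dict = field(default_factory=dict)  # category -> stated purpose

    def opt_in(self, category: str, purpose: str) -> None:
        """Record explicit, informed consent for one data category."""
        self.consents[category] = purpose

    def opt_out(self, category: str) -> None:
        """Right of deletion: consent can be withdrawn at any time."""
        self.consents.pop(category, None)

    def may_collect(self, category: str) -> bool:
        """Collection is allowed only for explicitly consented categories."""
        return category in self.consents

    def export(self) -> dict:
        """Data portability: the user can export their consent record."""
        return dict(self.consents)


settings = PrivacySettings()
assert not settings.may_collect("location")  # nothing is collected by default
settings.opt_in("location", "show nearby stores")
assert settings.may_collect("location")
settings.opt_out("location")
assert not settings.may_collect("location")
```

The design choice mirrors the paper's argument: the safe state is the default, and every departure from it is tied to a purpose the user was shown, rather than privacy being bolted on as an afterthought.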

Friday, August 30, 2019

Archibald Motley’s Art

Archibald Motley Jr. is one of the African American painters who is renowned all over the world because of his work. One of his popular works is entitled ‘Blues’. ‘Blues’ is my favorite of Archibald Motley’s works because of two things: one, the socio-political message the painting evokes, and two, the artistic style that Motley features in this particular work. The socio-political meaning of the painting refers to the feeling of fusion it evokes. If the audience knows the background of the painting, then they will understand how it symbolizes the ability of music to bring together people from different parts of the world with different origins and inclinations. The title refers to the type of music that originated with African Americans, and at the time Motley painted it, it was a tribute to a part of the global music scene that was growing in public acceptance and enjoyment. As Shuman pointed out in explaining the relevance of the painting, ‘in the hands of artists like Hughes and Motley, the blues were a potent force in both popular and high culture’ (Shuman, 2002, p. 739). The aesthetic appreciation for this particular work is focused on Motley’s ability to pay tribute to the creators of such music by designing the composition in a way that black acts as a dominant feature of the painting, broken by the colored images of the dancers and the musicians, their clothing and the instruments. There is a sensual and fluid feeling to the painting: the curved lines prompt the audience to get a glimpse of the high that comes along with dancing the blues. The prevailing black patterns seem like references to Black ancestry, origin and lineage, since the painting is a tribute to the success of one of the types of music that they introduced to the world. This is as much a reflection of his pride in his skin color.
As an African American himself, Motley speaks through the painting of how Black people are fusing with the rest of the world in great fluidity.

Thursday, August 29, 2019

The Ideal Man Essay

Ideal Man Essay: Rewrite
Alicia Luttrell
AP English
October 11, 2012

From ancient times to the present day, authors from around the world have tried to portray the ideal man. After reading some of their stories, it is easy to see that each story depends on the type of culture the author was raised in. Throughout this pursuit, the ideal man has taken on many different forms. The ideal man may be thought of as someone who is perfect, but some may see him as someone who overcomes faults or doubts. He could also be seen as someone who tries to become the ideal man or is well-known and liked throughout the land.

In Flight, the young man Pepe has to learn to become a man after his father dies. His mother tells him almost daily that he is lazy and needs to get to work. As Pepe’s mother starts to rely more on him, she sends him into town to pick up some medicine and supplies. This is the start of Pepe forming into a man. His mother is instilling responsibility in him to help out the family. He goes on a journey and ends up proving that his mother should never have sent him, because he kills someone. He runs from his crime and tries to get away from facing the consequences. Although he wanted to be like his father, the ideal man in Pepe’s eyes, he was young and foolish. His mother’s doubt didn’t help, though. John Steinbeck wrote this story to show the reader that just because there is a void to fill does not mean that the person filling that void is ready for the task. Pepe tried to be like his father and fill his shoes. Some readers might view trying as a characteristic of an ideal man. Although Pepe tried, he still lacked the strength and courage to face his mistakes.

The strange story of The Handsomest Drowned Man in the World is an example of creating an ideal man. The town’s people created a life that they thought this handsome man they found on the beach should have lived.
Instead of just burying him and being done with it, they chose to give him a life story so that he would not be forgotten. They gave him the name Esteban, which means great and beautiful. The women compared him to their husbands, even though they knew nothing of him. The town’s people went out of their way to give him a lavish funeral, one that everyone would remember. These people were using Esteban for their own self-improvement. They saw how this man had been forgotten and did not want the same thing to happen to them. This thinking caused them to change their lifestyles and better themselves.

In the short story A Good Man Is Hard to Find, there are many characteristics that could be put together to make up one ideal man, but separately everyone has flaws. The main character of the story, the grandmother, is in search of a good man. She is always quick to say someone is a “good man” before she knows anything about them. For example, when she meets Red Sam, he mentions to the family that he let some “fellers” charge their gas last week; she instantly decides that he is a good man. Red Sam was caring, generous, and thoughtful, things that an ideal man should be. On the other hand, Red Sam was lazy and rude towards his wife, not a characteristic of an ideal man. When the grandmother meets the Misfit, even though she knows he is a criminal, she still calls him a “good man.” He is kind to her, but in the end he kills her because she knows too much. Flannery O’Connor wrote this short story to let the reader know that just because someone may seem like a “good man”, it does not always mean that they are. Red Sam seemed kind and caring, but he was really just lazy and rude. The Misfit turned out to be a cold-hearted killer after seeming like he cared enough to stop and help the family on the side of the road. There are people in the world today who seem good, but are nowhere close to it.
They can say whatever they want, but their actions will speak for themselves. As all of these stories show, being an ideal man is nearly impossible. Many people possess qualities that would make up an ideal man, but never all the qualities needed. They can never possess all the qualities because every perspective on the ideal man is based on what the reader believes in. Pepe was not mature enough, Esteban was not remembered for what he actually did, and Red Sam and the Misfit appeared to be good men, but were not even close. There will never be an ideal man living in society. The ideal man will only live on through the stories, movies, and imagination people create. The ideal man is all based around culture and belief.

Wednesday, August 28, 2019

Reflection paper about how to slingshot, the history of drag race Essay

Reflection paper about how to slingshot, the history of drag race - Essay Example However, the video presents a remarkable history of how drag racing evolved to become one of the most adored motor sports in the world. Starting from the streets, drag racing began as a passionate duel for a few people. What actually made the sport fun is that there were no other comparable motor competitions at the time. This arguably made it difficult for the pioneers, because they were developing a sport from scratch. Nevertheless, the passion, dedication, and the desire to make the sport better revolutionized drag racing. The video offers interesting insight into how the sport of drag racing tapped into the inventions of its time. For instance, the sport began at a time when timing machines were being developed. In the initial competitions, the winner was determined by finishing at the front. However, the invention of timing machines saw a significant shift of drag racing into a timed trial event. It can be argued that the founders of drag racing were indeed opportunistic, considering the way they took advantage of the inventions of the time. Whereas globalization has enabled the modern sporting industry to tap into inventions from across the globe, the pioneers of drag racing deserve credit, since globalization in their time was still in its infancy. The history of drag racing is an epic example of self-determination. It is easy for modern-generation sports enthusiasts to enter the sport of their desire because everything has already been set up. However, the history of drag racing represents a generation that was purely driven by self-determination. Since 1940, when the sport arguably began, there were no defined sporting structures, rules, safety guidelines, infrastructure, or sponsorships. Most often, the participants used their own money and resources to make it happen. More importantly, the inventors sacrificed their own lives and spent a significant amount of time away from their families. Their

Studying abroad Essay Example | Topics and Well Written Essays - 750 words

Studying abroad - Essay Example I believe that my decision to study abroad is one of the best decisions I have made in my life. There are many benefits of studying abroad, including earning more experience, developing independent responsibility, and changing my future. First of all, studying abroad has many positive effects on people; one of them is earning more experience in life. By studying abroad you may gain experience in how to deal with people and how to make more friends. For example, before I came to the United States, I did not have any friends who did not speak my language. But after I came to the United States, I started to meet different people. After a few months, I had made many friends from the United States, Mexico, and many Asian countries. Also, because of modern technology, I was able to communicate with them easily. In addition, you may learn much information about many countries along with learning their languages. One day, one of my friends went to Canada to study English, and after a few months he came back to Kuwait. He said, “Of course I went to Canada to study English, but I also learned some French.” So, many people can learn more than one new language, and can learn a lot about many countries. Not only does studying abroad give a person more experience, it also makes him independent. Another reason for studying abroad is developing independent responsibility. ... You need more experience, and studying abroad may help you with that. For example, when my roommate was in Ohio, he met two people from Mexico. One of them was a good man but the other one was a bad man. So he decided to be friends with the good man, and he said to the other one, “I do not want to be your friend.” Studying abroad helped him to open his mind and decide what was better for him. Furthermore, taking on more responsibilities in the house was another part of independent responsibility. For example, in Kuwait I did not do anything in the house.
My mother cooked the food for my family and me, and my father paid the bills for the house. But here in the United States, I have to pay my bills, cook my meals and clean my house and clothes by myself. Not only is independent responsibility an effect of studying abroad, it is also changing my future. Changing my future is one of the objectives of studying abroad. It affects my future in many ways. One of them is that there are many opportunities to obtain a graduate scholarship for Master’s and Ph.D. degrees. For example, my uncle went to the UK to continue his Master’s and Ph.D. degrees in engineering. One of the most important reasons that helped him obtain a scholarship was that he had been a student in the United States for his bachelor’s degree. Furthermore, you may have a bigger chance of getting a job because you studied abroad. Also, many private-sector companies in Kuwait give opportunities to students who studied abroad over the many people who graduate from local schools. One day, two of my friends applied to a company to work there. One of them had graduated from the United Kingdom and the other one had graduated from Kuwait University. After a week, my friend who had graduated from the United Kingdom

Tuesday, August 27, 2019

Informative report Assignment Example | Topics and Well Written Essays - 250 words

Informative report - Assignment Example The requirements include applicants’ eligibility, whether all application materials were turned in by the deadline, whether the applicant achieves the minimum standard scores on all evaluations, and, importantly, whether he/she displays positive behavior, attitude and commitment to completing the college program (Gateway to College). The FRCC office also offers financial aid to students to pay their tuition and other fees. To get that aid, students need to meet certain eligibility conditions. Apart from being a U.S. citizen or a U.S. permanent resident, they have to be classified as a Colorado resident as well. In addition, they need to have a high school diploma, or pass the ACCUPLACER test. Importantly, they should be in “good standing in accordance with Front Range Community College’s Satisfactory Academic Progress policy” (Financial Aid). Choosing a career is one of the most crucial decisions that students will take in their lives. To help with that crucial decision, FRCC has in-house career counselors who can assist the students. The students can schedule an appointment with these counselors, who will surely help them explore various options and then focus on one. There are also career assessment tools and software, which can aid the students in selecting their

Monday, August 26, 2019

Analysis of how Joint Warfare was applied during Operation Just Cause Research Paper

Analysis of how Joint Warfare was applied during Operation Just Cause in Panama - Research Paper Example In order to tackle future political challenges, there is a need to develop an army which is prepared not only for offensive and defensive activities, but also to maintain stability and to conduct supporting functions. The military operations in Haiti, Bosnia and the Middle East demonstrate the requirement of joint forces in order to ensure better cooperation. The trends of joint operations in the 21st century evidently reveal the fact that single-service military operations have become history. In order to prevail in a rapidly changing environment, a joint approach to military planning is considered a critical success factor for any nation. The research paper provides an in-depth analysis of the application of joint warfare during ‘Operation Just Cause’ in Panama. The objective of the research is to analyze how joint operations were implemented during Operation Just Cause. Besides, the paper also aims at understanding how the Goldwater-Nichols Act affected the planning of Operation Just Cause.

Basic Description of Joint Operations

The term ‘Joint Operation’ denotes military movements which are undertaken by joint military forces. The joint operation principle is established on a sound foundation of war philosophy, concepts and practical knowledge. Joint operation principles identify the basic and advantageous impacts of ‘unity in command’ and the cooperation which results from the integration and harmonization of military operations. This cooperation can increase the effectiveness and efficiency of a military force. The benefits of joint operation spread beyond the operational background across the array of military actions [2]. ‘Joint Warfare’ is often used by the military forces of the US in conjunction with other countries and agencies. Joint warfare is required in order to create a decisive and combined combat strength.
In the joint warfare philosophy, military organizations incorporate and synchronize their air forces, land forces, naval forces and special operations forces in order to accomplish strategic and operational objectives. The objective of joint warfare is to enhance the effectiveness of a military assault [3].

The Origin of Joint Concepts

The history of the US reveals the significance of joint operations. The War of 1812 between the US and Great Britain provides a great example of the success of joint operations. However, the operation was criticized due to misperceptions and low levels of cooperation and coordination. The first joint military board in the US was established in the year 1903. The joint board was aimed at planning joint military operations and resolving issues of common concern. Yet, the joint board was provided with little right to solve

Sunday, August 25, 2019

Parenting Styles and the Chinese Tiger Mother. Why Chinese Mothers Are Essay

Parenting Styles and the Chinese Tiger Mother. Why Chinese Mothers Are Superior - Essay Example The authoritarian style is characterized by parents who may be described as exercising absolute control in dealing with their children. Orders are to be carried out unquestioningly, and no explanations are given. The child has no freedom to make any choices. The parents set the bar for performance exceptionally high, and insist that the child meet those standards. Any failure to rise to these expectations is met with harsh criticism. In the case of authoritative, or democratic, parents, control is tempered with affection. The emphasis is on firm guidance and not on punishment. The child is encouraged to share the responsibility for the decisions taken, and is given reasons and explanations for any behavioral expectations. On the other hand, permissive parents grant the child unlimited freedom of choice. This involves a very hands-off approach, in which the parents set no rules, have no expectations, and make no effort to discipline the child. This style is based on unconditional acceptance. Amy Chua’s parenting style, as evidenced by her article Why Chinese Mothers Are Superior, is obviously based on authoritarianism. Chua is in absolute control of her daughters’ lives.

Saturday, August 24, 2019

Reducing driver texting while driving can significantly reduce the number of car accidents Essay

Reducing driver texting while driving can significantly reduce the number of car accidents - Essay Example Texting while driving kills thousands of people each year. To stop these completely preventable accidents, every person who drives needs to stop texting in the car and wait to answer the phone. Distracted driving, which is the practice of not paying full attention to the road, is a big killer, and cell phone usage is one key factor. According to the FCC, in 2012 alone “driver distraction was the cause of 18 percent of all fatal crashes,” totaling more than 3,300 deaths and over 421,000 injured people (FCC). While there are a number of ways that drivers could be distracted in their cars, text messaging stands out as a very notable culprit. There is a huge number of at-risk drivers who use cell phones. A poll from 2014 showed that 90% of all Americans own cell phones, 64% of which are smartphones (Pew Research Center). On top of that, studies have found that “text messaging creates a crash risk 23 times worse than driving while not distracted” (FCC). This is because, as can be imagined, the driver of a car who is trying to text has his or her attention split between a screen and the road. Distracted driving accidents have increased substantially in the past decade due to an increase in cell phone and smartphone usage. From 1999 to 2008, there was an increase of almost five percent in the number of fatal accidents which involved distracted driving (Wilson & Stimpson 2214). This makes sense when thinking about the increase in cell phone usage during that time. The last few years of this period show the biggest increase, which ties in with smartphones becoming popular among teenagers. Just from 2005 to 2008 there was an increase of 28% in distracted driving deaths (Wilson & Stimpson 2214). But there is still hope for the texting disaster. As Alosco et al.
showed in a simulated driving study, texting as well as eating caused “impaired performance,” and interestingly, those who took part in the study understood after the fact that “texting

Friday, August 23, 2019

Is sickle cell pain adequately managed in a hospital setting Essay

Is sickle cell pain adequately managed in a hospital setting - Essay Example The latter is a synthesis of research from the main contributors to the same area of study. The two medical research articles presented in this critique will be analyzed in six (6) parts. (1) The first concerns the statement of the research problem. This section will determine whether the problems stated or implied in the two research articles are clear, researchable, and related to the field of nursing. The first part will also examine the competence and qualifications of the researchers who conducted the research. (2) The second part is concerned with the literature review. In this section, the relevance, comprehensiveness, currency, and logic of the literature reviews will be evaluated. (3) The third part of this critique examines the study design. It consists of the study design, a discussion of the theoretical framework and the statement of the hypothesis. (4) The fourth is about the data collection. It identifies the data collection methods used in the research articles, including the justifications for the usage of these methods. This part also addresses the validity and reliability of the data collection methods. (5) The fifth part tackles the data analyses of the two articles. This section assesses the appropriateness and descriptiveness of the analysis methods applied to the data. The analysis methods involve the presentation of findings in the form of tables, graphs, and charts, along with some discussion. (6) The last part discusses the conclusions and recommendations presented in the two pieces of research for future studies. This section evaluates whether the conclusions presented fulfil their purposes. In Thomas, Wilson-Barnett, and Goodhart’s research article, they state the problem concerning the difficulty of managing pain in patients with sickle cell disease. They state the problem by describing the current state of pain management for sickle cell disease in hospitals.
As quoted in their article: “Painful crisis

Thursday, August 22, 2019

The Person Who Inspired My Life Essay Example for Free

The Person Who Inspired My Life Essay “The strongest influences in my life and my work are always whomever I love. Whomever I love and am with most of the time, or whomever I remember most vividly. I think that’s true, don’t you?” (Tennessee Williams). I pondered this question for some time until I realised that the answer is right in front of me. Every family has people who influence the lives of the other family members. Often the father’s or mother’s influence is the most important. However, in my family my brother has influenced me the most, because I enjoy his company, I respect him, and I can rely on him for good advice. He has all the qualities of a good friend, and by good friend I mean one who is understanding, but not too sympathetic. My brother will stand up for me, but is not afraid to tell me when I am wrong. He is a good person to know because he enjoys having fun. He is always up for hiking, fishing, swimming or even road tripping. He also enjoys going to the theatre, and when I double date with him he is the one who keeps us laughing at what he says. I respect my brother because he is true to his principles, and I have high esteem for his set of values. He worked his way through university when he could have chosen the easier way by staying at home. This inspired me to do the same and gave me the strength to pull it off. I hope that I can remain true to my values as my brother has. No matter what happens, I know I can rely on his judgement. For instance, when I was getting a lot of pressure from someone and I couldn’t talk to my parents about it, he was the one who listened to my side of the story without instant condemnation. He never over-reacted to the problem without understanding it. Instead, he listened to my problem, asked me questions and helped me see the way to the solution. I feel like I can always rely on his advice. Although I love and respect my family, I am especially close to my brother.
He is enjoyable company and has earned my respect. I know I can rely on him no matter what the situation is.

Wednesday, August 21, 2019

Leisure Time Essay Example for Free

Leisure Time Essay In the mid-1800s, Americans started to find a lot of extra time on their hands and had more time for fun activities. Businesses’ declining working hours were a major reason for the rise of leisure time, but what also had a huge impact was the installation of electric lighting in city streets, which allowed Americans more time to perform activities at later hours (Lund). So what did Americans decide to do with this time? The recommendation of physical exercise as a way to relieve stress raised the importance of physical activities, and attending spectator sports also became part of the leisure activities in the United States. Vaudeville, dance halls, and motion pictures became popular, while new modes of travel allowed Americans to visit the mountains, seaside resorts and county fairs (Lund). To begin with, after the Civil War, the popularity of sports as leisure activities grew as people began to see the importance of exercise to health. Initially only the wealthy could take part in most sporting events, because the middle and lower classes couldn’t afford most of the activities. The opening of publicly available gymnasiums, courts, and fields allowed the middle class to participate in those activities as well (Lund). Athletic clubs such as the New York Athletic Club were organized, and the YMCAs began to institute sports programs. These programs mostly focused on track and field events, introduced by communities of European descent, and gymnastics, heavily influenced by German athletics. Gymnasiums, which included exercises using Indian clubs, wooden rings, and dumbbells, were opened in many Eastern cities, which helped expand exercise. Also, with the extra time Americans gained, they sought out new ways to keep themselves occupied and entertained, and began experimenting with creating new activities.
As a result of this search for new pastimes, Americans invented what is today called America's greatest pastime: baseball. Variants of baseball had been played before the 1800s, but the rise of the sport did not come until after the Civil War (Bluett). Soldiers at war had a great deal of spare time, and soldiers from New York introduced the game to their comrades in camp; soon it spread to other camps across the North and South. Because of this, baseball expanded from the North to the South and was more widely recognized after the Civil War ended. Once the soldiers who had learned the game went back home, they introduced the sport to their families and friends, which helped spread its name (Bluett). In 1870 the first professional team was founded, though it lasted fewer than five years. In 1876 the first major professional league was formed, the National League of baseball clubs, which still exists today. Another sport created by Americans in the 19th century as a leisure activity was basketball. The game was invented by James Naismith in 1891, while he was an instructor at the School for Christian Workers, later called the International YMCA Training School, in Springfield, Massachusetts (McComb). The YMCA was an international organization built on the idea that physical as well as religious training was important for people. Naismith had graduated from McGill University and, after leaving a Presbyterian seminary, joined the YMCA school in Massachusetts. When he arrived, the director of the YMCA, Luther Gulick, asked Naismith to design an interesting indoor sport so that Americans could exercise in a gymnasium while having fun during the cold winter months when they could not be outside (McComb).
He thought that people were bored of just working out with weights and running around without a purpose. Naismith considered the idea of throwing a ball into a box, like a warm-up exercise he had used when he played rugby, only this time the box was ten feet up and players had to dribble the ball (McComb). The game was a success: Americans all over the country were playing it, and it even spread internationally. Women rarely played sports in the 1880s, but once they were introduced to basketball they were so fascinated that it became one of women's favorite sports in the 1890s. Furthermore, although men performed the majority of sports activities in the 19th century, opportunities for women began to appear as the century ended. Some of the most popular sports for women in the 19th century were basketball and lawn tennis, and by the end of the century schools began to offer even more sports activities for females, such as gymnastics and basketball. Girls' and women's basketball flourished in the early years of the game. Two days after the invention of basketball, some female teachers who had heard the shouts of players coming from the gym asked Naismith to instruct them (Woolum). Naismith did so, and the women became very fond of the sport. In 1892, after the women knew the game better, the YMCA hosted the first tournament for women (Woolum). The women's game spread, and women even got to play in college: in 1896 the first intercollegiate game was played in Berkeley between the University of California and Stanford. No male spectators were allowed at the Berkeley match because it was considered improper for male viewers to attend (Woolum). The other sport women were very fond of was lawn tennis. Mary Ewing Outerbridge has been credited with introducing lawn tennis to the United States in 1874.
When she went to Bermuda she watched British Army officers hitting a rubber ball with spoon-shaped paddles strung with catgut across an outstretched net. Fascinated by the game, Outerbridge brought a box of tennis equipment with her when she returned to New York (Woolum). At home she introduced the game to her friends and family. The game rapidly became popular, not just with women but with men as well, and spread throughout the Northeast as a favorite national pastime (Woolum). In addition, Americans did not just pass their time with sports; they also enjoyed other activities like attending vaudeville shows. Within cities, people attended vaudeville shows, which would include many acts. Vaudeville shows were usually watched by the middle class; the shows often ran for many hours so people could come and go whenever they wanted (Ushistory). At a vaudeville show the viewers could enjoy a performance consisting of Shakespeare plays, acrobatics, singing, dancing, and comedy (Lund). Vaudeville shows were not just for entertaining; they also commented on economic and ethnic situations. Other popular shows of the time included circuses and Wild West shows, the most famous of which was William F. Cody's Buffalo Bill's Wild West. Motion pictures also served as entertainment during leisure time for urban audiences. Initially the movies were novelties in kinetoscope viewers, until they became acts in their own right on the vaudeville stage. As motion pictures became longer, they moved into storefront nickelodeon theaters and then into even larger theaters (Lund). Finally, another leisure activity Americans took up was going on vacation. Some people wished to go further afield on their vacations and leave the city, now that a faster way to travel had arrived: the train (Ushistory). Many with limited budgets, mostly the lower and middle classes, went to the countryside or the beaches.
Towards the late nineteenth century, resorts opened on the outskirts of cities, such as the beach area of Asbury Park in New Jersey, founded in 1870 (Lund). Amusement parks opened in places like Coney Island in New York, founded in 1897, offering rides, fun houses, scenes from foreign life, and the latest technological breakthroughs, such as motion pictures. National parks were created by the federal government to preserve nature, and many people began to tour these areas on vacation (Lund). One such example was Yellowstone Park, where people camped or stayed at hotels built in the park in the late 1880s. World's fairs and expositions held in different U.S. cities offered Americans a chance to tour the world in one place. The fairs celebrated progress and featured exhibits of science and technology, foreign villages, shows, rides, and vendors. The first major one was the Philadelphia Centennial Exposition in 1876 (Lund). All in all, Americans in the 19th century performed a large variety of leisure activities. Men took up many different physical activities, especially once clubs like the YMCA were introduced, which helped more Americans become active, and Americans played sports like baseball and basketball in their leisure time. In the late 19th century even women were starting to take part in sports and other activities. Another major leisure activity was going to vaudeville shows, which featured all sorts of acts from Shakespeare plays to comedy. Finally, as travel became easier, Americans started to vacation outside the city, visiting resorts and world's fairs.
Bibliography
1. Lund, Karen C. "America at Work / America at Leisure, 1894-1915." 01 June 2000. Web. 17 Dec. 2013.
2. Bluett, Terry. "Baseball and the Civil War."
Welcome to the Pennsylvania Civil War Trails. Civil War Trails, n.d. Web. 17 Dec. 2013.
3. McComb, David G. "The Great Spectator Sports of North America." ProQuest, 01 Dec. 1998. Web. 18 Dec. 2013.
4. "Sports and Leisure." U.S. History Online Textbook, n.d. Web. 19 Dec. 2013.
5. Woolum, Janet. "Women in America Sports." Elibrary, 05 Aug. 1992.

Tuesday, August 20, 2019

Labelling Theory for Understanding Youth Crime

Labelling Theory for Understanding Youth Crime Outline labelling theory and consider its usefulness in understanding youth crime and anti-social behaviour in Britain today. Labelling theory claims that deviance and conformity result not so much from what people do as from how others respond to those actions; it highlights social responses to crime and deviance (Macionis and Plummer, 2005). Deviant behaviour is therefore socially constructed. This essay will describe labelling theory in full and comment on its importance for understanding the deviant and anti-social behaviour of youth in Britain today. Labelling theory became dominant between the early 1960s and the late 1970s, when it was used as a sociological theory of crime influential in challenging orthodox positivist criminology. The key figures in this theory were Becker and Lemert. The foundations of this view of deviance are said to have been first established by Lemert (1951) and were subsequently developed by Becker (1963). Indeed, labelling theory has since become a dominant paradigm in the explanation of deviance. The symbolic interactionist perspective was extremely active in the early foundations of labelling theory. Labelling theory rests on the assumption that deviant behaviour is to be seen not simply as the violation of a norm but as any behaviour which is successfully defined or labelled as deviant. Deviance is not the act itself but the response others give to that act, which means deviance is in the eyes of the beholder.
Labelling theory was built on Becker's (1963: 9) statement that "social groups create deviance by making the rules whose infraction constitutes deviance, and by applying those rules to particular people and labelling them as outsiders... deviance is not a quality of the act a person commits, but rather a consequence of the application by others of rules and sanctions to an offender." The deviant is one to whom that label has successfully been applied; deviant behaviour is behaviour that people so label. The way out is a refusal to dramatize the evil. Labelling theory connects to the great sociological ideas of Durkheim, symbolic interactionism and conflict theory. It also draws on the idea of Thomas (1928) that when people define situations as real, they become real in their consequences. Lemert (1951, 1972) distinguishes between primary and secondary deviance: primary deviance covers those minor reactions from others which have little effect on a person's self-concept, while secondary deviance occurs when people push a deviant person out of their social circles, leading the person to become embittered and to seek the company of people who condone his behaviour. Lemert further argued that rather than seeing crime as leading to control, it may be more fruitful to see the process as one in which control agencies structure and even generate crime. Secondary deviance leads to what Goffman (1963) called a deviant career. This subsequently leads to stigma, a powerful negative social label that radically changes a person's self-concept and social identity. A criminal prosecution is one way that an individual is labelled in a negative rather than a positive way.
Stigmatising people often leads to retrospective labelling, the interpretation of someone's past as consistent with their present deviance (Scheff, 1984). Retrospective labelling distorts a person's biography in a prejudicial way, guided by stigma rather than any attempt to be fair. No social class stands apart from others as being either criminal or free from criminality. However, according to various sociologists, people with less stake in society and their own future typically exhibit less resistance to some kinds of deviance. Labelling theory asks what happens to criminals after they have been labelled, and suggests that crime may be amplified by criminal sanctions; thus sending someone to prison may help to criminalise the individual further, and stigmatising young offenders may actually lead them into a criminal career. Howard S. Becker (1963), one of the earlier interactionist theorists, claimed that social groups create deviance by making the rules whose infraction constitutes deviance and by applying those rules to particular people, labelling them as outsiders. Furthermore, the labelling approach to deviance concentrates on the social reaction to deviance committed by individuals, as well as the interaction processes leading up to the labelling. The theory therefore suggests that criminology has given too much attention to criminals as types of people and insufficient attention to the collection of social-control responses. That means the law, the police, the media and public publications all help to give crime its shape.
This is supported by conflict theory, which demonstrates how deviance reflects inequalities and power. This approach holds that the causes of crime may be linked to inequalities of class, race and gender, and that who or what is labelled deviant depends on the relative power of categories of people. Cicourel's study of juvenile justice in California (1972) pointed out that police stereotypes result in black and working-class youth being labelled criminal. Conflict theory links deviance to power in the form of the norms and laws of most societies, which bolster the interests of the rich and powerful. Labelling theory links deviance not to action but to the reaction of others. The concepts of stigma, secondary deviance and the deviant career demonstrate how people can incorporate the label of deviance into a lasting self-concept. Political leaders recognised that labelling was a political act, for it made them aware of which rules to enforce, what behaviour to regard as deviant, and which people labelled as outsiders might require political assistance (Becker, 1963). Researchers went on to produce a series of empirical studies concerning the origins of deviancy definitions through political action in areas such as drugs legislation, temperance legislation, delinquency definitions, homosexuality, prostitution and pornography. Becker (1963) examines the possible effects upon an individual of being publicly labelled as deviant. A label is not neutral; it contains an evaluation of the person to whom it is applied. It becomes a master label in the sense that it colours all the other statuses possessed by an individual. If one is labelled a paedophile, criminal or homosexual, it is difficult to reject such labels, for they largely override one's original statuses as parent, worker, neighbour and friend.
Others view that person and respond to him or her in terms of the label, and tend to assume the individual has the negative characteristics normally associated with it. Since an individual's self-concept is largely derived from the responses of others, they tend to see themselves in terms of that label. This may produce a self-fulfilling prophecy whereby the deviant identification becomes the controlling one. This links to the interactionist approach, which emphasises the importance of the meanings the various actors bring to, and develop within, the interaction situation. However, labelling theory has its weaknesses. Liazos (1972) noted that although labelling theorists aim to humanise the deviant individual and show that he or she is no different from other individuals except perhaps in terms of opportunity, the very emphasis on the deviant and his identity problems and subculture may have achieved the opposite effect. He further suggested that while considering the more usual everyday types of deviance, such as homosexuality, prostitution and juvenile delinquency, labelling theorists have totally ignored a more dangerous and malevolent type of deviance, which he termed covert institutional violence. He pointed out that this type of violence leads to such things as poverty and exploitation, for example the war in Vietnam, unjust tax laws, racism and sexism. It is questionable whether labelling theorists should even attempt to discuss forms of deviance such as this in the same way as more commonplace individual crimes, or whether the two should be kept totally separate, being so different in subject matter. Akers (1994) also criticised labelling theory, pointing out that it fails to explain why some people break the law while the majority conform: people go about minding their own business, and then "wham", bad society comes along and stops them with a stigmatised label.
The theory fails to explain why the moral entrepreneurs react in the manner described, but rather blames society and portrays criminals as innocent victims, which is not always the case. To counter the negative effects of punitive responses to youth crime and anti-social behaviour, the British government introduced the ASBO (anti-social behaviour order) and the ABC (acceptable behaviour contract). ASBOs and ABCs are recent developments in Britain designed to put a stop to anti-social behaviour by the individual on whom they are imposed. An ASBO is a statutory creation and carries legal force, whereas an ABC is an informal procedure, though not without legal significance. Both types of intervention are aimed at stopping the problem behaviour rather than punishing the offender, which might lead an individual into a deviant career. The ABC has proved most effective as a means of encouraging young adults, children and parents to take responsibility for unacceptable behaviour. These measures are being used to improve the quality of life for local people by tackling behaviour such as harassment, graffiti, criminal damage and verbal abuse without criminalising the offender. The Crime and Disorder Act (1998) contains the key elements of Labour's new youth justice system, which saw the establishment of the Youth Justice Board and the restructuring of the non-custodial penalties available to the youth court. The government believed that preventing offending promotes the welfare of the individual young offender and protects the public. The Youth Justice Board oversees the youth offending teams, which have a number of roles: assessing the risk and protective factors in a young person's life that relate to their offending behaviour so that effective interventions can be implemented, providing support to young people who have been released from custody into the community, and carrying out early intervention and preventative work on both criminality and anti-social behaviour.
To further reduce the effects of labelling, the British government is tackling anti-social behaviour and its causes: family problems, poor educational attainment, unemployment, and alcohol and drug misuse. The most successful interventions have been noted to be those that engage individuals in changing their own behaviour. This is done by ensuring that individuals understand the impact of their behaviour on the community, while offering the support necessary to conform. Rather than labelling and criminalising individuals, the British government provides effective advice, counselling and support that enable people who behave anti-socially to change their behaviour. Perpetrators, young and adult, often have issues in their lives that require the help and support of professional, statutory or voluntary organisations. People dealing with money management and debt, communication difficulties within the family, struggles in education or employment because of offending behaviour, or domestic violence can all benefit from the services available in Britain today. This essay therefore concludes that labelling theory has been enormously influential in directing attention towards the relative and somewhat arbitrary nature of dominant definitions of crime and criminality in Britain. It also criticises the criminal justice system and the agencies of social control, for it reflects on the consequences of social reaction and advocates changes in public policy on juvenile justice, restorative justice, de-institutionalisation and communitarian approaches. The powerful insights of labelling theory made the British authorities rethink their tough-on-crime stance, hence the introduction of new restorative measures which do not label or criminalise young offenders.
Labelling theory is therefore quite useful in understanding that the rise of the yob culture, gang culture and hoody culture in Britain was a result of criminalising young offenders rather than addressing the issues leading the young into crime and anti-social behaviour.
References
Becker, H. S. (1963) Outsiders: Studies in the Sociology of Deviance, New York: Free Press.
Goffman, E. (1963) Stigma: Notes on the Management of Spoiled Identity, Prentice-Hall.
Hall, S. (1978) Policing the Crisis, The Macmillan Press Ltd.
Haralambos, M. and Holborn, M. (1991) Sociology: Themes and Perspectives, Collins Educational.
Macionis, J. and Plummer, K. (2005) Sociology: A Global Introduction, Pearson Education Limited.
Taylor, I. et al. (1973) The New Criminology: For a Social Theory of Deviance, Routledge.

Oedipus: the Perfect Example of Freudian Psychology

Justin Kim 1C-2 Oedipus: the Perfect Example of Freudian Psychology In "Oedipus the King" by Sophocles, Oedipus, the king of Thebes, finds out that he has killed his father and that his wife is his mother. Oedipus is very concerned for his kingdom and his people. He wants to "drive the corruption from the land" (Sophocles 109) by finding Laius's murderer and killing him. Through his curiosity, Oedipus finds out that the man he killed long ago is Laius, his father, and that his wife is his mother, all in accordance with Oedipus' prophecy. After coming to this realization as well, Jocasta, Oedipus' wife, commits suicide, and Oedipus gouges out his eyes as a result. In his search for the murderer, Oedipus strives to be a fair king. Ironically, he often is not fair to the people he sees or the things that are essential to him. This play demonstrates the theme of justice through Oedipus' denial of justice in three situations: Oedipus' meeting with Tiresias, Oedipus' gouging out of his eyes, and Creon's asking of Oedipus to adjudicate fairly. The theme of justice is evident during Oedipus' meeting with Tiresias. Oedipus brings over Tiresias, the renowned prophet, to tell him who Laius' murderer is. Tiresias at first tries to hide the answer, but through Oedipus' pressing, Tiresias is forced to reveal that Oedipus killed Laius long ago and is the source of "the corruption of the land" (178). Oedipus, enraged at the answer, tells Tiresias that he should be ashamed for cooking up such a story and asks Tiresias who told him to do this. Tiresias responds, "You did, you forced me, twisted it out of me" (179). The most rudimentary definition of justice is fairness, and back in Ancient Greece prophets' words were reg... ...ied" show that Oedipus' surmise is nothing more than a guess that cannot be proven true, which conclusively proves Oedipus is not just to Creon, because all the accusations against Creon are completely false.
In conclusion, Oedipus' denial of justice, especially to his eyes, Tiresias, and Creon, shows that justice is an important theme in this play. Oedipus denies justice by not giving the full representation of the truth, and by hurting and blaming others for things he himself has done. By doing this, he fails to be the fair king he strives to be in trying to track down Laius' murderer, and becomes the complete antithesis of what he wants to be. If Oedipus wants to be a just king to his people, then he should take the punishment meant for him, death, because it is fair that since he is the source of the corruption of the land, he should die in return.

Monday, August 19, 2019

The Schlieffen Plan :: European Europe History

The Schlieffen Plan

The aim of the plan
The aim of the plan was to avoid fighting two wars at once (against France and Russia).

The Plan
The plan was to attack France not on the main border, which was fortified, but through Belgium, and then to encircle the capital, Paris. This was all supposed to happen within the predicted six weeks it would take for the Russians to get their army ready for action. Paris would be taken by Germany, thereby capturing France, and the troops could then cross the country and attack Russia.

What actually happened:
* Germans went into Belgium
* Belgians blew up railways, which slowed the Germans
* Belgians stopped supplies and reinforcements getting to the Germans
* Britain wanted to join the war because Germany was attacking France and Belgium
* Sir Edward Grey persuaded parliament to fight, saying 'if we don't fight, Germany will dominate Europe and trade will suffer'
* The British government sent a telegram telling the Germans to leave Belgium
* There was no reply to the telegram
* Britain was at war with Germany
* People in all the countries rushed to join the army, convinced their country was right and would win
* 100,000 soldiers from the BEF (British Expeditionary Force) landed in France
* Over 1,000,000 French soldiers were ready to use Plan Seventeen to attack Alsace and Lorraine
* After 3 weeks it all went wrong
* German troops pushed into France
* Russians mobilised more quickly than expected, and invaded
* Troops were taken from France to defend the eastern front
* French and German soldiers met at the river Marne
* All the French and English troops went to defend the river Marne, and the Battle of the Marne began

Why did it fail?
The Schlieffen Plan failed for many reasons. The march through Belgium did not go as the Germans wanted: they were slowed because the Belgian army put up a fight, which left the German troops 10 days behind schedule.
The Germans wanted to finish the war with France quickly so that they would not have to fight Russia at the same time, so the delay left them in trouble. On 29th August, von Kluck (a German general) changed the plan: instead of attacking Paris as intended, the army turned east and sent more troops to attack the forts.

Sunday, August 18, 2019

The Effects of Japanese Civil Society on Policy-Making :: Government Political Science Japan Papers

The Effects of Japanese Civil Society on Policy-Making I. Introduction My interest in this project began after studying Japan for four years as an undergraduate and realizing I knew relatively little of the country's political atmosphere. I had assumed, after reading Western political scholars' views on the subject, that it was a relatively homogeneous nation, and that only after the Occupation authority entered Japan had the country emerged with democratic ideals and a true notion of liberal rights. It is true that citizens' movements are a comparatively recent phenomenon in a country with millennia of rich cultural and political history. Still today, however, numbers of Westerners, including some scholars, see Japan through Orientalism's foreign and exotic eyes and continue to assume that hierarchy rules politics and that the working class disengages from day-to-day politics. This paper not only attempts to present a broad understanding of Japan's political history, but also shows how civil society has transformed from early Meiji society to post-World War II restructuring. While outsiders believe democracy and liberal rights are an inherently novel part of Japanese culture, this paper illustrates the historical basis for a rich electorate, thriving with individual and interpersonal interest in freedom, rights, and the political environment around them. The cleavages that divide civil society and government policy-making in Japan have been written about at length. The groups examined in this paper, including the Meiji Popular Rights Movement and the post-World War II environmental movement, formed organizations to address the conflicts that constantly attacked their personal values. In each case, the government refused to respond proactively, from the lack of representation during the late 19th century to the pollution that destroyed lands and lives in the 1960s and 1970s.
Both of these groups asked for policy changes from local governments in order to promote their efforts through political participation, and some of these measures progressed to national levels. From the beginning of the Meiji Restoration to today, Japan has exhibited dramatic, progressive political awareness and engagement; I therefore deny any allegation that Japan was undemocratic until General MacArthur's restructuring in 1945. Throughout the scholarly debate and the examination of factual evidence, three problems arise in the comparative analysis of the two examples. The first is the argument that Japan's civil society, specific to the two time periods analyzed in this paper, was formulated directly within and of the state.

Saturday, August 17, 2019

Analyze and Compare the Physical Storage Structures and Types of Available Indexes of the Latest Versions of: 1. Oracle 2. SQL Server 3. DB2 4. MySQL 5. Teradata

Assignment # 5 (Individual) Submission 29 Dec 11
Objective: To Enhance Analytical Ability and Knowledge

* Analyze and compare the physical storage structures and types of available indexes of the latest versions of: 1. Oracle 2. SQL Server 3. DB2 4. MySQL 5. Teradata. First of all, define a comparative framework. Recommend one product for organizations of around 2000-4000 employees, with sound reasoning based on physical storage structures.

Introduction to Physical Storage Structures

One characteristic of an RDBMS is the independence of logical data structures such as tables, views, and indexes from physical storage structures. Because physical and logical structures are separate, you can manage physical storage of data without affecting access to logical structures. For example, renaming a database file does not rename the tables stored in it. The following sections explain the physical database structures of an Oracle database, including datafiles, redo log files, and control files.

Datafiles

Every Oracle database has one or more physical datafiles. The datafiles contain all the database data. The data of logical database structures, such as tables and indexes, is physically stored in the datafiles allocated for a database. The characteristics of datafiles are:

* A datafile can be associated with only one database.
* Datafiles can have certain characteristics set to let them automatically extend when the database runs out of space.
* One or more datafiles form a logical unit of database storage called a tablespace.

Data in a datafile is read, as needed, during normal database operation and stored in the memory cache of Oracle. For example, assume that a user wants to access some data in a table of a database. If the requested information is not already in the memory cache for the database, then it is read from the appropriate datafiles and stored in memory. Modified or new data is not necessarily written to a datafile immediately.
To reduce the amount of disk access and to increase performance, data is pooled in memory and written to the appropriate datafiles all at once, as determined by the database writer (DBWn) background process.

Control Files

Every Oracle database has a control file. A control file contains entries that specify the physical structure of the database. For example, it contains the following information:

* Database name
* Names and locations of datafiles and redo log files
* Time stamp of database creation

Oracle can multiplex the control file, that is, simultaneously maintain a number of identical control file copies, to protect against a failure involving the control file. Every time an instance of an Oracle database is started, its control file identifies the database and redo log files that must be opened for database operation to proceed. If the physical makeup of the database is altered (for example, if a new datafile or redo log file is created), then the control file is automatically modified by Oracle to reflect the change. A control file is also used in database recovery.

Redo Log Files

Every Oracle database has a set of two or more redo log files. The set of redo log files is collectively known as the redo log for the database. A redo log is made up of redo entries (also called redo records). The primary function of the redo log is to record all changes made to data. If a failure prevents modified data from being permanently written to the datafiles, then the changes can be obtained from the redo log, so work is never lost. To protect against a failure involving the redo log itself, Oracle allows a multiplexed redo log so that two or more copies of the redo log can be maintained on different disks. The information in a redo log file is used only to recover the database from a system or media failure that prevents database data from being written to the datafiles.
For example, if an unexpected power outage terminates database operation, then data in memory cannot be written to the datafiles, and the data is lost. However, lost data can be recovered when the database is opened, after power is restored. By applying the information in the most recent redo log files to the database datafiles, Oracle restores the database to the time at which the power failure occurred. The process of applying the redo log during a recovery operation is called rolling forward.

Archive Log Files

You can enable automatic archiving of the redo log. Oracle automatically archives log files when the database is in ARCHIVELOG mode.

Parameter Files

Parameter files contain a list of configuration parameters for that instance and database. Oracle recommends that you create a server parameter file (SPFILE) as a dynamic means of maintaining initialization parameters. A server parameter file lets you store and manage your initialization parameters persistently in a server-side disk file.

Alert and Trace Log Files

Each server and background process can write to an associated trace file. When an internal error is detected by a process, it dumps information about the error to its trace file. Some of the information written to a trace file is intended for the database administrator, while other information is for Oracle Support Services. Trace file information is also used to tune applications and instances. The alert file, or alert log, is a special trace file. The alert file of a database is a chronological log of messages and errors.

Backup Files

To restore a file is to replace it with a backup file. Typically, you restore a file when a media failure or user error has damaged or deleted the original file. User-managed backup and recovery requires you to actually restore backup files before you can perform a trial recovery of the backups.
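The rolling-forward idea can be sketched as a toy write-ahead log: every change is recorded as a redo entry before the datafile image is updated, so a crash that loses in-memory pages can be repaired by replaying the log against the last on-disk state. The names and data shapes below are illustrative only, not Oracle's actual structures.

```python
# Toy sketch of redo-log "rolling forward" (illustrative names, not Oracle's
# internals). Redo entries record every change; after a crash, replaying them
# against the on-disk datafile image reconstructs the lost modifications.

def roll_forward(datafile, redo_log):
    """Apply redo entries in order to recover changes lost from memory."""
    recovered = dict(datafile)          # start from the on-disk image
    for key, new_value in redo_log:     # each entry logs (key, new value)
        recovered[key] = new_value      # re-apply the logged change
    return recovered

# Datafile contents as last written before the power failure:
datafile = {"emp:1": "Alice", "emp:2": "Bob"}
# Changes that were logged but never made it to the datafile:
redo_log = [("emp:2", "Robert"), ("emp:3", "Carol")]

print(roll_forward(datafile, redo_log))
# {'emp:1': 'Alice', 'emp:2': 'Robert', 'emp:3': 'Carol'}
```

The key property this illustrates is that the datafile can lag behind memory safely, as long as the redo entry hits disk first.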
Server-managed backup and recovery manages the backup process, such as scheduling of backups, as well as the recovery process, such as applying the correct backup file when recovery is needed. A database instance is a set of memory structures that manage database files. Figure 11-1 shows the relationship between the instance and the files that it manages.

Figure 11-1: Database Instance and Database Files

Mechanisms for Storing Database Files

Several mechanisms are available for allocating and managing the storage of these files. The most common mechanisms include:

1. Oracle Automatic Storage Management (Oracle ASM). Oracle ASM includes a file system designed exclusively for use by Oracle Database.
2. Operating system file system. Most Oracle databases store files in a file system, which is a data structure built inside a contiguous disk address space. All operating systems have file managers that allocate and deallocate disk space into files within a file system. A file system enables disk space to be allocated to many files. Each file has a name and is made to appear as a contiguous address space to applications such as Oracle Database. The database can create, read, write, resize, and delete files. A file system is commonly built on top of a logical volume constructed by a software package called a logical volume manager (LVM). The LVM enables pieces of multiple physical disks to be combined into a single contiguous address space that appears as one disk to higher layers of software.
3. Raw device. Raw devices are disk partitions or logical volumes not formatted with a file system. The primary benefit of raw devices is the ability to perform direct I/O and to write larger buffers. In direct I/O, applications write to and read from the storage device directly, bypassing the operating system buffer cache.
4. Cluster file system. A cluster file system is software that enables multiple computers to share file storage while maintaining consistent space allocation and file content. In an Oracle RAC environment, a cluster file system makes shared storage appear as a file system shared by many computers in a clustered environment. With a cluster file system, the failure of a computer in the cluster does not make the file system unavailable. In an operating system file system, however, if a computer sharing files through NFS or other means fails, then the file system is unavailable.

A database employs a combination of the preceding storage mechanisms. For example, a database could store the control files and online redo log files in a traditional file system, some user datafiles on raw partitions, the remaining datafiles in Oracle ASM, and the archived redo log files on a cluster file system.

Indexes in Oracle

There are several types of indexes available in Oracle, all designed for different circumstances:

1. b*tree indexes - the most common type (especially in OLTP environments) and the default type
2. b*tree cluster indexes - for clusters
3. hash cluster indexes - for hash clusters
4. reverse key indexes - useful in Oracle Real Application Cluster (RAC) applications
5. bitmap indexes - common in data warehouse applications
6. partitioned indexes - also useful for data warehouse applications
7. function-based indexes
8. index organized tables
9. domain indexes

Let's look at these Oracle index types in a little more detail.

B*Tree Indexes

B*tree stands for balanced tree. This means that the height of the index is the same for all values, thereby ensuring that retrieving the data for any one value takes approximately the same amount of time as for any other value. Oracle b*tree indexes are best used when each value has high cardinality (a low number of occurrences per value), for example primary key indexes or unique indexes.
One important point to note is that NULL values are not indexed. B*tree indexes are the most common type of index in OLTP systems.

B*Tree Cluster Indexes

These are b*tree indexes defined for clusters. Clusters are two or more tables with one or more common columns that are usually accessed together (via a join).

CREATE INDEX product_orders_ix ON CLUSTER product_orders;

Hash Cluster Indexes

In a hash cluster, rows that have the same hash key value (generated by a hash function) are stored together in the Oracle database. Hash clusters are equivalent to indexed clusters, except the index key is replaced with a hash function. This also means that there is no separate index, as the hash is the index.

CREATE CLUSTER emp_dept_cluster (dept_id NUMBER) HASHKEYS 50;

Reverse Key Indexes

These are typically used in Oracle Real Application Cluster (RAC) applications. In this type of index the bytes of each of the indexed columns are reversed (but the column order is maintained). This is useful when new data is always inserted at one end of the index, as occurs when using a sequence, because it ensures new index values are created evenly across the leaf blocks, preventing the index from becoming unbalanced, which may in turn affect performance.

CREATE INDEX emp_ix ON emp(emp_id) REVERSE;

Bitmap Indexes

These are commonly used in data warehouse applications for tables with no updates and whose columns have low cardinality (i.e., there are few distinct values). In this type of index Oracle stores a bitmap for each distinct value in the index, with 1 bit for each row in the table. These bitmaps are expensive to maintain and are therefore not suitable for applications which make a lot of writes to the data. For example, consider a car manufacturer which records information about cars sold, including the colour of each car. Each colour is likely to occur many times and is therefore suitable for a bitmap index.
CREATE BITMAP INDEX car_col ON cars(colour);

Partitioned Indexes

Partitioned indexes are also useful in Oracle data warehouse applications where there is a large amount of data that is partitioned by a particular dimension, such as time. Partitioned indexes can be created either as local partitioned indexes or global partitioned indexes. Local partitioned indexes mean that the index is partitioned on the same columns, and with the same number of partitions, as the table. For global partitioned indexes the partitioning is user defined and is not the same as the underlying table. Refer to the CREATE INDEX statement in the Oracle SQL language reference for details.

Function-based Indexes

As the name suggests, these are indexes created on the result of a function modifying a column value. For example:

CREATE INDEX upp_ename ON emp(UPPER(ename));

The function must be deterministic (always return the same value for the same input).

Index Organized Tables

In an index-organized table all the data is stored in the Oracle database in a b*tree index structure defined on the table's primary key. This is ideal when related pieces of data must be stored together or data must be physically stored in a specific order. Index-organized tables are often used for information retrieval, spatial, and OLAP applications.

Domain Indexes

These indexes are created by user-defined indexing routines and enable the user to define his or her own indexes on custom data types (domains) such as pictures, maps, or fingerprints. These types of index require in-depth knowledge about the data and how it will be accessed.

Indexes in SQL Server

Index type | Description
Clustered | A clustered index sorts and stores the data rows of the table or view in order based on the clustered index key. The clustered index is implemented as a B-tree index structure that supports fast retrieval of the rows, based on their clustered index key values.
Nonclustered | A nonclustered index can be defined on a table or view with a clustered index or on a heap. Each index row in the nonclustered index contains the nonclustered key value and a row locator. This locator points to the data row in the clustered index or heap having the key value. The rows in the index are stored in the order of the index key values, but the data rows are not guaranteed to be in any particular order unless a clustered index is created on the table.
Unique | A unique index ensures that the index key contains no duplicate values and therefore every row in the table or view is in some way unique. Both clustered and nonclustered indexes can be unique.
Index with included columns | A nonclustered index that is extended to include nonkey columns in addition to the key columns.
Full-text | A special type of token-based functional index that is built and maintained by the Microsoft Full-Text Engine for SQL Server. It provides efficient support for sophisticated word searches in character string data.
Spatial | A spatial index provides the ability to perform certain operations more efficiently on spatial objects (spatial data) in a column of the geometry data type. The spatial index reduces the number of objects on which relatively costly spatial operations need to be applied.
Filtered | An optimized nonclustered index especially suited to cover queries that select from a well-defined subset of data. It uses a filter predicate to index a portion of rows in the table. A well-designed filtered index can improve query performance, reduce index maintenance costs, and reduce index storage costs compared with full-table indexes.
XML | A shredded, and persisted, representation of the XML binary large objects (BLOBs) in the xml data type column.

SQL Server Storage Structures

SQL Server does not see data and storage in exactly the same way a DBA or end-user does.
A DBA sees initialized devices, device fragments allocated to databases, segments defined within databases, tables defined within segments, and rows stored in tables. SQL Server views storage at a lower level: device fragments allocated to databases, pages allocated to tables and indexes within the database, and information stored on pages. There are two basic types of storage structures in a database:

* Linked data pages
* Index trees

All information in SQL Server is stored at the page level. When a database is created, all space allocated to it is divided into a number of pages, each page 2KB in size. There are five types of pages within SQL Server:

1. Data and log pages
2. Index pages
3. Text/image pages
4. Allocation pages
5. Distribution pages

All pages in SQL Server contain a page header. The page header is 32 bytes in size and contains the logical page number, the next and previous logical page numbers in the page linkage, the object_id of the object to which the page belongs, the minimum row size, the next available row number within the page, and the byte location of the start of the free space on the page. The contents of a page header can be examined by using the dbcc page command. You must be logged in as sa to run the dbcc page command. The syntax for the dbcc page command is as follows:

dbcc page (dbid | page_no [,0 | 1 | 2])

The SQL Server keeps track of which object a page belongs to, if any. The allocation of pages within SQL Server is managed through the use of allocation units and allocation pages.

Allocation Pages

Space is allocated to a SQL Server database by the create database and alter database commands. The space allocated to a database is divided into a number of 2KB pages. Each page is assigned a logical page number, starting at page 0 and increasing sequentially. The pages are then divided into allocation units of 256 contiguous 2KB pages, or 512KB (1/2 MB) each.
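The sizing arithmetic above is easy to verify. A minimal sketch, using the figures from the text (the classic 2KB-page layout described here, not a modern SQL Server default):

```python
# Verifying the allocation arithmetic described above. All constants come
# from the text: 2KB pages, 256 pages per allocation unit, 8-page extents.

PAGE_SIZE_BYTES = 2 * 1024        # 2KB page
PAGES_PER_ALLOC_UNIT = 256        # contiguous pages per allocation unit
PAGES_PER_EXTENT = 8              # pages per extent

alloc_unit_bytes = PAGES_PER_ALLOC_UNIT * PAGE_SIZE_BYTES
extents_per_unit = PAGES_PER_ALLOC_UNIT // PAGES_PER_EXTENT
min_table_alloc = PAGES_PER_EXTENT * PAGE_SIZE_BYTES   # one extent per new table

print(alloc_unit_bytes // 1024)   # 512 (KB, i.e. 1/2 MB per allocation unit)
print(extents_per_unit)           # 32 extent structures per allocation page
print(min_table_alloc // 1024)    # 16 (KB initially assigned to a new table)
```

This confirms the 32 extent structures per allocation page and the 16KB initial table allocation quoted below.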
The first page of each allocation unit is an allocation page that controls the allocation of all pages within the allocation unit. The allocation pages control the allocation of pages to tables and indexes within the database. Pages are allocated in contiguous blocks of eight pages called extents. The minimum unit of allocation within a database is an extent. When a table is created, it is initially assigned a single extent, or 16KB of space, even if the table contains no rows. There are 32 extents within an allocation unit (256/8). An allocation page contains 32 extent structures, one for each extent within that allocation unit. Each extent structure is 16 bytes and contains the following information:

1. Object ID of the object to which the extent is allocated
2. Next extent ID in chain
3. Previous extent ID in chain
4. Allocation bitmap
5. Deallocation bitmap
6. Index ID (if any) to which the extent is allocated
7. Status

The allocation bitmap for each extent structure indicates which pages within the allocated extent are in use by the table. The deallocation bitmap is used to identify pages that have become empty during a transaction that has not yet been completed. The actual marking of the page as unused does not occur until the transaction is committed, to prevent another transaction from allocating the page before the transaction is complete.

Data Pages

A data page is the basic unit of storage within SQL Server. All the other types of pages within a database are essentially variations of the data page. All data pages contain a 32-byte header, as described earlier. With a 2KB page (2048 bytes), this leaves 2016 bytes for storing data within the data page. In SQL Server, data rows cannot cross page boundaries. The maximum size of a single row is 1962 bytes, including row overhead. Data pages are linked to one another by using the page pointers (prevpg, nextpg) contained in the page header.
This page linkage enables SQL Server to locate all rows in a table by scanning all pages in the link. Data page linkage can be thought of as a two-way linked list. This enables SQL Server to easily link new pages into or unlink pages from the page linkage by adjusting the page pointers. In addition to the page header, each data page also contains data rows and a row offset table. The row offset table grows backward from the end of the page and contains the location of each row on the data page. Each entry is 2 bytes wide.

Data Rows

Data is stored on data pages in data rows. The size of each data row is a factor of the sum of the size of the columns plus the row overhead. Each record in a data page is assigned a row number. A single byte is used within each row to store the row number. Therefore, SQL Server has a maximum limit of 256 rows per page, because that is the largest value that can be stored in a single byte (2^8). For a data row containing all fixed-length columns, there are four bytes of overhead per row:

1. 1 byte to store the number of variable-length columns (in this case, 0)
2. 1 byte to store the row number
3. 2 bytes in the row offset table at the end of the page to store the location of the row on the page

If a data row contains variable-length columns, there is additional overhead per row.
A data row is variable in size if any column is defined as varchar or varbinary, or allows null values. In addition to the 4 bytes of overhead described previously, the following bytes are required to store the actual row width and location of columns within the data row:

* 2 bytes to store the total row width
* 1 byte per variable-length column to store the starting location of the column within the row
* 1 byte for the column offset table
* 1 additional byte for each 256-byte boundary passed

Within each row containing variable-length columns, SQL Server builds a column offset table backward from the end of the row for each variable-length column in the table. Because only 1 byte is used for each column, with a maximum offset of 255, an adjust byte must be created for each 256-byte boundary crossed as an additional offset. Variable-length columns are always stored after all fixed-length columns, regardless of the order of the columns in the table definition.

Estimating Row and Table Sizes

Knowing the size of a data row and the corresponding overhead per row helps you determine the number of rows that can be stored per page. The number of rows per page affects system performance. A greater number of rows per page can help query performance by reducing the number of pages that need to be read to satisfy the query. Conversely, fewer rows per page help improve performance for concurrent transactions by reducing the chances of two or more users accessing rows on the same page that may be locked. Let's take a look at how you can estimate row and table sizes. For fixed-length fields with no null values, the row size is the sum of the column widths plus the 4 bytes of row overhead.

The Row Offset Table

The location of a row within a page is determined by using the row offset table at the end of the page. To find a specific row within the page, SQL Server looks in the row offset table for the starting byte address within the data page for that row ID.
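The fixed-length estimate above can be sketched in a few lines. This is a simplification under the stated assumptions (2KB pages, 32-byte header, 4 bytes of per-row overhead that already include the offset-table entry); the example column widths are invented:

```python
# Rows-per-page estimate for fixed-length rows, using the figures from the
# text: 2048-byte pages, 32-byte page header, 4 bytes of row overhead
# (which, per the text, includes the 2-byte row offset table entry).

PAGE_SIZE = 2048
HEADER = 32
FIXED_ROW_OVERHEAD = 4

def rows_per_page(column_widths):
    usable = PAGE_SIZE - HEADER                          # 2016 bytes of data space
    row_size = sum(column_widths) + FIXED_ROW_OVERHEAD   # columns + overhead
    return min(usable // row_size, 256)                  # 1-byte row number caps at 256

print(PAGE_SIZE - HEADER)          # 2016 usable bytes, as stated above
print(rows_per_page([4, 30, 8]))   # 43, e.g. an int + char(30) + datetime row
```

Note how the 256-row cap, not free space, becomes the limit for very narrow rows.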
Note that SQL Server keeps all free space at the end of the data page, shifting rows up to fill in where a previous row was deleted and ensuring no space fragmentation within the page. If the offset table contains a zero value for a row ID, that indicates that the row has been deleted.

Index Structure

All SQL Server indexes are B-trees. There is a single root page at the top of the tree, branching out into N number of pages at each intermediate level until it reaches the bottom, or leaf level, of the index. The index tree is traversed by following pointers from the upper-level pages down through the lower-level pages. In addition, each index level is a separate page chain. There may be many intermediate levels in an index. The number of levels is dependent on the index key width, the type of index, and the number of rows and/or pages in the table. The number of levels is important in relation to index performance.

Non-clustered Indexes

A non-clustered index is analogous to an index in a textbook. The data is stored in one place, the index in another, with pointers to the storage location of the data. The items in the index are stored in the order of the index key values, but the information in the table is stored in a different order (which can be dictated by a clustered index). If no clustered index is created on the table, the rows are not guaranteed to be in any particular order. Similar to the way you use an index in a book, Microsoft SQL Server 2000 searches for a data value by searching the non-clustered index to find the location of the data value in the table and then retrieves the data directly from that location.
This makes non-clustered indexes the optimal choice for exact match queries, because the index contains entries describing the exact location in the table of the data values being searched for in the queries. If the underlying table is sorted using a clustered index, the location is the clustering key value; otherwise, the location is the row ID (RID), comprised of the file number, page number, and slot number of the row. For example, to search for an employee ID (emp_id) in a table that has a non-clustered index on the emp_id column, SQL Server looks through the index to find an entry that lists the exact page and row in the table where the matching emp_id can be found, and then goes directly to that page and row.

Clustered Indexes

A clustered index determines the physical order of data in a table. A clustered index is analogous to a telephone directory, which arranges data by last name. Because the clustered index dictates the physical storage order of the data in the table, a table can contain only one clustered index. However, the index can comprise multiple columns (a composite index), like the way a telephone directory is organized by last name and first name. Clustered indexes are very similar to Oracle's IOTs (index-organized tables).

A clustered index is particularly efficient on columns that are often searched for ranges of values. After the row with the first value is found using the clustered index, rows with subsequent indexed values are guaranteed to be physically adjacent. For example, if an application frequently executes a query to retrieve records between a range of dates, a clustered index can quickly locate the row containing the beginning date, and then retrieve all adjacent rows in the table until the last date is reached.
This can help increase the performance of this type of query. Also, if there is a column (or columns) that is used frequently to sort the data retrieved from a table, it can be advantageous to cluster (physically sort) the table on that column to save the cost of a sort each time the column is queried. Clustered indexes are also efficient for finding a specific row when the indexed value is unique. For example, the fastest way to find a particular employee using the unique employee ID column emp_id is to create a clustered index or PRIMARY KEY constraint on the emp_id column.

Note: PRIMARY KEY constraints create clustered indexes automatically if no clustered index already exists on the table and a non-clustered index is not specified when you create the PRIMARY KEY constraint.

Index Structures

Indexes are created on columns in tables or views. The index provides a fast way to look up data based on the values within those columns. For example, if you create an index on the primary key and then search for a row of data based on one of the primary key values, SQL Server first finds that value in the index, and then uses the index to quickly locate the entire row of data. Without the index, a table scan would have to be performed in order to locate the row, which can have a significant effect on performance. You can create indexes on most columns in a table or a view. The exceptions are primarily those columns configured with large object (LOB) data types, such as image, text, and varchar(max). You can also create indexes on XML columns, but those indexes are slightly different from the basic index and are beyond the scope of this article. Instead, I'll focus on those indexes that are implemented most commonly in a SQL Server database. An index is made up of a set of pages (index nodes) that are organized in a B-tree structure.
This structure is hierarchical in nature, with the root node at the top of the hierarchy and the leaf nodes at the bottom, as shown in Figure 1.

Figure 1: B-tree structure of a SQL Server index

When a query is issued against an indexed column, the query engine starts at the root node and navigates down through the intermediate nodes, with each layer of the intermediate level more granular than the one above. The query engine continues down through the index nodes until it reaches the leaf node. For example, if you're searching for the value 123 in an indexed column, the query engine would first look in the root level to determine which page to reference in the top intermediate level. In this example, the first page points to the values 1-100, and the second page to the values 101-200, so the query engine would go to the second page on that level. The query engine would then determine that it must go to the third page at the next intermediate level. From there, the query engine would navigate to the leaf node for value 123. The leaf node will contain either the entire row of data or a pointer to that row, depending on whether the index is clustered or nonclustered.

Clustered Indexes

A clustered index stores the actual data rows at the leaf level of the index. Returning to the example above, that would mean that the entire row of data associated with the primary key value of 123 would be stored in that leaf node. An important characteristic of the clustered index is that the indexed values are sorted in either ascending or descending order. As a result, there can be only one clustered index on a table or view. In addition, data in a table is sorted only if a clustered index has been defined on the table.

Note: A table that has a clustered index is referred to as a clustered table. A table that has no clustered index is referred to as a heap.
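The root-to-leaf walk described for value 123 can be sketched with a toy B-tree. The node layout and fan-out here are invented for illustration; real index pages hold many more keys:

```python
# Toy B-tree walk mirroring the "search for 123" example above. Each internal
# node holds (upper_bound, child) pairs; leaves hold the indexed values.

def search(node, key, path):
    """Descend from the root, recording each decision, until a leaf is hit."""
    if node["leaf"]:
        path.append("leaf")
        return key in node["values"]
    for upper, child in node["children"]:
        if key <= upper:                 # descend into the covering child page
            path.append(f"<= {upper}")
            return search(child, key, path)
    return False                         # key is beyond every child's range

leaf = {"leaf": True, "values": [121, 122, 123, 124]}
intermediate = {"leaf": False, "children": [(110, {"leaf": True, "values": [101, 105]}),
                                            (130, leaf)]}
root = {"leaf": False, "children": [(100, {"leaf": True, "values": [7, 42]}),
                                    (200, intermediate)]}

path = []
print(search(root, 123, path))   # True
print(path)                      # ['<= 200', '<= 130', 'leaf']
```

In a clustered index the leaf would hold the full rows; in a nonclustered index it would hold row locators instead.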
Nonclustered Indexes

Unlike a clustered index, the leaf nodes of a nonclustered index contain only the values from the indexed columns and row locators that point to the actual data rows, rather than containing the data rows themselves. This means that the query engine must take an additional step in order to locate the actual data. A row locator's structure depends on whether it points to a clustered table or to a heap. If referencing a clustered table, the row locator points to the clustered index, using the value from the clustered index to navigate to the correct data row. If referencing a heap, the row locator points to the actual data row. Nonclustered indexes cannot be sorted like clustered indexes; however, you can create more than one nonclustered index per table or view. SQL Server 2005 supports up to 249 nonclustered indexes, and SQL Server 2008 supports up to 999. This certainly doesn't mean you should create that many indexes. Indexes can both help and hinder performance, as I explain later in the article.

In addition to being able to create multiple nonclustered indexes on a table or view, you can also add included columns to your index. This means that you can store at the leaf level not only the values from the indexed columns, but also the values from non-indexed columns. This strategy allows you to get around some of the limitations on indexes. For example, you can include non-indexed columns in order to exceed the size limit of indexed columns (900 bytes in most cases).

Index Types

In addition to an index being clustered or nonclustered, it can be configured in other ways:

* Composite index: An index that contains more than one column. In both SQL Server 2005 and 2008, you can include up to 16 columns in an index, as long as the index doesn't exceed the 900-byte limit. Both clustered and nonclustered indexes can be composite indexes.
* Unique index: An index that ensures the uniqueness of each value in the indexed column. If the index is a composite, the uniqueness is enforced across the columns as a whole, not on the individual columns. For example, if you were to create an index on the FirstName and LastName columns in a table, the names together must be unique, but the individual names can be duplicated. A unique index is automatically created when you define a primary key or unique constraint:
* Primary key: When you define a primary key constraint on one or more columns, SQL Server automatically creates a unique, clustered index if a clustered index does not already exist on the table or view. However, you can override the default behavior and define a unique, nonclustered index on the primary key.
* Unique: When you define a unique constraint, SQL Server automatically creates a unique, nonclustered index. You can specify that a unique clustered index be created if a clustered index does not already exist on the table.
* Covering index: A type of index that includes all the columns that are needed to process a particular query. For example, your query might retrieve the FirstName and LastName columns from a table, based on a value in the ContactID column. You can create a covering index that includes all three columns.

Teradata

What is the Teradata RDBMS?

The Teradata RDBMS is a complete relational database management system. With the Teradata RDBMS, you can access, store, and operate on data using Teradata Structured Query Language (Teradata SQL). It is broadly compatible with IBM and ANSI SQL. Users of the client system send requests to the Teradata RDBMS through the Teradata Director Program (TDP) using the Call-Level Interface (CLI) program (Version 2) or via Open Database Connectivity (ODBC) using the Teradata ODBC Driver. As data requirements grow increasingly complex, so does the need for a faster, simpler way to manage the data warehouse.
That combination of unmatched performance and efficient management is built into the foundation of the Teradata Database. The Teradata Database is continuously being enhanced with new features and functionality that automatically distribute data and balance mixed workloads even in the most complex environments. Teradata Database 14 currently offers low total cost of ownership in a simple, scalable, parallel and self-managing solution. This proven, high-performance decision support engine running on the Teradata Purpose-Built Platform Family offers a full suite of data access and management tools, plus world-class services. The Teradata Database supports installations from fewer than 10 gigabytes to huge warehouses with hundreds of terabytes and thousands of customers.
Features & Benefits
Automatic Built-In Functionality
* Fast Query Performance: "Parallel Everything" design and the smart Teradata Optimizer enable fast query execution across platforms.
* Quick Time to Value: Simple setup steps with automatic "hands off" distribution of data, along with integrated load utilities, result in rapid installations.
* Simple to Manage: DBAs never have to set parameters, manage table space, or reorganize data.
* Responsive to Business Change: Fully parallel MPP "shared nothing" architecture scales linearly across data, users, and applications, providing consistent and predictable performance and growth.
Easy "Set & Go" Optimization Options
* Powerful, Embedded Analytics: In-database data mining, virtual OLAP/cubes, geospatial and temporal analytics, and custom and embedded services in an extensible open parallel framework drive efficient and differentiated business insight.
* Advanced Workload Management: Workload management options by user, application, time of day and CPU exceptions.
* Intelligent Scan Elimination: "Set and Go" options reduce full-file scanning (Primary, Secondary, Multi-level Partitioned Primary, Aggregate Join Index, Sync Scan).
Physical Storage Structure of Teradata
Teradata offers a true hybrid row and column database. All database management systems constantly tinker with the internal structure of the files on disk. Each release brings an improvement or two that has been steadily improving analytic workload performance. However, few of the key players in relational database management systems (RDBMS) have altered the fundamental structure of having all of the columns of the table stored consecutively on disk for each record. The innovations and practical use cases of "columnar databases" have come from the independent vendor world, where the approach has proven to be quite effective for the performance of an increasingly important class of analytic query. These columnar databases store data by columns instead of rows. This means that all values of a single column are stored consecutively on disk. The columns are tied together as "rows" only in a catalog reference. This gives a much finer grain of control to the RDBMS data manager. It can access only the columns required for the query, as opposed to being forced to access all columns of the row. It's optimal for queries that need a small percentage of the columns in the tables they are in, but suboptimal when you need most of the columns, due to the overhead of attaching all of the columns together to form the result sets.
Teradata 14 Hybrid Columnar
The unique innovation by Teradata, in Teradata 14, is to add columnar structure to a table, effectively mixing row structures, column structures and multi-column structures directly in the DBMS, which already powers many of the largest data warehouses in the world.
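The column-wise layout described above can be sketched in a few lines of Python. This is a toy model for illustration only, not Teradata's on-disk format: each column's values are stored consecutively, tied back together as "rows" only by shared position, and a query reads just the columns it needs.

```python
# Toy contrast between row storage and column storage (illustrative only).
rows = [
    {"id": 1, "state": "Texas",   "balance": 100.0},
    {"id": 2, "state": "Georgia", "balance": 250.0},
    {"id": 3, "state": "Texas",   "balance":  75.0},
]

# Columnar layout: all values of a single column stored consecutively,
# tied together as "rows" only by their shared position.
columns = {key: [r[key] for r in rows] for key in rows[0]}

# A query needing only `state` scans one compact vector...
texas_positions = [i for i, s in enumerate(columns["state"]) if s == "Texas"]

# ...and attaches the other columns only for the qualifying positions.
result = [(columns["id"][i], columns["balance"][i]) for i in texas_positions]
print(result)  # [(1, 100.0), (3, 75.0)]
```

The cost noted in the text shows up in the last step: the more columns the query projects, the more per-position re-attachment work a columnar layout must do.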
With intelligent exploitation of Teradata Columnar in Teradata 14, there is no longer a need to go outside the data warehouse DBMS for the performance that columnar provides, and it is no longer necessary to sacrifice robustness and support in the DBMS that holds the post-operational data. A major component of that robustness is parallelism, a feature that has fueled much of Teradata's leadership position in large-scale enterprise data warehousing over the years. Teradata's parallelism, working with the columnar elements, creates an entirely new paradigm in analytic computing: the pinpoint accuracy of I/O with column and row partition elimination. With columnar and parallelism, the I/O executes very precisely on the data relevant to the query. This is finally a strong, and appropriate, architectural response to the I/O bottleneck issue that analytic queries have been living with for a decade. It may also be the Teradata Database's most significant enhancement in that time. The physical structure of each container can be row oriented (with extensive page metadata, including a map to offsets), which is referred to as "row storage format," or columnar (where the row "number" is implied by the value's relative position).
Partition Elimination and Columnar
The idea of dividing data to create smaller units of work, and to make those units of work relevant to the query, is nothing new to the Teradata Database, or to most DBMSs for that matter. While the concept is now being applied to the columns of a table, it has long been applied to its rows in the form of partitioning and parallelism. One of the hallmarks of Teradata's unique approach is that all database functions (table scan, index scan, joins, sorts, insert, delete, update, load and all utilities) are done in parallel all of the time. There is no conditional parallelism. All units of parallelism participate in each database action.
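The row-partitioning side of this idea can be sketched as follows. This is a toy model with hypothetical metadata fields, not Teradata internals: each partition carries min/max metadata, and partitions whose range cannot match the predicate are skipped without any data I/O.

```python
# Toy partition elimination: a few cheap metadata reads decide which
# partitions need real I/O (hypothetical layout, for illustration).
partitions = [
    {"min": "2011-01-01", "max": "2011-03-31", "rows": ["q1 data..."]},
    {"min": "2011-04-01", "max": "2011-06-30", "rows": ["q2 data..."]},
    {"min": "2011-07-01", "max": "2011-09-30", "rows": ["q3 data..."]},
]

def partitions_to_scan(lo, hi):
    # Keep only partitions whose [min, max] range overlaps the predicate.
    return [p for p in partitions if not (p["max"] < lo or p["min"] > hi)]

# Predicate: dates in May 2011 -> only the second quarter's partition survives.
survivors = partitions_to_scan("2011-05-01", "2011-05-31")
print(len(survivors))  # 1
```

In a columnar table the same overlap test is applied to containers as well, which is the combined row-and-column elimination described next.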
Teradata spares partitions from needing I/O by reading their metadata to understand the range of data placed into the partitions and eliminating those that are ruled out by the predicates (see figure). There is no change to partition elimination in Teradata 14 except that the approach also works with columnar data, creating a combined row and column elimination possibility. In a partitioned, multi-container table, the unneeded containers will be virtually eliminated from consideration based on the selection and projection conditions of the query (see figure). Following the column elimination, unneeded partitions will be virtually eliminated from consideration based on the projection conditions. For the price of a few metadata reads to facilitate the eliminations, the I/O can now retrieve a much more focused set of data. The addition of columnar elimination reduces the expensive I/O operations, and hence the query execution time, by orders of magnitude for column-selective queries. The combination of row and column elimination is a unique characteristic of Teradata's implementation of columnar.
Compression in Teradata Columnar
Storage costs, while decreasing per unit of capacity over time, are still consuming an increasing share of budgets due to the massive growth in the volume of data to store. While the data is required to be under management, it is equally required that the data be compressed. In addition to saving on storage costs, compression also greatly aids the I/O problem, effectively offering up more relevant information in each I/O. Columnar storage provides a unique opportunity to take advantage of a series of compression routines that make more sense when dealing with well-defined data that has limited variance, like a column (versus a row, with its high variability). Teradata Columnar utilizes several compression methods that take advantage of the columnar orientation of the data. A few methods are highlighted below.
Run-Length Encoding
When there are repeating values (e.g., many successive rows with the value of '12/25/11' in the date container), these are easily compressed in columnar systems like Teradata Columnar, which uses run-length encoding to simply indicate the range of rows for which the value applies.
Dictionary Encoding
Even when the values are not repeating successively, as in the date example, if they are repeating in the container, there is an opportunity to use a dictionary representation of the data to further save space. Dictionary encoding is done in Teradata Columnar by storing compressed forms of the complete value. The dictionary representations are fixed length, which allows the data pages to remain void of internal maps to where records begin. The records begin at fixed offsets from the beginning of the container, and no "value-level" metadata is required. This small fact saves calculations at run time for page navigation, another benefit of columnar. For example, 1=Texas, 2=Georgia and 3=Florida could be in the dictionary, and when those are the column values, the 1, 2 and 3 are used in lieu of Texas, Georgia and Florida. If there are 1,000,000 customers with only 50 possible values for state, the entire vector could be stored with 1,000,000 bytes (one byte minimum per value). In addition to dictionary compression, including the "trimming" of character fields, traditional compression (with algorithm UTF8) is made available to Teradata Columnar data.
Delta Compression
Fields in a tight range of values can also benefit from storing only the offset ("delta") from a set value. Teradata Columnar calculates an average for a container and can store only the offsets from that value in place of the field. Whereas the value itself might be an integer, the offsets can be small integers, roughly halving the space consumed.
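Toy Python versions of the three methods just described make the space savings concrete. These are illustrative sketches only, not Teradata's actual compression routines:

```python
# Toy versions of run-length, dictionary, and delta encoding.

def run_length_encode(values):
    # Collapse runs of repeated values into [value, count] pairs.
    out = []
    for v in values:
        if out and out[-1][0] == v:
            out[-1][1] += 1
        else:
            out.append([v, 1])
    return out

def dictionary_encode(values):
    # Replace each distinct value with a small fixed-width code.
    codes = {}
    encoded = []
    for v in values:
        codes.setdefault(v, len(codes) + 1)
        encoded.append(codes[v])
    return codes, encoded

def delta_encode(values):
    # Store small offsets from a per-container base value (here, the mean).
    base = sum(values) // len(values)
    return base, [v - base for v in values]

dates = ["12/25/11"] * 4 + ["12/26/11"] * 2
print(run_length_encode(dates))   # [['12/25/11', 4], ['12/26/11', 2]]

states = ["Texas", "Georgia", "Texas", "Florida"]
codes, encoded = dictionary_encode(states)
print(codes, encoded)             # {'Texas': 1, 'Georgia': 2, 'Florida': 3} [1, 2, 1, 3]

base, deltas = delta_encode([1000003, 1000001, 1000006])
print(base, deltas)               # base plus each delta reconstructs the original value
```

Each encoder exploits low variance within a single column, which is exactly why these routines pay off in columnar containers and not in rows of mixed field types.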
Compression methods like these lose their effectiveness when a variety of field types, such as those found in a typical row, need to be stored consecutively. The compression methods are applied automatically (if desired) to each container, and can vary across all the columns of a table, or even from container to container within a column, based on the characteristics of the data in the container. Multiple methods can be used with each column, which is a strong feature of Teradata Columnar. The compounding effect of the compression in columnar databases is a tremendous improvement over the standard compression that would be available for a strict row-based DBMS.
Teradata Indexes
Teradata provides several indexing options for optimizing the performance of your relational databases:
i. Primary indexes
ii. Secondary indexes
iii. Join indexes
iv. Hash indexes
v. Reference indexes
Primary Index
The primary index determines the distribution of table rows on the disks controlled by AMPs. In the Teradata RDBMS, a primary index is required for row distribution and storage. When a new row is inserted, its hash code is derived by applying a hashing algorithm to the value in the column(s) of the primary index (as shown in the following figure). Rows having the same primary index value are stored on the same AMP.
Rules for defining primary indexes
The primary index for a table should represent the data values most used by the SQL to access the data for the table. Careful selection of the primary index is one of the most important steps in creating a table. Defining primary indexes should follow these rules:
* A primary index should be defined to provide a nearly uniform distribution of rows among the AMPs; the more unique the index, the more even the distribution of rows and the better the space utilization.
* The index should be defined on as few columns as possible.
* A primary index can be either unique or non-unique.
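The hash-based distribution scheme above can be sketched as follows, using Python's hashlib as a hypothetical stand-in for Teradata's actual hashing algorithm and an assumed configuration of four AMPs:

```python
import hashlib
from collections import Counter

NUM_AMPS = 4  # assumed configuration, for illustration only

def amp_for(primary_index_value):
    # Hash the primary index value and map the hash code to an AMP
    # (hashlib.md5 stands in for Teradata's real hashing algorithm).
    digest = hashlib.md5(str(primary_index_value).encode()).hexdigest()
    return int(digest, 16) % NUM_AMPS

# Rows with the same primary index value always land on the same AMP,
# while a fairly unique index spreads rows nearly evenly across all AMPs.
placement = Counter(amp_for(f"cust-{i}") for i in range(10_000))
print(sorted(placement.items()))
```

The counter output illustrates the first rule above: the more unique the primary index values, the closer each AMP's share comes to an even split.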
A unique index must have a unique value in the corresponding fields of every row; a non-unique index permits the insertion of duplicate field values. The unique primary index is more efficient. Once created, the primary index cannot be dropped or modified; the index must be changed by recreating the table. If a primary index is not defined in the CREATE TABLE statement through an explicit declaration of a PRIMARY INDEX, the default is to use one of the following:
* PRIMARY KEY
* First UNIQUE constraint
* First column
The primary index values are stored as an integral part of the primary table. The index should be based on the set selection most frequently used to access rows from a table and on the uniqueness of the value.
Secondary Index
In addition to a primary index, up to 32 unique and non-unique secondary indexes can be defined for a table. Compared to primary indexes, secondary indexes allow access to information in a table by alternate, less frequently used paths. A secondary index is a subtable that is stored on all AMPs, but separately from the primary table. The subtables, which are built and maintained by the system, contain the following:
* RowIDs of the subtable rows
* Base table index column values
* RowIDs of the base table rows (pointers)
As shown in the following figure, the secondary index subtable on each AMP is associated with the base table by the rowID.
Defining and creating secondary indexes
Secondary indexes are optional. Unlike the primary index, a secondary index can be added or dropped without recreating the table. One or more secondary indexes can be defined in the CREATE TABLE statement, or added to an existing table using the CREATE INDEX statement or ALTER TABLE statement. DROP INDEX can be used to drop a named or unnamed secondary index. Because secondary indexes require subtables, these subtables require additional disk space and, therefore, may require additional I/Os for INSERTs, DELETEs, and UPDATEs.
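The subtable idea can be sketched as a toy model, with hypothetical column names rather than Teradata's physical layout: the secondary index maps each indexed value to the row IDs of the base table, providing the alternate access path.

```python
# Toy secondary-index subtable (illustrative only, not Teradata's layout).
base_table = {  # row_id -> row; distributed by primary index in reality
    101: {"cust_id": 10, "city": "Dayton"},
    102: {"cust_id": 20, "city": "Austin"},
    103: {"cust_id": 30, "city": "Dayton"},
}

# "CREATE INDEX (city)": build a subtable mapping index values to row IDs.
secondary = {}
for row_id, row in base_table.items():
    secondary.setdefault(row["city"], []).append(row_id)

# Access by the alternate path: look up the subtable, then fetch base rows.
dayton_rows = [base_table[rid] for rid in secondary["Dayton"]]
print([r["cust_id"] for r in dayton_rows])  # [10, 30]
```

The model also makes the maintenance cost visible: every INSERT, DELETE, or UPDATE on the base table must touch the subtable as well, which is the extra I/O mentioned above.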
Generally, secondary indexes are defined on column values frequently used in WHERE constraints.
Join Index
A join index is an indexing structure containing columns from multiple tables, specifically the resulting columns from one or more tables. Rather than having to join individual tables each time the join operation is needed, the query can be resolved via a join index, which in most cases dramatically improves performance.
Effects of join indexes
Depending on the complexity of the joins, the join index helps improve the performance of certain types of work. The following need to be considered when manipulating join indexes:
* Load Utilities: Join indexes are not supported by the MultiLoad and FastLoad utilities; they must be dropped and recreated after the table has been loaded.
* Archive and Restore: Archive and Restore cannot be used on a join index itself. During a restore of a base table or database, the join index is marked as invalid. The join index must be dropped and recreated before it can be used again in the execution of queries.
* Fallback Protection: Join index subtables cannot be Fallback-protected.
* Permanent Journal Recovery: The join index is not automatically rebuilt during the recovery process. Instead, the join index is marked as invalid, and it must be dropped and recreated before it can be used again in the execution of queries.
* Triggers: A join index cannot be defined on a table with triggers.
* Collecting Statistics: In general, there is no benefit in collecting statistics on a join index for joining columns specified in the join index definition itself. Statistics related to these columns should be collected on the underlying base table rather than on the join index.
Defining and creating join indexes
Join indexes can be created and dropped by using the CREATE JOIN INDEX and DROP JOIN INDEX statements.
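A toy sketch of the join-index idea, in Python with hypothetical table names: the join result is materialized once, so later queries read the prejoined rows instead of re-joining the base tables each time.

```python
# Toy join index (illustrative only, not Teradata internals).
orders = [
    {"order_id": 1, "cust_id": 10, "amount": 99.0},
    {"order_id": 2, "cust_id": 20, "amount": 45.0},
]
customers = {10: "Acme", 20: "Globex"}

# "CREATE JOIN INDEX": materialize the resulting columns from both tables.
join_index = [
    {"order_id": o["order_id"],
     "customer": customers[o["cust_id"]],
     "amount": o["amount"]}
    for o in orders
]

# A later query is resolved from the join index directly, with no join step.
acme_total = sum(r["amount"] for r in join_index if r["customer"] == "Acme")
print(acme_total)  # 99.0
```

The trade-off described next follows directly: every update to `orders` or `customers` obliges the system to regenerate the affected portion of the stored join result.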
Join indexes are automatically maintained by the system when updates (UPDATE, DELETE, and INSERT) are performed on the underlying base tables. Additional steps are included in the execution plan to regenerate the affected portion of the stored join result.
Hash Indexes
Hash indexes are used for the same purposes as single-table join indexes. The principal differences between hash indexes and single-table join indexes are listed in the following table. Hash indexes create a full or partial replication of a base table with a primary index on a foreign key column, to facilitate joins of very large tables by hashing them to the same AMP. You can define a hash index on one table only. The functionality of hash indexes is a superset of that of single-table join indexes. Hash indexes are not indexes in the usual sense of the word. They are base tables that cannot be accessed directly by a query. The Optimizer includes a hash index in a query plan in the following situations:
* The index covers all or part of a join query, thus eliminating the need to redistribute rows to make the join. In the case of partial query covers, the Optimizer uses certain implicitly defined elements in the hash index to join it with its underlying base table to pick up the base table columns necessary to complete the cover.
* A query requests that one or more columns be aggregated, thus eliminating the need to perform the aggregate computation.
For the most part, hash index storage is identical to standard base table storage except that hash indexes can be compressed. Hash index rows are hashed and partitioned on their primary index (which is always defined as non-unique). Hash index tables can be indexed explicitly, and their indexes are stored just like non-unique primary indexes for any other base table. Unlike join indexes, hash index definitions do not permit you to specify secondary indexes.
The major difference in storage between hash indexes and standard base tables is the manner in which the repeated field values of a hash index are stored.
Reference Indexes
A reference index is an internal structure that the system creates whenever a referential integrity constraint is defined between tables using a PRIMARY KEY or UNIQUE constraint on the parent table in the relationship and a REFERENCES constraint on a foreign key in the child table. The index row contains a count of the number of references in the child, or foreign key, table to the PRIMARY KEY or UNIQUE constraint in the parent table. Apart from capacity planning issues, reference indexes have no user visibility.
References for Teradata
http://www.teradata.com/products-and-services/database/
http://teradata.uark.edu/research/wang/indexes.html
http://www.teradata.com/products-and-services/database/teradata-13/
http://www.odbms.org/download/illuminate%20Comparison.pdf