Monday, September 30, 2019

Nationalism and its key factors Essay

Nationalism is the belief that people should be loyal to their nation rather than to their king. The six bonds that create a nation-state are nationality, language, culture, history, religion, and territory. While the United States does not share all of these features, I still believe it is a nation-state. Nationality is a belief in a common ethnic ancestry. I believe that the United States does not have a common ancestry. Almost no one is a "true" American; nearly everyone has ancestors who immigrated here from another country. We are not all from the same place. Although there are many bilingual people in the US, almost everyone speaks a common language, English. It is our national language and it is standardized just about everywhere. Culture is another area where the US fits into nationalism. Almost everyone follows American styles of clothing. We also, for the most part, eat the same foods and behave in the same ways. I believe the US also has a common history. Even though almost everyone has an ancestor who immigrated from another country at one time or another, most Americans consider the history of the United States their own. There is not one single religion in the United States. One of the United States' selling points was its freedom of religion. While some religions are more popular than others, there is not one that is shared by nearly all Americans. While most of these points can be argued, I believe that the point on territory cannot. The United States has its own borders and areas that belong to it. This land is known to the world as United States territory and is recognized by everyone as its land. While the United States does not hold all of the bonds to be true, I still believe it to be a nation-state. It cannot be denied that almost every citizen of the United States is loyal to the country itself.

National Government in America 1775 to 1789

Americans developed many types of "national" governments between 1775 and 1789. Each of these variations on centralized government served different purposes throughout this time period. They also represented the ideologies and fears of the people in how they were regarded, empowered, and organized. One of the first unified fronts that the colonial states presented in the form of a centralized government was the formation of the Second Congress. The Second Congress met on May 10, 1775 in Philadelphia. It had many of the same restrictions that the First Congress had when it met in September 1774. Its purpose was to act in two contradictory ways: it had to raise money for an army, all the while negotiating a reconciliation with England. Some of the delegates included John Hancock, John and Samuel Adams, John Dickinson, George Washington, Benjamin Franklin, and James Madison. Although these delegates were, for the most part, of the same mind in 1775, times would later change them, pushing them in different political directions. This Congress had virtually no power. It did not have any authority to write or change laws. But it could raise an army, finance the war, gather a pro-independence coalition, and explore diplomatic alliances with foreign countries. So little power was given to Congress by the states because of a deeply embedded fear of a powerful centralized government. The states were unwilling to repeat the mistake made in Britain of placing so much power in such a small governing body, and they kept that in mind when they elected to draft the Articles of Confederation. The Articles of Confederation, drafted by John Dickinson in May 1775, allowed Congress to issue bills, borrow money, settle all disputes between states, and administer unsettled western lands. However, many state governments did not like the last two provisions (settling disputes between states and controlling all western lands), and those issues would cause Congress to debate the Articles for years. To amend the Articles, all states had to agree unanimously to the changes. Again, the second-class powers given to the national government were due to the states' fear of an all-powerful central government, for it could potentially jeopardize the freedoms of the people it governed, just as it had when the king of England and Parliament passed various revenue-generating taxes on the colonies without representation. By 1781 economic turmoil began to weaken the newly formed confederation of the states. The cost of the war had plunged the colonies into economic hardship. The period from 1781 to 1788 is known as the "critical period." After the revolution the first priority was to pay for the war itself. Congress had given land certificates to soldiers who fought in the war against the British as payment for their service. It had also printed money to pay for military supplies and to pay soldiers, but the money was never backed by "hard money." Hard money is gold or silver. In 1775 this printed money had some value, but it was virtually worthless by 1781. Many states had also printed paper money in excess, further confusing and disrupting the economy and plunging the country into deeper economic debt. Even though Congress was granted the right to print money, it did not have the right to tax. Without the ability to tax, Congress had no means of collecting revenue to pay for the war.
This weakness was exposed when Robert Morris served as Superintendent of Finance for the Confederation from 1781 to 1784. Morris originally proposed a five-percent impost tax on all goods imported into the country. But most coastal states already had impost taxes, which they used to pay for their portions of the war debt. Also, Congress did not have the authority to impose such taxes on the states' populations according to the Articles of Confederation, nor did it have any means of enforcing compliance with such tax laws. This proposal was soon dropped. A second plan by Morris called for a nationally supported bank that would hold Congress's hard money along with that of other investors and private citizens. In return the bank would give the government short-term loans. This plan also allowed the bank to print "banknotes." Banknotes were paper money backed by hard money in the bank's vaults; therefore they would not depreciate in value. The theory was that paper money backed by hard money would provide the nation with some economic stability. Morris's national bank worked with limited success. The bank was relatively small; it printed little money (even though it actually printed more paper money than it could back in hard money) for circulation. Therefore it had limited impact on the economy, providing little stability. In the fall of 1786 the economic troubles of the Confederation reached a peak. Armed men threatened the courts in Massachusetts over newly imposed taxes passed by the state. Not only were additional taxes passed, but the state also insisted that they be paid in hard money, and most citizens at the time had little hard money on hand. This caused many to arm themselves again, in protest against the hardships that the government was imposing on them. The leader, Daniel Shays, was a farmer who had served as a captain in the Continental army during the revolution. Shays, with 2,500 others, marched on the courts of Massachusetts. James Bowdoin, governor of Massachusetts at the time, quickly put the rebellion down. Later this uprising would be called Shays' Rebellion. The significance of Shays' Rebellion was that it demonstrated that the nation was still in unrest. Originators of the revolution found themselves on the other side of the table: in their efforts to repay the war debt and maintain their standard of living and the success of their businesses, they had placed economic hardships on the people in the form of excessive taxes. Although Congress and the state governments had few options (such as printing money in excess or heavily taxing the people), some thought that there was a better way. The economic problems came from the simple fact that all thirteen states printed their own money. Some states with strong economies, such as Virginia and New York, relied solely on taxes to repay their portions of the war debt quickly, while other states with poor economies simply printed more money to compensate for monetary fluctuations. One theory was that a unified economy would help ease the situation and the growing tensions. But to have that you would need a unified national government, one with more powers to manage it than the present Congress had. At the prompting of James Madison, the Virginia legislature called a meeting of the states. The way this meeting was called bypassed the confederation Congress.
The purpose of this meeting was to try to modify the Articles of Confederation, to give Congress power to regulate trade in hopes of improving the economic problems. But only five of the nine states that agreed to participate attended. Those who did attend all had the same impression of a pending national crisis, so the meeting was rescheduled for Philadelphia in May 1787 in order to try to get more participants to attend. During the time it took for a quorum to gather, Madison and the Virginian delegates drafted a fifteen-point plan which totally restructured the confederation. Once the seriousness of what was really under discussion was revealed, it was unanimously decided to keep all of the proceedings completely confidential. To help keep order, George Washington was elected to preside over the convention. Virginia was the first to propose vast changes in the federal government. Their plan, presented by Edmund Randolph, called for a three-branch government with a two-chamber legislature, a powerful executive, and a judiciary branch. This government operated directly on the people. Congress had the right to veto state legislation, coerce states militarily to obey national laws, and legislate in areas where the states were incompetent. The executive and judiciary branches could jointly veto any legislation presented by Congress. To say the least, this plan was heavily debated, but it did not meet any outright opposition. William Paterson of New Jersey presented an alternative plan in mid-June. This plan became known as the New Jersey Plan and resembled the Articles of Confederation in some respects. It had a single-house Congress in which each state would have one vote, but it would have a shared three-man presidency elected by Congress. This three-man group took the place of the executive and judiciary branches. This plan gave vast powers to Congress: it was allowed to regulate trade and to use force on unruly states. However, the plan still rested on the confederation principle that the national government was to be an assembly of states and not of the people. A compromise later broke the heavy debates over the two plans. By mid-July it was agreed that the new form of government should be a three-branch government with supreme power over the states and a bicameral legislature (with a lower House of Representatives apportioned by population and a Senate in which each state was represented). In the Senate the two senators could vote independently of each other. This was the first emergence of the present-day federal government: a government based on the representation of the people. The next hurdle was to define who the people were. The southern states had large populations of people who could not vote but who would give them power through the new form of Congress. These people were slaves, and the debate was whether they were citizens or property. To the southern states they were citizens, since counting them would allow the South more power in Congress. However, smaller northern states with few or no slaves viewed them as property, with no right to representation in Congress. This debate created what is known as the "three-fifths clause," which stated that only three-fifths of the non-voting population could be counted when deciding the number of representatives in Congress. With most of the problems out of the way, the next step was to have the thirteen states ratify the new form of government.
Only nine states needed to ratify the proposal in order to make it law; however, it was going to be an uphill battle, for the states would not give up their powers so easily. The proponents of the new government called themselves Federalists; opponents took the name of Anti-Federalists. By May 1788, eight of the states had ratified the proposal. To help gain more support, the federalists Alexander Hamilton, James Madison, and John Jay wrote a series of essays called "The Federalist Papers." The essays started in October 1787 and totaled eighty-five altogether. They were published in New York newspapers in hopes of winning that state's vote for the new government. New York was critical to the success of the proposal; after Virginia, New York was the next most influential state. If New York could be persuaded to pass the new form of government, it would assure solidity and legitimacy for the new government. Even though Virginia's and New York's ratification was not necessary for the passing of the new government, the federalists wanted to have a unanimous vote. Having these two states would help in pulling the remaining two states (North Carolina and Rhode Island) into a unanimous agreement among the thirteen states. These two states did finally ratify the new government, but not until May of 1790, and at that, they barely ratified it, by only a two-vote margin. Prior to the revolution the prevailing ideology was that government should be local and directly represent the people. If a government was too large and too far from the people it served, it had the potential to become a dictatorship in its management of the country's affairs. But because of the economic strain of the war, the thirteen different economies and monetary systems were not adequate, nor could they stabilize the economics of the confederacy. A few politicians of the time (like James Madison and Alexander Hamilton) had a vision of a more powerful centralized government that would be able to bring the states in line with national policy and help stabilize the local economies, while showing the world a unified front among the states. Several debates would develop over the idea of a more powerful government, over such things as the definition of representation by population, the western territories, and the power of the states versus the power of the federal government and Congress. Compromises, persuasive arguments, and essays would be required from everyone. But finally, in May of 1790, the thirteen states would agree on a larger, more powerful federal government, one which had authority over the states in matters of taxation, trade, and fundamental laws that traverse state lines.

Sunday, September 29, 2019

Extensive Reading Essay

There are many experts who give definitions of reading. One of them is Aebersold and Field, who say: "… reading is what happens when people look at a text and assign meaning to the written symbols in that text; further, the text and the reader are the two physical entities necessary for the reading process to begin" (1997: 15). This means that when someone sees written symbols in a text, something is visualized in the reader's mind; this process is called reading. Another expert, Williams (1999: 2), states that reading is a process whereby one looks at and understands what has been written. In line with Williams, Heilman (1961: 8) says that reading is a process of getting meaning from printed word symbols; it is not merely a process of making conventionalized noises associated with these symbols. In line with them, De Boer and Dallmann (1982: 23) say that reading is a process involving meaningful reaction to printed symbols. Wallace, in her book entitled "Reading," adds that reading is interpreting, which means reacting to a written text as a piece of communication (1996: 4). These definitions have the same point: reading is a process of getting the meaning of a written text and reacting to it as a form of communication between the reader and the writer. Different from the experts above, Davies defines reading from another point of view. She says that reading is private: it is a mental or cognitive process which involves a reader in trying to follow and respond to a message from a writer who is distant in space and time (1995: 1). This means that reading connects the reader and the writer although they live in different places and in different periods. Reading is a mental, cognitive process, so as the result of this activity the reader is able to respond to the text's message. Because reading is a private activity, the process of reading and responding is not directly observable. Most events told in written texts are past experiences, either the writer's experiences or others'. The success of reading depends on the reader's ability to visualize them in order to understand and interpret their meaning. Dealing with this fact, Kennedy says: reading is the ability of an individual to recognize a visual form, associate the form with a sound and/or meaning acquired in the past, and, on the basis of past experience, understand and interpret its meaning (1981: 5). Another expert, Grellet (1981: 7), defines reading as a constant process of guessing, in which what one brings to the text is often more important than what one finds in it. This means that before the reader reads the text, he guesses its content and already has his own concept of it; after he reads the text, the reader relates his own concept to the text's message. Based on the definitions above, it can be concluded that reading is the process of bringing a concept to the text and relating it to the meaning taken from the text, which is usually a past experience, visualizing it, understanding it, and giving responses as an interpretation of this process. Kennedy (1981: 188) says that comprehension is the ability of one to find, interpret, and use ideas. Then, in the Oxford Advanced Learner's Dictionary, comprehension is defined as the power to understand something (Hornby, 1995: 235).
According to these two definitions, it can be said that comprehension is the ability to understand something through finding, interpreting, and using ideas. In line with the statements above, it can be concluded that reading comprehension is the ability to get the meaning of written symbols, visualize it, and give responses as the interpretation of this process. Narrative text. There are some approaches to teaching reading; one of them is the genre-based approach. According to Hartono (2005: 4), the term "genre" is used to refer to particular text-types, not to traditional varieties of literature. This means that a genre is a type or kind of text, defined in terms of its social purposes; it is also the level of context dealing with social purposes. Based on communicative purpose, Pardiyono (2007: 93-98) classifies texts into eleven types: description, recount, narration, procedure, explanation, discussion, exposition, news item, report, anecdote, and review. However, in this study the focus will be on the narrative text. Considering the social function, generic structure, and language features of narrative text, a narrative text can be defined as a text which tells about past activities or events, concerning a problematic experience and its resolution, in order to amuse and even give moral messages to the reader. The explanation of the social function, generic structure, and language features of narrative text is as follows: 1. Social function. The social function of a text is quite similar to the purpose of the text. For narrative text, the social function is to amuse, entertain, and deal with actual or vicarious experience in different ways. 2. Generic structure. The generic structure of narrative text consists of three parts: orientation, complication, and resolution, sometimes completed by a coda. The further explanation of these parts of narrative text is as follows: a. Orientation. Orientation is the introduction of the text. It includes what is inside the text, what the text is about in general, who is involved in the text, and when and where it happens. b. Complication. In the complication, the text talks about what happens to the participants. It explores the conflict among the participants. Complication is the main element of narrative; without complication, the text is not a narrative. The conflict can be shown as a natural, social, or psychological conflict. c. Resolution. Resolution is the end of the narrative text. This is the phase where the participants solve the problem raised by the conflict. It does not matter whether the participants succeed or fail; the point is that the conflict is ended. 3. Language features. According to Hartono (2005: 7), the language features used in narrative text are: a. focus on specific participants, b. use of past tense, c. use of temporal conjunctions, d. use of material (or action) processes. Video as Media in Teaching. 1. Media. a. The Definition of Media. Etymologically, the word "media" comes from the Latin word "medius." Literally, it means mediator or companion; media are the mediators or companions of messages from the sender to the receiver (Arsyad, 2005: 3). The Association for Educational Communications and Technology (AECT), in Sadiman (2002: 6), defines media as all forms and channels which are used by people to convey information. According to Gagne, media are the many kinds of components in students' environment that can stimulate them to study (Sadiman et al., 2002: 6).
Based on the definitions above, it can be concluded that media are all things that can be used to deliver a message from sender to receiver so that it can stimulate the students' minds, feelings, attention, and interest in order to carry out the teaching and learning process. b. Kinds of Media. Media can be classified into three categories: visual, audio, and audio-visual (http://edu-articles.com). 1) Visual media. There are two kinds of visual media: unprojected media and projected media. a) Unprojected media. Unprojected media can be divided into: (1) Realia or real things. The object need not be presented in class, but students should be able to see and observe it, for example when students observe an ecosystem, a plant, the diversity of living things, et cetera. This medium is able to give real experiences to the students. (2) Models. A model is an imitation of a real thing presented in three-dimensional form as a substitute for the real thing. This medium helps the teacher to present objects that cannot be brought into the class, for example the digestive system, the respiratory system, the excretory system, et cetera. (3) Graphics. The functions of graphics are to catch the students' attention, clarify the lesson, and illustrate facts or concepts that are easily forgotten. There are many kinds of graphics, such as pictures, sketches, schemes, charts, and graphs. b) Projected media. There are two types of projected media: (1) Transparency on OHP. This is considered a practical medium because the teacher does not have to change the layout of the class and is still able to face the students. Transparency media include software (OHT) and hardware (OHP). (2) Bordered film or slides. This is a transparent film that usually measures 35 mm with a 2x2 inch border. The use of this medium is the same as the OHP, but its visualization is better than the OHP's. 2) Audio media. There are two kinds of audio media that are commonly used: a) Radio. Radio is an electronic tool that can be used to listen to the news, important new events, life problems, et cetera. b) Audio cassette. This tool is cheaper than the others because the supply and maintenance costs are relatively low. 3) Audio-visual media. There are many kinds of audio-visual tools: a) Video. This is one kind of audio-visual medium, besides film. In the learning process, this tool is usually presented in the form of a VCD. b) Computer. This tool has all the benefits of the other media. A computer is able to show text, pictures, sound, and moving images, and can also be used interactively. A computer can even be connected to the internet to browse unlimited learning sources. c. The Characteristics of Educational Media. Gerlach and Ely, in Arsyad (2005: 12-14), propose three characteristics of educational media: the fixative property, the manipulative property, and the distributive property. 1) Fixative property. This characteristic describes the ability of media to record, save, preserve, and reconstruct an event or object. The event or object can be put in the right order and rearranged using media such as photographs, video tape, audio tape, computer discs, and film. Because of this characteristic, an event that happens just once in a lifetime can be preserved and rearranged for education. 2) Manipulative property. This characteristic enables an event to be transformed, so that an event that takes a long time can be shortened in order to be shown in class, for example the process of metamorphosis, a recording of motion in a sports class, plant treatment, et cetera.
3) Distributive property. This characteristic enables an object or event to be transported through space and presented simultaneously to a number of students, each of whom gets the same experience. Once information is recorded, it can be reproduced many times and used in many different places. d. The Importance of Media in Teaching. The importance of media can be seen from their roles and functions in education. As stated by Prawiradilaga and Siregar, media have two main roles: media as AVA (audio-visual aids), which give the students concrete experiences, and media as communication, which connect the students as receivers with the material so that it can be received well (2004: 6). In the following pages, Prawiradilaga and Siregar (2004: 8-13) explain the detailed functions of media: 1) give knowledge about the learning goals; 2) motivate the students; 3) present the information; 4) stimulate discussion; 5) lead the students' activities; 6) carry out exercises and quizzes; 7) strengthen the learning process; 8) give simulation experiences. Meanwhile, the Encyclopedia of Educational Research, in Arsyad (2005: 25), elaborates the functions of media in teaching as follows: 1) lay down concrete bases for thinking, so as to decrease verbalism; 2) improve the students' attention; 3) lay down important bases for the development of study, so the lesson becomes more solid; 4) give real experiences to the students so they can make their own efforts; 5) encourage regular and continuous thinking, especially about pictures of life; 6) help the emergence of understanding that can support the students' language development; 7) give experiences that cannot be obtained in any other way and provide efficiency and variety in ways of studying. Besides, Nugraha adds to the importance of media (http://yudinugraha.co.cc): 1) the presentation of the material becomes more standardized; 2) media that are structured and planned well help the teacher teach with the same quality and quantity for all classes; 3) the learning process is more interesting and interactive; 4) the students are more active; 5) it is efficient in the use of time; 6) the learning quality of the students can be improved; 7) et cetera. e. The Ways of Choosing Media in Teaching. Sudirman (1991), in Nugraha (http://yudinugraha.co.cc), proposes three principles for choosing media in teaching: 1) The goal of choosing media. The choice of the media that will be used should be based on the goal of choosing them. 2) The characteristics of media. Each medium has its own characteristics, so it should be matched to the material. 3) Alternative choices. Choosing media is a process of making a decision among many alternative choices. Besides the principles above, according to Aristo, the factors that should be taken into account in choosing media are (http://aristorahadi.wordpress.com): 1) Objectivity. A teacher should be objective; that is, a teacher cannot choose media based only on his own preference. 2) Learning program. The media that will be used should be suited to the level of the students. 3) Technical quality. Technically, the media used should be checked as to whether they fulfill the requirements or not. 4) Effectiveness. Can the media help the students achieve the learning goal? 5) Time. How much time is needed to prepare and present the media? 6) Cost. The cost of presenting the media must be adjusted to the budget. 7) Availability.
The ease of finding the media should be considered too. If the media we look for are not available, we can substitute other media that are suitable. 2. Narrative Video. a. The Definition of Narrative Video. Video is one of the media used to convey the learning message. In the Oxford Learner's Dictionary, video is defined as a type of magnetic tape used for recording moving pictures and sound (1995: 1327). This means that video has two elements: audio and visual. The audio enables the students to receive the message using their hearing, and the visual enables the students to receive the message using their eyesight. According to Sadiman (2002: 76), the message presented in a video can be factual or fictitious, and can be informative, educative, or instructive. It is informative in that much information from many experts in this world can be recorded on video tape, so it can be received by the students wherever they are. Video is also educative and instructive; the message of the video can give concrete experiences to the students, so they can apply it in their daily lives. As for narrative, it can be defined based on its social function, generic structure, and language features as a text which tells of past activities or events, concerning a problematic experience and its resolution, in order to amuse and even give moral messages to the reader. Considering the definitions above, narrative video can be described as a certain kind of magnetic tape used for recording moving pictures and sound about past activities or events which concern a problematic experience and its resolution, in order to amuse and even give moral messages to the reader. b. The Benefits of Using Narrative Video in Teaching. Generally, the benefits of using narrative video in teaching are much the same as the benefits of using other videos in teaching. According to Sadiman et al. (2002: 76-77), video has some benefits: 1) It can catch the students' attention easily. 2) Much information from many experts in this world can be recorded on video tape, so it can be received by the students wherever they are. 3) A difficult demonstration can be prepared beforehand, so the teacher is able to concentrate on his presentation. 4) It is more efficient in the use of time. 5) It can present dangerous objects that cannot be brought into the class. 6) The volume can be adjusted. 7) The picture can be frozen so the teacher's comments can be inserted. 8) The lights of the room do not need to be turned off. c. The Purposes of the Use of Narrative Video in Teaching. Anderson (1994: 104-105) proposes some purposes of the use of video in teaching. These purposes are divided into three aspects: the cognitive aspect, the psychomotor aspect, and the affective aspect. These purposes are the same as the purposes of narrative video in teaching: 1) For the cognitive aspect: a) to develop recall and motion skills, for example observation of the relative speed of a moving object; b) to show a series of motionless pictures, without sound, like photos or bordered film; c) to give knowledge about certain laws and principles; d) to show the right way of behaving in a performance, especially regarding the students' interaction. 2) For the psychomotor aspect: a) to show motion skills well, because the video can be sped up or slowed down so the motion can be observed clearly; b) the students get direct, visual feedback about a motion, so they can correct their motion well.
3) For the affective aspect: video can be a good medium for influencing attitude and emotion, for example by playing a short story that is suitable to the topic.

BIBLIOGRAPHY

Aebersold, Jo Ann and Mary Lee Field. 1997. From Reader to Reading Teacher. USA: Cambridge University Press.
Anderson, Ronald. 1987. Pemilihan dan Pengembangan Media dalam Pembelajaran. Jakarta: Rajawali Press.
Arsyad, Azhar. 2005. Media Pembelajaran. Jakarta: Raja Grafindo Persada.
Brown, H. Douglas. 1994. Principles of Language Learning and Teaching. New Jersey: Prentice Hall Inc.
Burns, Anne. 1999. Collaborative Action Research for English Language Teachers. New York: Cambridge University Press.
Dallman, Martha, Roger L. R., Lynette Y. C. C., and John J. D. 1982. Reading. New York: CBS College Publishing.
Davies, Florence. 1995. Introducing Reading. England: Penguin Books.
Elliot, et al. 1999. Educational Psychology: Effective Teaching, Effective Learning. Boston: McGraw-Hill.
Grellet, Francoise. 1981. Developing Reading Skills: A Practical Guide to Reading Comprehension Exercises. New York: Cambridge University Press.
Furchan, Arief. 1982. Pengantar Penelitian dalam Pendidikan. Surabaya: Usaha Nasional.
Harmer, Jeremy. 1998. How To Teach English. Harlow: Longman.
Hartono, Rudi. 2005. Genre of Texts. Semarang: Semarang State University.
Heilman, Arthur W. 1961. Principles and Practices of Teaching Reading. Columbus: Charles E. Merrill Books Inc.
Hopkins, David. 1985. A Teacher's Guide to Classroom Research. Philadelphia: Open University Press.
Hornby, A. S. 1995. Oxford Advanced Learner's Dictionary. New York: Oxford University Press.
Kartono, Kartini. 1983. Pengantar Metodologi Riset Sosial. Bandung: Penerbit Mandar Maju.
Kennedy, Eddie C. 1981. Methods of Teaching Developmental Reading. USA: F. E. Peacock Publishers Inc.
Nugraha, Yudi. _____. Media Pembelajaran dalam Pendidikan. Available at http://yudinugraha.co.cc
Nunan, David. 1992. Research Methods in Language Teaching. New York: Cambridge University Press.
Pardiyono. 2007. Pasti Bisa! Teaching Genre-Based Writing. Yogyakarta: Andi Offset.
Prawiradilaga, Dewi Salma and Eveline Siregar. 2004. Mozaik Teknologi Pendidikan. Jakarta: Prenada Media.
Rahadi, Aristo. 2008. Bagaimana Memilih Media Pembelajaran. Available at http://aristorahadi.wordpress.com
Sadiman, Arif S., et al. 2002. Media Pendidikan. Jakarta: Raja Grafindo Perkasa.
Wallace, Catherine. 1996. Reading. New York: Oxford University Press.
Williams, Eddie. 1999. Reading in the Language Classroom. London: Phoenix ELT.
Zainul, Asmawi and Noehl Nasoetion. 1997. Program Pengembangan Keterampilan Teknik Instruksional (Pekerti) Untuk Dosen Muda. Jakarta: Universitas Terbuka Jakarta Press.
Zuber-Skerritt, Ortrun. 1996. New Directions in Action Research. London: Falmer Press.
www.smanbanyumas.sch.id
www.youtube.com

IMPROVING STUDENTS' READING COMPREHENSION ON NARRATIVE TEXT USING NARRATIVE VIDEO (An Action Research at the Tenth Grade of SMA Negeri Banyumas in the Academic Year of 2010/2011). PRI WAHYUDI HERMAWAN, K2208043, ENGLISH DEPARTMENT, FACULTY OF TEACHER TRAINING AND EDUCATION, SEBELAS MARET UNIVERSITY, 2010.

Open innovation

Open Innovation is a paradigm that assumes that firms can and should use external ideas as well as internal ideas, and internal and external paths to market, as they look to advance their technology. Open Innovation processes combine internal and external ideas into architectures and systems. Open Innovation processes utilize business models to define the requirements for these architectures and systems. The business model utilizes both external and internal ideas to create value, while defining internal mechanisms to claim some portion of that value. Open Innovation assumes that internal ideas can also be taken to market through external channels, outside the current businesses of the firm, to generate additional value. The open innovation paradigm treats research and development as an open system. Open Innovation suggests that valuable ideas can come from inside or outside the company and can go to market from inside or outside the company as well. This approach places external ideas and external paths to market on the same level of importance as that reserved for internal ideas and paths to market in the earlier era. Open Innovation is sometimes conflated with open source methodologies for software development. There are some concepts that are shared between the two, such as the idea of greater external sources of information to create value. However, open innovation explicitly incorporates the business model as the source of both value creation and value capture. This latter role of the business model enables the organization to sustain its position in the industry value chain over time. While open source shares the focus on value creation throughout an industry value chain, its proponents usually deny or downplay the importance of value capture. Chapter 5 in this volume will consider these points at greater length. At its root, open innovation assumes that useful knowledge is widely distributed, and that even the most capable R&D organizations must identify, connect to, and leverage external knowledge sources as a core process in innovation. Ideas that once germinated only in large companies now may be growing in a variety of settings – from the individual inventor or high-tech start-up in Silicon Valley, to the research facilities of academic institutions, to spin-offs from large, established firms. These conditions may not be present in every business environment, and scholars must be alert to the institutional underpinnings that might promote or inhibit the adoption of open innovation.

The Open Innovation Paradigm. The book Open Innovation (Chesbrough, 2003a) describes an innovation paradigm shift from a closed to an open model. Based on close observation of a small number of companies, the book documents a number of practices associated with this new paradigm. That book was written for managers of industrial innovation processes, and the work has received significant attention among managers. To the extent that such managers are able to assess the utility of new approaches, Open Innovation has achieved a certain degree of face validity within at least a small portion of high technology industries. Open Innovation has taken on greater saliency in light of the debate about globalization and the potential for the R&D function itself to become outsourced, as the manufacturing function was 20 years earlier.

Figure 1.1 shows a representation of the innovation process under the previous closed model of innovation. Here, research projects are launched from the science and technology base of the firm. They progress through the process, and some of the projects are stopped, while others are selected for further work. A subset of these are chosen to go through to the market. This process is termed a "closed" process because projects can only enter in one way, at the beginning, and can only exit in one way, by going into the market. AT&T's Bell Laboratories stands as an exemplar of this model, with many notable research achievements, but a notoriously inwardly focused culture.

Saturday, September 28, 2019

Cebu City Essay

The City of Cebu (Cebuano: Dakbayan sa Sugbo, Tagalog: Lungsod ng Cebu, Spanish: Ciudad de Cebú) is the capital city of the province of Cebu and is the "second city" of the Philippines, being the center of Metro Cebu, the second most populous metropolitan area in the Philippines after Metro Manila. With a population of 866,171 as per the 2010 census, it is the fifth most populated city in the country.[2] Cebu City is a significant center of commerce, trade and education in the Visayas area. The city is located on the eastern shore of Cebu island. It is the first Spanish settlement and the oldest city in the Philippines.[3] Cebu is the Philippines' main domestic shipping port and is home to about 80% of the country's domestic shipping companies. It is the center of a metropolitan area called Metro Cebu, which includes the cities of Carcar, Danao, Lapu-Lapu, Mandaue, Naga and Talisay and the municipalities of Compostela, Consolacion, Cordova, Liloan, Minglanilla and San Fernando. Metro Cebu has a total population of about 2.55 million people (2010 Census). Cebu City is bordered to the northeast by Mandaue City and the town of Consolacion; to the west are Toledo City and the towns of Balamban and Asturias; to the south are Talisay City and the town of Minglanilla. Across Mactan Strait to the east is Mactan Island. Geography. Cebu City has a land area of 315 square kilometres (122 sq mi). To the northeast of the city are Mandaue City and the town of Consolacion; to the west are Toledo City and the towns of Balamban and Asturias; to the south are Talisay City and the town of Minglanilla. Across Mactan Strait to the east is Mactan Island, where Lapu-Lapu City is located. Further east, across the Cebu Strait, is the island of Bohol. Demographics. Around the 1960s, the population of the city was about 91,000. The population reached 799,762 people in 2007, and as of the 2010 Census, the city's population had grown to 866,171 in over 161,151 households.[2] Education. Cebu City currently has ten large universities, each with a number of college branches throughout the city, and more than a dozen other schools specializing in various courses. Among these schools is the University of San Carlos, which has four campuses around the metropolitan area and is currently run by the Society of the Divine Word. Other institutions include the University of the Philippines Cebu, University of San Jose–Recoletos, Cebu Normal University, Cebu Doctors' University, University of Cebu, University of the Visayas, Cebu Institute of Technology – University, Southwestern University, St. Theresa's College, University of Southern Philippines Foundation, Cebu Technological University, Cebu Institute of Medicine, Cebu International School, Sacred Heart School – Ateneo de Cebu, and Colegio de la Inmaculada Concepcion. The upcoming Centro Escolar University – Cebu will be the fourth campus of the university after its Manila (main), Malolos, and Makati campuses.[17] Cebu City has 68 public elementary schools, 23 national high schools and 28 night high schools. These night high schools are operated by the city government. The Cebu City Public Library and Information Center is the only public library in Cebu.

Social Issues and Warren Court Decisions Essay

Social Issues and Warren Court Decisions - Essay Example. In addition, immigrants have long faced discrimination in areas such as housing, employment opportunities and education. Moreover, civil rights accords do not incorporate minorities such as the disabled, women and homosexuals. Until the 1860s, numerous states prevented or restricted women from owning their own property. The right of a woman to vote was not protected constitutionally until 1920, when the Nineteenth Amendment was ratified. The campaign against gender discrimination commenced when the 1964 Civil Rights Act came into force, effectively outlawing all forms of gender-based discrimination. As a result, individuals could not be discriminated against based on national origin, religion, age and race. Various supreme courts have ruled on the above two issues. A case in point is the Reed v. Reed Supreme Court drama involving Sally Reed as the appellant and Cecil Reed as the appellee. According to this case, the appellant claimed that the Idaho law favored the appointment of a man over a woman, for the mere reason of his being male, as administrator of an estate whose owner had died. The decision was made in favor of Sally Reed, the appellant, after the court found that the probate law of Idaho discriminated against women. This ruling was the first in favor of women's rights following the Fourteenth Amendment, which prevents states from enacting any law which shall "abridge (lessen) the constitutional rights and privileges of citizens of the United States nor deny to any person within its jurisdiction the equal protection of the law." The Equal Protection Clause guarantees that groups of persons or persons in similar situations should be treated equally. Ruth Bader Ginsburg, the case lawyer and subsequent Supreme Court justice, labeled the Reed case "the turning point case." A state law was for the first time held invalid because it allowed discrimination against women. The U.S. Supreme Court in 1857, in Dred Scott v. Sandford, 60 U.S. (19 How.) 393, 15 L. Ed. 691, concluded that the constitution did not apply to African Americans, as they were not considered to be citizens during the drafting of the constitution. New laws were needed after the civil war for purposes of extending civil liberties to former slaves. How the Court Decisions Affected Society after the Ruling. Reed v. Reed was the first U.S. Supreme Court ruling to conclude that laws imposing gender discrimination violated the Fourteenth Amendment's Equal Protection Clause. For decades after the ruling, the court utilized the precedent set in that case to strike down discriminatory laws against women. On the other hand, the ruling also benefitted men, as it prevented courts from basing their views on gender generalizations. The constitution's Thirteenth Amendment was enacted for purposes of making involuntary servitude and slavery unlawful. Moreover, the power to enact laws enforcing the new amendment was handed to Congress. Both cases had a positive impact on society. For instance, the women

Friday, September 27, 2019

The Mass - An Obligation Or Joy Term Paper Example | Topics and Well Written Essays - 3250 words

The Mass - An Obligation Or Joy - Term Paper Example. The primary objective of this paper is to establish that the Mass is a feast of joy and not an obligation. This research paper will compare and contrast the Old and New Testament viewpoints of the Mass and illustrate how its different parts make the Mass a joyful feast. The paper will also highlight how active participation during the Mass makes it a feast of joy. The concept and process of the Mass are similar in both the New and Old Testaments, although there are some differences in viewpoints of the Mass between the two Testaments. The New Testament Mass comprises two main parts: the Liturgy of the Word and the Liturgy of the Eucharist. These major parts are further divided into subsections to make a whole Mass. The standard duration of a Catholic Mass is two or three hours, and two or three Mass services may run on a Sunday. The Mass is a symbol of Jesus' sacrifice, which makes present the passion of Christ through the priest and joins human beings as partakers of His meal. Most people believe that the Mass is an obligation for all Catholics; however, the Mass is a joyful and voluntary activity among Catholics. The Celebration of the Mass in the Old Testament and Comparisons to the New Testament Concept of the Mass. The fundamental nature of the Mass is contingent upon the venue and the functions that participants perform. The Mass in the Old Testament was celebrated in tents and temples. There existed the Holy of Holies, which housed the Ark of the Covenant. The Ark of the Covenant was covered with the Propitiatory, or Mercy Seat. The Ark of the Covenant contained the Ten Commandments, Aaron's staff and the vessel containing manna. The Holy of Holies also contained the Cherubim, winged creatures supporting the throne of God and acting as guardian spirits. The Holy of Holies and the Ark of the Covenant were kept in the Temple. Old Testament priests were allowed to access the Tabernacle and the Ark of the Covenant during feasts and sacrifices. The Ark of the Covenant and the Holy of Holies are similar to the Tabernacle in the New Testament. The Old Testament Temple and tents of worship housed the sanctuary. The sanctuary held the altar of incense, which contained ten candlesticks. The sanctuary also contained the table of loaves, which was also referred to as the bread of the presence. The frankincense that New Testament priests spread toward the congregation during the Mass commemorates the Old Testament incense. The Catholic Church has an altar table at which the priest prepares the Eucharist before distributing it to the congregation. This table holds the Eucharistic bread and the wine that symbolizes the blood of Christ. The candles keep burning on the sanctuary throughout the church service. These candles are similar to the ten candlesticks of the Old Testament; the New Testament church, however, burns two candles, while the Old Testament used ten candlesticks to represent the Ten Commandments. The table for the loaves in the Old Testament is similar to the Eucharist table in the New Testament. Another component of the Old Testament Temple was the Vestibule. The Vestibule was the bronze altar of sacrifice. The Vestibule contained the bronze sea of water for purification

Case Study on Profitability Assignment Example | Topics and Well Written Essays - 1500 words

Case Study on Profitability - Assignment Example. Thus the decrease in operating profit margin indicates that the operating expenses of Deutsche Brauerei rose faster than its sales, which can be clearly seen from exhibit 1: a 48.4% increase in sales against a 49.5% increase in operating expenses. In turn this means Deutsche Brauerei now has less flexibility in determining prices, and therefore less safety in tough economic times. The ratio of income taxes to earnings before taxes has also increased, to 39.5% in 1999 and 39% in 2000 from 33.8% in 1997 and 34.5% in 1998. From exhibit 1 we can see that taxable income increased steadily over the years (which can be explained by the unstable economic situation in Ukraine), while earnings before taxes grew more slowly. Consequently return on sales, which shows the operational efficiency of the company by dividing earnings before tax by total sales, decreased from 4% in 1998 (before the default) to 2.8% in 1999, recovering to 3.2% in 2000. Still, shareholders' equity continues to increase, shifting the return on equity ratio up to 10.3% in 2000 - the highest figure in four years; the business looks good from this perspective. Return on net assets, which is equal to net income divided by fixed assets plus net working capital, also shows signs of healthy performance, increasing to 8.4% in 2000 from 6.9% in the previous year. The return on assets ratio has returned to its 1998 value of 4.7%, indicating that the company puts its assets to good use in restoring profitability after the economic breakdown in the former USSR region. As can be seen from exhibit 1, sales in Germany have been increasing slowly over the last four years, while the main stake was placed on the Ukrainian market. Therefore changes in the profitability of DB are greatly affected by the local economic climate, which was very unstable in these years. Although it experienced difficulties in generating profit, DB made a successful recovery from the economic difficulties of 1998. Leverage. Leverage ratios determine the company's long-term solvency. "Financial leverage is the name given to the impact on returns of a change in the extent to which the firm's assets are financed with borrowed money." (Scott, 1998) For instance, the debt/equity ratio shows how much money the company can safely borrow over the long term, and it is measured by dividing total debt by total equity. The debt/equity ratio for DB has fallen from 72.3% in 1997 to 66% in 2000. The company borrowed funds in 1997 to invest in the Ukrainian market, which is the reason for such a high debt/equity ratio in 1997. It is decreasing along with the debt/total capital ratio (long-term debt divided by long-term debt plus shareholders' equity), which was 39.8% in 2000 compared to 41.9% in 1997. This is a good sign of increasing long-term solvency. The EBIT/interest ratio, which shows how many times the company can cover its interest obligations, was rather stable during the last three years (4.7 in 1999 and 2000, 4.8 in 1998), having increased significantly from 3.8 in 1997. The company significantly decreased its debt in 1998, which was reflected in the increased solvency of the last three years. Asset Utilization. The efficiency of the business is measured by asset usage ratios. Asset utilization ratios are especially important for internal monitoring of performance over multiple periods, serving as warning signals or benchmarks from which meaningful conclusions may be reached on operational issues (Block and Hirt, 2005). Asset turnover is one of the most important
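The ratio arithmetic described above is simple enough to verify by direct computation. Below is a minimal Python sketch of those formulas; the function names and the sample figures are illustrative assumptions for this post, not numbers taken from the Deutsche Brauerei exhibits.

def leverage_ratios(total_debt, long_term_debt, equity, ebit, interest_expense):
    """Leverage and coverage ratios as defined in the passage above."""
    return {
        # total borrowings per unit of owners' capital
        "debt_to_equity": total_debt / equity,
        # share of long-term capital financed by long-term debt
        "debt_to_total_capital": long_term_debt / (long_term_debt + equity),
        # how many times operating profit covers the interest bill
        "ebit_to_interest": ebit / interest_expense,
    }

def profitability_ratios(sales, ebt, net_income, fixed_assets, net_working_capital, equity):
    """Profitability measures: return on sales, equity, and net assets."""
    return {
        "return_on_sales": ebt / sales,
        "return_on_equity": net_income / equity,
        "return_on_net_assets": net_income / (fixed_assets + net_working_capital),
    }

# Hypothetical figures, purely for illustration (not from exhibit 1).
lev = leverage_ratios(total_debt=13_000, long_term_debt=7_000,
                      equity=19_700, ebit=4_700, interest_expense=1_000)
prof = profitability_ratios(sales=110_000, ebt=3_500, net_income=2_100,
                            fixed_assets=20_000, net_working_capital=5_000, equity=19_700)
print(f"debt/equity:        {lev['debt_to_equity']:.1%}")
print(f"debt/total capital: {lev['debt_to_total_capital']:.1%}")
print(f"EBIT/interest:      {lev['ebit_to_interest']:.1f}x")
print(f"return on sales:    {prof['return_on_sales']:.1%}")
print(f"return on equity:   {prof['return_on_equity']:.1%}")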

Thursday, September 26, 2019

Being Polycultural Essay Example | Topics and Well Written Essays - 1250 words

Being Polycultural - Essay Example. He discusses his own problems as a child who comes from such a lineage. He also brings a new point of view regarding various cultures and the impact of each on the others. With the help of Robin Kelly's article, we are going to analyse and discuss the acceptance of children of mixed parentage in Western society and how Kelly's concept of being polycultural has helped him in his struggle for acceptance. Today we talk about child psychology and about not hurting the young, vulnerable hearts of kids and young children. But whenever we talk about somebody's lineage and parentage, do we realise that the discussion can leave a permanent scar on the mind of the kid? Upbringing of these children in the society: It becomes a challenge for the parents to keep these prejudices at bay when they bring up their children in as neutral an environment as possible. However, the tragedy starts when the children grow and start realising that they are somehow different from the people around them. The difference is only the colour of the skin they carry, but they are constantly made uncomfortable in their own skin. Robin Kelly has described his life as that of a normal American teenager: "In Harlem in the late 1960s and 1970s, Nehru suits were as popular—and as 'black'—as dashikis, and martial arts films placed Bruce Lee among a pantheon of black heroes that included Walt Frazier of the New York Knicks and Richard Roundtree who played John Shaft in blaxploitation cinema. How do we understand the zoot suit—or the conk—without the pachuco culture of Mexican American youth, or low riders in black communities without Chicanos? How can we discuss black visual artists in the interwar years without reference to the Mexican muralists, or the radical graphics tradition dating back to the late nineteenth century, or the Latin American artists influenced by surrealism?" (Kelly, page 2). In this paragraph he does not wish to describe himself as a person who stands out because of his colour. By quoting common references from every person's childhood he establishes his connection with them very strongly. He even uses terms and phrases which are used by everyone else. Does that indicate his desire to connect with everyone around him? To be accepted as a normal person who probably thinks the same, or is brought up with the same ideologies, as any other person in America? Robin Kelly has also described the suffering of his younger brother because of the question regarding his mixed culture. Perhaps the most sensitive and protected member of the family, his younger brother might have been hugely affected by the need for constant approval and acceptance from his friends. Finally his brother gave up his struggle and chose to settle down in a completely different culture on the other side of the world. This is a sort of voluntary resignation from the situation. Even if he might have gone for his personal benefit, he might have thought it better to move rather than face a questioning look on the faces of the people around him. This might be the most difficult decision he has made in his life. Robin Kelly's sister got her name changed because of the same question, "What are you?" She tried to solve this problem her way by changing her name. Everyone in the family was terribly affected by the question and every one of them tried to find his or her own way of dealing with it. How difficult it might be for the parents to create a neutral and believable situation for a healthy and normal upbringing!
Refusing acceptance and denying existence: Examples like Robin Kelly are abundant in western society. Many authors have written about mixed parentage and the reaction of society, mostly adverse, to it, as in the book 'Life on the Colour Line'

In what respects has sovereignty been redefined in the post-Cold War Essay

In what respects has sovereignty been redefined in the post-Cold War era - Essay Example The fierce cold war between the United States and the former Soviet Union created a great deal of tension across the world during that period. The superiority of these political powers forced other countries to align with one of them for their safety and security. At the same time, such polarization forced those countries to formulate their foreign policies and economic activities strictly in accordance with the interests of the superpower to which they were tied. In other words, during the cold war era, countries which sought protection from either of the superpowers lost their sovereignty and were forced to support all the actions of the superpower under which they were aligned or polarized. Thus the individuality, freedom, and identity of such countries were in question during the cold war era. Many changes happened in international politics during the post-cold war era. Many countries which were once sidelined under the banner of these superpowers started to breathe free air and experience freedom. For example, countries like Poland, Bulgaria and Romania were under the Soviet banner during the cold war era, and after the collapse of the Soviet Union these countries started to embrace democracy and experienced the value of human rights and freedom. Such countries started to speak in their own language on international political affairs instead of speaking in the language of communism or the Soviet Union. They restored their sovereignty and individuality.

Staples.com Essay Example | Topics and Well Written Essays - 750 words

Staples.com - Essay Example Would you pursue wallet share or market share as the first priority? Or would you pursue both? Staples.com's strategy was very timely, as the only online competitor they had was Office Depot, and as per Forrester Research, online sales of office supplies were expected to reach $65 billion by 2003. Their cohesive marketing campaign aimed at offering multiple channels so they could reach more customers. They were realistic in their approach as far as advertising budgets were concerned, despite having ample capital. They did not want to follow what others were doing and wanted to use traditional, cost-effective direct marketing strategies. They were not following a 'get big' strategy because they differed in their marketing approach. They had a balanced approach. Lewis' strategy to first capture market share holds more importance. To expand and achieve the target growth, competing with mass discounters and mass merchants would not be commensurate with the image that they were trying to build. Once the market share is captured, wallet share would follow automatically. Staples.com should expand into the SOHO services market because for small businesses it is time- and cost-effective to find all services from one source. To offer services like intranet, telecommunications, payroll and other accounting services, it would be better for Staples.com to tie up with external service providers. Creating the services themselves would require more manpower and there is the possibility that their focus might shift from their primary goal. They can oversee the services to ensure quality and professionalism are maintained. Staples.com should not compete with mass discounters and merchants. This adversely affects the image of the company. They should aim at capturing market share, which would help them to meet their growth targets. If they start offering discounts to match the competition, they might have to compromise on services. Besides, the

Wednesday, September 25, 2019

US Army and the Cyber Domain Research Paper Example | Topics and Well Written Essays - 1500 words

US Army and the Cyber Domain - Research Paper Example It is noteworthy in this context that cyberspace has actually enhanced the operational efficiency of the US Army and has increased the convenience of exchange of information. Overall, development has been induced into the operation of the military forces. However, the introduction of technology has also increased the dependency of the department on 'cyberspace', which has at times proved to be crucial for the army. Numerous flaws and loopholes persisting in the system are further examined that might in due course become fatal to army operations, making it more important for the department to have continuous monitoring of the issue2. Correspondingly, this research paper briefly defines the term 'cyber', as used by the military at present, states the roles that the military should be taking in order to extend its responsibilities into the domain, and subsequently structures a rough layout of the future mode of operation. Cyber, popularly called cyberspace, is an electromagnetic domain which serves as a spectrum to store, modify and exchange data through virtually networked association3. At present, the use of cyberspace has increased enormously, with chances of massive disruption from unauthorised intervention, further raising the risk that the capabilities projected can be seized4. As the size of cyberspace increases, the complexity of the network is also increasing manifold. The gap in understanding the terminology of cyberspace has in turn increased its vulnerabilities and undermined its effectiveness. Modelling and simulation have also become important facets which need to be explored and refined as dependency on the networking domain grows. As the domain is connected with thousands of networks, healthy functioning of the domain has become a myth5. To be noted in this regard, the Department of Defence, which is one of the important departments of the government, has become highly

Statistics Assignment Example | Topics and Well Written Essays - 500 words - 5

Statistics - Assignment Example Print material is also more credible, since it is generally easier to circulate material on the internet than to print it within a book. Moreover, print material is timely, making it more credible than internet material. Credibility of a source is determined by the author and the publisher. Renowned publishers, particularly those associated with reputable universities, are considered to be reliable sources. This is because reputable authors and publishers are considered to have better credentials, making the source more credible. Credibility mainly depends on the author's background information, which ought to display evidence of being credible, truthful and knowledgeable. Poor credibility is indicated by the tone, style and competence of the writing, as well as by anonymity, lack of quality control, negative metainformation, and poor grammar. Accuracy mainly relies on the date of the prevailing information. It ought to be timely, comprehensive and audience-focused. Lack of accuracy in internet information is indicated by a missing date on the underlying document, vagueness, very old dates on information that changes swiftly, and a one-sided presentation of ideas. Reasonableness mainly entails analyzing the information with regard to the fairness, objectivity, moderateness, and consistency of the information in the underlying source. Lack of reasonableness is indicated by an unbalanced tone, over-claims, sweeping statements of unnecessary significance, and conflicts of interest. Support is indicated by the statistics and corresponding claims of fact in the underlying source. A poorly supported source is one that presents statistics without identifying their source, lacks source documentation in cases where documentation is vital, and lacks supporting sources that carry the same information. The

Tuesday, September 24, 2019

The Renaissance Man in Michelangelo Essay Example | Topics and Well Written Essays - 750 words

The Renaissance Man in Michelangelo - Essay Example Discussing the three works --- their similarities and differences --- will help evaluate whether Michelangelo was indeed able to present great talent in several different art realms. The Renaissance Man in Michelangelo Renaissance is a big part of the history of art. Here, art was reborn through the rediscovery of the Greco-Roman tradition. The word renaissance itself came from the term "la rinascita," meaning "rebirth" (Rubin, 2006, p.563). During this period, pieces of art developed from the supernatural to the natural, as man's expansion of scientific knowledge progressed. Study of human beings and scientific research caused societies to believe more in the self (Rubin, 2006, p.565-6). The Renaissance era spanned two centuries, with historians dividing it into three periods --- Quattrocento or Early Renaissance, High Renaissance, and Mannerism. If modern artists today take pride in being a master of a certain field, Renaissance people are considered great if they are able to present great talent in several fields (Rubin, 2006, p.574-5). ... Even though his talent is mostly considered as "only" within the realm of the arts, the mastery he showed in every piece of work in every discipline is enough to make one understand that he is no ordinary "Jack of All Trades" (Emison and Chapman, 2006, p.508). He is considered a part of the High Renaissance period because there is naturalism in his works, there are no halos in religious pieces, and there is a balance between movement and stillness. There is also a great sense of harmony and balance in his works, both characteristics of Renaissance work. Furthermore, the Renaissance period is characterized by private or government funded art commissions, as compared to the earlier common practice of art commissions by religious sectors (Rubin, 2006, p.576-7). The "Statue of David" is among Michelangelo's sculptures, and perhaps can be considered one of the most famous sculptures in the world. The Guild of Wool Merchants commissioned Michelangelo to create the "Statue of David." As mentioned above, private commissions became the norm during the Renaissance period, particularly for sculpture, especially due to its high cost. This sculpture breaks away from the traditional way of presenting David (Allen, 2001, p.18). In his sculpture, Michelangelo does not show David as a winner, but rather as a youth just about to gain power right before the fight. This technique is also a characteristic of the Renaissance period, and Michelangelo brilliantly created the image of balance between stillness and movement by creating a sculpture that is both calm and smooth, yet dynamic and combat-ready (Allen, 2001, p.19). The tendons and the muscles of David's

Supreme Court Holdings Essay Example | Topics and Well Written Essays - 500 words

Supreme Court Holdings - Essay Example United States, 365 U.S. 505, 509-512 (1961)."2 "[This] however, [which held] that, when the Government does engage in physical intrusion of a constitutionally protected area in order to obtain information, that intrusion may constitute a violation of the Fourth Amendment even if the same information could have been obtained by other means."3 The search and seizure in U.S. v. Karo was highly unusual. However, it was held that "[t]he evidence seized in the house in question, however, should not have been suppressed with respect to any of the respondents."4 "The information that the ether was in the house, verified by use of the beeper without a warrant, would be inadmissible…invalidat[ing] the search warrant…"5 So, even though the search warrant was eventually inadmissible, there was enough evidence pertinent to the case which was not tainted, which allowed the defendant finally to be prosecuted. "[This premise won't] be violated,…no warrants shall issue, but upon probable cause, supported by oath or affirmation, and particularly describing the place to be searched, and the person or things to be seized."7 To this writer, what is particularly disturbing is that the Fourth Amendment does not guarantee completely against unreasonable searches and seizures, as the plaintiff in Knotts and the defendant in Karo were both subject to actions basically without warrants. Further, what is more cogent is that one needs to impress upon those in power that the authorities must "obtain a search warrant from a magistrate by showing the need for it, and to conduct themselves according to law. This is an important guarantee of the right of privacy."8 To the naked eye it seems that Constitutional rights were violated. In both cases, sufficient evidence was supposedly later found to corroborate with not having invaded personal privacy—and if personal privacy was invaded, Constitutional

Monday, September 23, 2019

Hypothesis and Research Question Case Study Example | Topics and Well Written Essays - 1000 words

Hypothesis and Research Question - Case Study Example For this paper, I expect that more will be uncovered to help understand the benefits, the consequences, and, most importantly, the challenges of this new, addictive technological way of communicating. It is interesting to note that even though the young generation in particular tends to enjoy the benefits of Facebook and Twitter most, the literature review has focused on the dangers of being addicted to social media. This has resulted in demands for increased accountability and regulation of the providers of the two major social networking sites. Early reviews or studies suggested that online communication had negative effects on the user by reducing face-to-face contact and increasing the level of loneliness. More recent study shows that social media addiction leads to a new generation with an egocentric approach to life and overdependence, and the addiction only acts to compound these eventualities (Zemmels, 15). Through the use of the survey method and the content analysis method of research, I predict that a huge percentage of Facebook and Twitter addicts tend to spend less than an hour on the sites daily. The most interesting part is what the media addicts do to stay online even though they may have had bad experiences. Such experiences are managed through means such as blocking the disturbing person from making contact, indicating that such experiences make them consider mechanisms to avoid bad experiences in future in order to continue with the same activity on the sites. Content analysis: It entails a mixture of quantitative and qualitative research methods that focus on messages, enabling researchers to quantify information by using frequency counts and percentages. Any kind of content can be analyzed, including focus groups, television programming, interviews, editorials, and news releases. The population that would be used for the research is generally social media users who increasingly use Facebook and

How did Ford Motor Company Successfully Turn Around its Business Research Paper

How did Ford Motor Company Successfully Turn Around its Business During the Recent Recession - Research Paper Example In these circumstances companies have also resorted to price wars that have further eroded the profit margins of various organizations. The present study analyzes the case of Ford Motors and the business strategies that were adopted and implemented by the organization during the period of economic recession. According to a report, the recession in the year 2009 led to a decline in new registrations of about 24.8 percent in a month as compared to the same month of the previous year. In addition to cars, vehicles across other categories like trucks and vans also reported dips of about 31.4 percent and 49.2 percent respectively. This had severe implications for car manufacturers as they started reporting lower income, which led to job cuts and other cost control measures besides putting all expansion plans on hold (IMI Research Department, "Summary"). Issues Ford Motors is a US-based automobile manufacturing organization established in 1903 by Henry Ford; the company is engaged in the manufacture of cars, trucks, SUVs and other vehicles. The company is present across all the major markets of the world and headquartered in the USA. Its stock is listed and actively traded on the New York Stock Exchange (Ford Motors Corporation, "Our Company"). The economic recession had a severe impact on the business prospects and profitability of the company. The effect of the recession on the fortunes of the company can be judged from the fact that the company applied for a $9 billion bailout from the US government in the form of short term and working capital loans so as to ensure that the company was sustainable. However, the company was better placed than any of its competitors, as it was able to maintain its ground even without the bailout package (The New York Times, "Background"). Management Strategies The period of economic recession was among the worst periods seen by the company, as it was the period that saw the lowest sales in ten years. The absence of credit facilities and pressure on price margins created considerable issues for the organization. In spite of these issues, Ford Motor Corporation was able to remain sustainable even without a bailout package such as those given to the company's top competitors, namely General Motors and Chrysler, which were on the verge of closure and bankruptcy. The company fared better than most of its competitors and also overtook Toyota Motors as the number one automobile company in the US market in the year 2010. Behind this successful management of a stressful period lies a mix of effective strategies guided by a visionary leader and a good management team that successfully used the best financial, HR and marketing strategies to make the company stronger even in one of the most turbulent periods in history (The New York Times, "Background"). One of the reasons for the successful turnaround of Ford Motors Corporation was the successful marketing strategies implemented by the organization. The company unveiled a product strategy that involved launching smaller cars and economical versions rather than focusing on premium products. In this regard the company chose to keep its focus on sub-compact cars like the Fiesta, which was economical. This strategy paid off during the times

Sunday, September 22, 2019

Poetry Analysis Essay Example | Topics and Well Written Essays - 750 words - 2

Poetry Analysis - Essay Example 'Your absence distributed itself… When I sat down in the armchair' The silent memories of the departed soul have made a strong impression on the poet, who was pregnant at that time. She has expertly used imagery in the text to capture the soul-stirring emotional gap that is evident in the place and time that were once inhabited by him. 'Friends and relatives kept coming, trying to fill up the house… the green hanger swang empty/ and the head of the table/ demanded a plate'. People and acquaintances come voluntarily to visit the place and pay homage to a person who is so patently loved and who is present despite his physical absence. The poet's use of figurative speech has correctly reflected the acute sense of loss felt by one and all. Another very important feature of the poet's text is that she has beautifully associated the death with the beginning of the life that is growing inside her body. According to her, the inevitability of death and inconsolable loss has brought forth the ultimate truth of the universe. Death is final and one is totally helpless in front of it. The poet has compared this feeling of helplessness to that of the child who is still growing inside the womb of the mother and is totally dependent on her for his survival. Indeed the allegory of death and life is the philosophical reminder that it is a cycle that must be encountered by all. 'I lay down in the cool waters/ of my own womb/ and became the child/ inside, innocuous/ as a button, helplessly growing'. The stark realities of life are beyond our control, and the poet has succeeded in expressing this philosophy through simple words by ending the poem with 'I slept because it was the only/ thing I could do. I even dreamed/ I couldn't stop myself'. 'Those Winter Sundays' by Robert Hayden is a poem that shows that death has a strange way of acknowledging love that

Attachment in the United States and Amae in Japan Essay Example for Free

Attachment in the United States and Amae in Japan Essay Culture enables people to adjust to their physical as well as social environment. Culture enables the members of society to develop ways of coping with the exigencies of nature as well as ways of harnessing their environment. People also have to learn to relate themselves with others in order to survive. As Schwartz (1998:48) pointed out, the culture of any society represents an adaptation or adjustment to the various conditions of life, including the physical, social, and supernatural environment. No culture is completely static. Every culture is in constant flux, and the changes represent adjustments to the environment. Culture changes at different rates. The changes occur as a result of discoveries, inventions, and cultural borrowing. In some areas, control of the natural environment has been pursued to a point where the society has become endangered. Natural resources, such as bodies of water, forests, plant and animal life and minerals, have been so exploited that the environment is close to destruction. The acceptance of change depends on the exposure of the members of society to new ideas and ways different from their own, and on their opportunity to accept such ideas and ways through diffusion. The United States and Japan belong to different continents and locations and have different peoples; however, these two countries are similar in some ways. Their people adapt to and practice different cultures, and each country's culture shapes the behaviors and characteristics of its people, making it noticeable that a given group of people comes from the United States or from Japan. Thesis Statement: This study will conduct a cross-cultural psychology comparison of Attachment in the United States and Amae in Japan, and describe their functions, similarities and differences. II. Discussion A. Its culture, similarities and differences o United States There are many groups of people residing in the United States; whites made up 83.2 percent of the population; blacks 11.7 percent; American Indians, Eskimos, and Aleuts 0.6 percent; Asian and Pacific Islanders 1.5 percent; and other nonwhites 3.0 percent. The nonwhite groups are concentrated in various parts of the country. Freedom in matters of education and the right of every child to have an education are basic principles in the United States. Unlike many other nations, the United States does not have a central or federal system of education; establishing and administering public schools is one of the powers exercised by each state (Fritsch, 2001). The state, in turn, delegates much of this responsibility to local school districts. Literacy in the United States is high, just as in Japan. In comparison to Japan (which has only two major religions), no other country in the world has a greater variety of religions, communions, denominations, and sects than the United States. More than 220 religious bodies report membership figures. Nearly all branches of Christianity and almost all Protestant denominations are represented. Japan and the United States hold the same views when it comes to religion (Katzman, 2003). The United States also believes that religious freedom and separation of church and state should be maintained. Government cannot interfere with religion or show preference for one religion over another. It cannot set up an official, or established, church, nor give support to any religion or to all religions.
In the early days of the republic, United States artists and writers were generally regarded as inferior to those in Europe. By the end of the 19th century, however, an independent national literature of high quality had been established by renowned writers (O'Neill, 2004). Music in the United States was strongly influenced by European music, and study in Europe was considered a necessary part of musical training far into the 20th century. America's most influential contribution to music was jazz, a form originated by blacks and based on African rhythms. The musical, which evolved from burlesque and operetta, was another American innovation. For many years, architects in the United States simply adapted European styles to the American climate, landscape, and materials. The favorable economic position and amount of leisure enjoyed by the people of the United States give them unusual opportunities for recreation. Paid vacations became the rule for most industrial and office workers. The most popular outdoor spectator sports are football and baseball. Horse racing and automobile racing have large followings (Kurelek, 2005). o Japan The Japanese people are largely of Mongoloid stock, but little is known about their specific origin. Successive groups of migrating Asians from the mainland are believed to have settled on the islands some time before 300 A.D. Confronting them were the islands' earliest known inhabitants—the Ainus. The Japanese people have developed from the mingling of these different ethnic groups. Only a few hundred full-blooded Ainus remain, on Hokkaido. Japanese culture is partly of Chinese origin and partly indigenous, for the Japanese adapted and did not merely imitate the culture of the mainland. Since the middle of the 19th century, Japan has been influenced more by the culture of Western countries than by that of its neighbors (Morton, 2004). Adoption of many Western ways produced sharp contrasts between the new and the old. Buildings and clothing, for example, are now seen in both traditional and Western styles. Among the forces that have helped to mold the Japanese character are Buddhist, Shinto, and Confucian religious beliefs, the effects of a long feudal period, and the influences of the Japanese industrial revolution. With industrialization came a change from rural to urban living. American influences have been particularly strong since World War II (Smith, 2005). Moreover, Japanese art has been strongly influenced by Chinese art. From the mainland came the technique of ink painting on silk and the Buddhist influences in sculpture and painting. Flourishing throughout Japan are noh, classical plays in which the actors wear masks depicting their character; bunraku, puppet plays; and kabuki, drama with stylized chanting and dancing. An important part of Japanese culture is the tea ceremony, a highly formal ritual of which there are many variations. As a way of entertaining guests, it is regarded as the best expression of traditional etiquette. Some of the traditional arts—especially classical Japanese music and dance and the tea ceremony—are part of the repertoire of geisha, female entertainers who perform for groups of men. In addition, the family is a traditional and strong institution in Japan. It has a formal structure with authority vested in the male head of the family. The wife is expected to be subservient. Children learn discipline and their respective roles in the family at an early age.
Sons are given preference over daughters, and the eldest son is superior to all others (Elkin, 2004). However, many of the more repressive aspects of the family, such as parents determining marriages, have weakened since World War II. Japanese homes are noted for their simplicity. Nearly all are built of wood. In many homes, paper-covered wooden frames, called shoji, are used for windows and doors. Being light and easily moved, they allow much of the house to be opened to the out-of-doors. Some homes are adjoined by landscaped gardens. Rooms usually have thick mats, called tatami, on the floor and very little furniture (Elkin, 2004). With regard to language and religion, the Japanese language is unrelated to other Oriental tongues. However, it is written in characters originally adapted from Chinese writing. Furthermore, like the United States, the Japanese constitution provides for freedom of religion and separation of church and state. The two major religions are Shinto and Buddhism. Many Japanese adhere, in varying degrees, to both. With regard to education, six years of elementary education and three of lower secondary school are free and compulsory for children 6 to 15 years of age. At the three-year upper secondary schools, tuition is charged. Education in Japan is highly competitive, and admission to upper secondary school and to college is determined by rigorous entrance examinations. As a result, many Japanese children spend their after-school hours attending jukus, "cram" schools that specialize in preparing students for entrance examinations and other school tests. Japan has virtually no illiteracy (Christopher, 2003). III. Conclusion In conclusion, as I studied the two different cultures, I realized that the United States and Japan have some similarities in their origins. Japan was most influenced by Westerners and its origin was contributed to by other indigenous groups, and so it is with the United States. Everything that we can see in the Japanese and American cultures has already been modified by other influences. However, in spite of the strong adoption of different cultures, the Japanese have remained family-oriented. They value the essence of having a united family; thus, a well-structured family role is formed so that each member can have a function. The United States, by contrast, was very much influenced by the European settlers and based its competencies on European countries. Its culture is more focused on development, to the extent that internal competencies suffer. I would say that Japanese culture is superb compared to that of the United States because Japan has been able to maintain its traditional ways in spite of economic development. Reference: 1. Fritsch, A. J. (2001). The Ethnic Atlas of the United States. Facts on File. 2. Katzman, D. M. (2003). Plain Folks: The Life Stories of Undistinguished Americans. University of Illinois. 3. O'Neill, Thomas. (2003). Back Roads America: A Portfolio of Her People. National Geographic Society. 4. Kurelek, William. (2005). They Sought a New World: The Story of European Immigration to North America. Tundra Books. 5. Morton, W. S. (2004). Japan: Its History and Culture. McGraw-Hill. 6. Smith, R. J. (2005). Japanese Society: Tradition, Self, and the Social Order. Cambridge University. 7. Elkin, Judith. (2004). A Family in Japan. Lerner. 8. Christopher, R. C. (2003). The Japanese Mind: The Goliath Explained. Linden Press.

Saturday, September 21, 2019

How Accurate Is It to Say That the Black Power Movements Essay Example for Free

How Accurate Is It to Say That the Black Power Movements Essay In some ways I agree that the Black Power Movements of the 1960s achieved nothing for Black people, because by 1968 little had changed, and it is therefore easy to claim that the Black Power movements achieved nothing and in fact had a negative impact on black Americans. However, in some ways I disagree, because the Black Power movements in the early 1960s coincided with the peak of success for the Civil Rights campaign, such as the freedom cities of 1966 or the Free D.C. movement. Firstly, I agree that the Black Power Movements achieved nothing for Black people: relations between King and other Civil Rights groups were never entirely secure, and he was often accused of taking credit for the efforts of others, for example in the student sit-ins of 1961. He was criticised for a cynical use of children in the Birmingham campaign of 1963 and for cowardice in halting the first Selma March. These attacks reflect internal rivalries that had nothing to do with Black Power. They increased after 1966 when he moved his focus to the north. The Chicago campaign of 1966 was a dismal failure and also revealed a cultural gap between the respectable bible-belt leaders of the south and the ghetto-based youth of the north, who found Malcolm X a more inspiring figure. The whole situation was made much worse by the war in Vietnam, which diverted money and media attention and created a widening gap between black and white communities. Many black people resented having to fight for a country that valued them so little, while white public opinion saw the refusal of some to serve, like Mohammed Ali, as unpatriotic. The most important point, however, is that once legal equality had been achieved in 1965 and the focus shifted to the social and economic effects of long-term discrimination, King's methods were ineffective. Secondly, the Chicago campaign and the Albany movement; thirdly, the Memphis sanitation workers' strike and the Mississippi Freedom Summer. On the other hand, the freedom cities aimed to bring 'home rule' to the black community of Washington D.C. The project started with a demonstration against the way the local schools were administered, and towards the end of 1966 the black citizens of Washington D.C. had won the right to elect their own school boards. SNCC gained $3 million worth of government funding to improve community policing. SNCC initiated similar projects elsewhere: in New York, for example, the campaign saw black people take control of the Intermediate School in Harlem, and in Mississippi a Child Development Group was set up, which raised $1.5 million from the churches and the federal government in order to set up 85 Head Start centres to support young children. Furthermore, the March on Washington was a massive success; groups such as the SCLC, SNCC, CORE and the NAACP were involved. It also commemorated the 100 years since the Emancipation Proclamation, and the campaign was initially designed to help pass a Civil Rights Bill. 250,000 people marched to the Lincoln Memorial to hear King's famous 'I have a dream' speech as well as other figures of the Civil Rights Movement. The March drew a vast amount of media attention. The March ensured support for new civil rights legislation which gave the government power to desegregate southern states. It presented the civil rights movement as a united front.
Additionally, the Birmingham campaign aimed to desegregate the city's largest shopping areas, schools and public parks, as well as demanding an end to racial discrimination in employment. 'Bull' Connor obtained a court injunction against demonstrations in certain precincts to weaken the protests. On the 3rd of May the police turned high-pressure fire hoses on demonstrators and arrested and imprisoned 1,300 children, which caused a media frenzy; Kennedy was sickened by the images of police violence from Birmingham. The significance of the campaign was that the department stores were desegregated and the racial discrimination was ended. The Greensboro sit-ins were a success; they aimed to desegregate public places such as restaurants and swimming pools. In February 1960 the sit-in escalated to 300 students by the fourth protest, and it became highly influential as there were similar protests, like watch-ins in cinemas, so that by the start of 1961 over 70,000 people, black and white, had taken part in demonstrations. The sit-ins brought a mass of media attention, which increased support for the civil rights campaigns. By the end of 1961, 810 towns had desegregated their public places. Woolworths' sales decreased by a third during the campaign, which showed the economic power of black people. Finally, the Freedom Rides were designed to turn the de jure victories of Morgan v. Virginia and Boynton v. Virginia into de facto desegregation of interstate transport and interstate transport facilities, and were set up by SNCC and CORE. The significance of the Freedom Rides was that they showed that Kennedy supported the civil rights movement, and they marked a new high in cooperation within the civil rights movements. The Poor People's Campaign aimed to create a coalition big enough to solve the social and economic problems identified during the Chicago campaign. In conclusion, Black Power declined very quickly in the late 1960s because its organisation was very poor and it had little money to support itself. It also declined because the government preferred King's peaceful methods to the violence and hatred of Black Power. Thus it seemed as if Black Power had not achieved anything of real importance for black people, and was a factor in the ending of the civil rights movement as a whole. However, it can be said that Black Power did manage to achieve something for black people as a whole. Black Power leaders did try to help the people in the inner-city ghettos, and they did increase black pride and a sense of Black Nationalism. Malcolm X in particular was very important in raising the morale of many black people, and became a hero to young black people in the USA and around the world. The emergence of the Black Power movements in the early 1960s coincided with the peak of success for the Civil Rights campaign, the legislation of 1964-65. Thereafter, the focus of campaigns had to move to the practical issues related to social and economic deprivation, and the ability to exercise the rights that had been gained. By 1968 little had changed, and it is therefore easy to claim that Black Power movements achieved nothing, and in fact had a negative impact on black Americans. It is hard to deny that the Black Power movements had a damaging impact in the 1960s. The preaching of Elijah Mohammed and later Malcolm X that integration was impossible and undesirable, that white people were devils and Christianity just a legacy of slavery, created a mirror of white racism that could only be divisive.
They rejected the support of white liberals and divided white from black. They subjected integrationist leaders like Martin Luther King to campaigns of personal abuse, calling him a hypocrite, a coward and an Uncle Tom. They even indulged in vicious internal feuding, such as the assassination of Malcolm X by members of the Nation of Islam in 1965. Incidents of violence, such as attacks on white people, the race riots of Harlem in 1964 and Watts in 1965, damaged the black community and created a white backlash. This threatened the promised government expenditure on housing, schools and job creation under the Great Society. As casualties from Vietnam increased, they campaigned against the draft and argued that black youths should not serve, infuriating an increasingly patriotic public and media. The existing Civil Rights movement disintegrated, as the student organisations led by SNCC under Stokely Carmichael adopted Black Power symbols and slogans, and refused to co-operate with Martin Luther King’s SCLC. The government and many white Americans saw the black communities as ungrateful, and King as a spent force. The links that had helped him to gain reforms and investment disappeared, and nothing of significance was achieved for black Americans after 1966. The emergence of Black Power was totally negative. 24. In many ways, however, this argument is over-simplified. The problems faced by the Civil Rights movement in the 1960s had begun to surface before the Black Power movements developed, and could be said to have contributed to their growth. Relations between King and other Civil Rights groups were never entirely secure, and he was often accused of taking credit for the efforts of others, for example in the student sit-ins of 1961. He was criticised for a cynical use of children in the Birmingham campaign of 1963 and for cowardice in halting the first Selma March. These attacks reflect internal rivalries that had nothing to do with Black Power. They increased after 1966 when he moved his focus to the north. The Chicago campaign of 1966 was a dismal failure and also revealed a cultural gap between the respectable bible-belt leaders of the south and the ghetto-based youth of the north, who found Malcolm X a more inspiring figure. The whole situation was made much worse by the war in Vietnam, which diverted money and media attention and created a widening gap between black and white communities. Many black people resented having to fight for a country that valued them so little, while white public opinion saw the refusal of some to serve, like Mohammed Ali, as unpatriotic. The most important point, however, is that once legal equality had been achieved in 1965 and the focus shifted to the social and economic effects of long-term discrimination, King’s methods were ineffective. This means that by 1966, methods of campaigning to improve conditions for black people had to change, and the Black Power movements did offer some alternatives. When the Black Panthers set up community projects and policed the housing estates of Chicago, they offered a more direct and practical form of help. More generally, Black Power offered black people a sense of their own culture and pride in their identity. The late 1960s saw changes in music, fashion and style that celebrated black identity rather than attempting to look like whites, such as the Afro hairstyles, the growth of a new soul music and the later development of hip-hop and rap. 
The use of Black Power salutes by American athletes offended many whites, but it drew the attention of the world to the continuing levels of discrimination suffered by many black Americans. It is difficult to measure the results, but it can be argued that by helping to maintain attention on the problems and demanding change, the Black Power movements helped the black communities to keep fighting for better conditions. By comparison with the gains made through 'peaceful' protest, the impact of Black Power was mixed and its achievements limited, but to claim that it achieved nothing for black people is an exaggeration.

Design and Planning of 2G, 3G and Channel Modelling of 4G

Design and Planning of 2G, 3G and Channel Modelling of 4G

Chapter 1 Fundamentals of Cellular Communication

In this chapter, all the background knowledge required for this project is discussed.

1.1 Cell
The area covered by a single BTS (base transceiver station) is known as a cell.

1.1.1 Shape of the cell
The shape of a cell depends upon the coverage of the base station. The actual coverage of the base station is called the footprint and is found with the help of measurements from the field. Calculations are easier if the shape of a circle is used, but circles leave spaces between them, and the purpose is to provide coverage to every subscriber: a person in such an interleaving space would get no coverage. To cover the problem of interleaving spaces, the shapes that can be used theoretically are:
- Square
- Triangle
- Hexagon
In the selection criteria, one thing must be kept in mind: every person within a cell, especially at the edges of the cell, should get the same coverage. Among these three choices, the hexagon has the largest coverage area; its shape is closest to the circle and it helps tessellate. An omnidirectional antenna is used at the centre of a hexagonal cell; if we want to use sectored directional antennas instead, they are placed at three of its corners.

1.1.2 Area of the cell
The area of a hexagonal cell with radius R, shown in figure 1.1(a), is given by:
A = (3√3 / 2) R²   (1.1)

1.2 Frequency planning
A cellular system has limited capacity because of its given bandwidth. To solve this problem, cellular systems rely on intelligent reuse of channels throughout the coverage area. Every cellular base station is allotted a group of radio channels to be used within its cell, and base stations in adjacent cells use completely different frequencies. Antennas are chosen so that the transmitted power is limited to within the cell; in this way the allocated frequencies may be reused in other cells. The process of allocating and selecting channel groups for all the base stations in a system is known as frequency reuse or frequency planning. We use two types of antennas:
- Omnidirectional antennas
- Sectored directional antennas
Omnidirectional antennas are used in centre-excited cells and sectored directional antennas are used in edge-excited cells. To understand the concept of frequency reuse, let S be the total number of duplex channels available for use and k the number of channels given to each cell (k ≤ S). Then
S = kN   (1.2)
where N is the number of cells which together use the complete set of available frequencies, known as the cluster size. The frequency reuse factor is
1/N   (1.3)
so each cell in the cluster is assigned 1/N of the available channels. The radio frequencies from 3 Hz to 3000 GHz are separated into 12 bands, as shown in the table, and different parts of the spectrum have different propagation characteristics. As far as mobile communication is concerned, we only pay attention to the UHF spectrum.

1.2.1 Cluster size (N)
If we use a large N (a large cluster), the ratio of the cell radius to the distance between co-channel cells decreases, which causes weaker co-channel interference. If N is smaller, keeping the cell size the same, the channels are reused more often across the coverage area, hence the capacity is increased. So a larger N gives good voice quality but less capacity, and vice versa.
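The relations above can be checked numerically. The following Python sketch is not part of the project's toolchain; all numbers in the example run are illustrative. It simply evaluates the hexagonal cell area of equation (1.1) and the channel split of equations (1.2) and (1.3) for example values of R, S and N.

```python
import math

def hexagon_cell_area(radius_km: float) -> float:
    """Area of a regular hexagonal cell with circumradius R: (3*sqrt(3)/2) * R^2."""
    return (3 * math.sqrt(3) / 2) * radius_km ** 2

def channels_per_cell(total_duplex_channels: int, cluster_size: int) -> int:
    """Split S duplex channels evenly over a cluster of N cells (S = k*N)."""
    return total_duplex_channels // cluster_size

if __name__ == "__main__":
    R = 0.90          # illustrative cell radius in km
    S, N = 21, 7      # illustrative total channels and cluster size
    print(f"cell area       : {hexagon_cell_area(R):.3f} km^2")
    print(f"channels/cell k : {channels_per_cell(S, N)}")
    print(f"reuse factor    : 1/{N} = {1 / N:.3f}")
```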
1.3 Interference
Interference is one of the major factors affecting the capacity and performance of a cellular network. Interference may come from a call in a neighbouring cell or from another base station operating on the same frequency, and it causes crosstalk and noise. There are two types of interference:
- Adjacent channel interference
- Co-channel interference

1.3.1 Adjacent channel interference
Adjacent channel interference results from signals that are adjacent in frequency to the desired signal. It is caused by imperfect filtering, such as incomplete filtering of unwanted modulation products in frequency modulation (FM) systems, improper tuning, or poor frequency control. It can be reduced by careful channel assignment, filtering and power control within a cell.

1.3.2 Co-channel interference
Co-channel cells are the cells which use the same set of frequencies. For example, in figure 1.2 all the cells labelled 'A' are co-channel cells because they use the same set of frequencies. Interference due to co-channel cells is called co-channel interference. It can be reduced by using a greater value of N (cluster size). If D is the distance between co-channel cells and R is the radius of the cell, then using a greater value of N increases the ratio of D to R, hence reducing co-channel interference. The relation can be written as:
Q = D/R = √(3N)

1.4 Improving coverage and capacity
The number of channels assigned to a cell becomes insufficient as the demand on the wireless system increases. To provide more channels per coverage area, some techniques are introduced which improve coverage and capacity:
- Cell splitting
- Sectoring
- Microcell zone concept

1.4.1 Cell splitting
Cell splitting is the process of dividing a cell into smaller cells. In this process the antenna height and the power of the base station are reduced. Cell splitting increases capacity by increasing the degree of frequency reuse. In cell splitting:
- Channel assignment techniques remain the same.
- SIR remains the same.
- Trunking efficiency does not suffer.
Trunking efficiency is a measure of the number of users which can be offered a particular grade of service with a specific configuration of channels. The grade of service (GOS) is a measure of the ability to access a trunked system during the busy hours. The radius of the new cell is reduced to half, so the power is also reduced.

1.4.2 Sectoring
Sectoring uses directional antennas to control interference and the frequency reuse of channels. Co-channel interference is reduced, and system performance thereby improved, by using directional antennas. A cell is normally divided into three 120° sectors or six 60° sectors. When sectoring is used, the channels used in a particular cell are broken into sectored groups and are used only within a particular sector. Because the channels are divided into sectored groups, trunking efficiency is reduced, and the number of handoffs increases. In sectoring, the S/I is improved by reducing interference while trunking efficiency is reduced. The S/I improvement allows the cluster size N to be decreased in order to improve frequency reuse, and thus system capacity. Further improvement in S/I is achieved by downtilting the sector antennas.

1.4.3 Microcell zone concept
The microcell zone concept distributes the coverage of a cell and extends the cell boundary to hard-to-reach places. It maintains the S/I and trunking efficiency, and increases the coverage and capacity of an area.
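Because the cluster size fixes the co-channel reuse ratio Q = D/R = √(3N), a rough first-tier signal-to-interference estimate follows from N alone. The sketch below uses the common textbook approximation S/I ≈ Q^n / i0, with a path-loss exponent n and i0 = 6 first-tier interferers; the exponent and interferer count are assumptions of the sketch, not values taken from this project.

```python
import math

def cochannel_reuse_ratio(cluster_size: int) -> float:
    """Co-channel reuse ratio Q = D/R = sqrt(3N) for hexagonal cell geometry."""
    return math.sqrt(3 * cluster_size)

def signal_to_interference_db(cluster_size: int,
                              path_loss_exponent: float = 4.0,
                              first_tier_interferers: int = 6) -> float:
    """First-tier approximation S/I = (sqrt(3N))^n / i0, returned in dB."""
    q = cochannel_reuse_ratio(cluster_size)
    sir = q ** path_loss_exponent / first_tier_interferers
    return 10 * math.log10(sir)

if __name__ == "__main__":
    for n_cluster in (3, 4, 7, 12):
        print(f"N={n_cluster:2d}  Q={cochannel_reuse_ratio(n_cluster):.2f}  "
              f"S/I={signal_to_interference_db(n_cluster):.1f} dB")
```

The run shows the trade-off discussed above: smaller clusters reuse channels more aggressively but push the first-tier S/I down.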
1.5 Radio wave propagation
Radio waves propagate through different channels and by different paths to reach the MS (mobile station); propagation also depends upon the speed of the wave. Radio wave propagation is modelled at two scales:
- Large scale propagation
- Small scale propagation (fading)

1.5.1 Large scale propagation
A model that predicts the average signal strength over the transmitter-receiver (T-R) distance on a large scale is known as a large scale propagation model.

1.5.2 Small scale propagation
A model that predicts the rapid fluctuation of the received signal strength over a short distance is known as a small scale propagation model, or fading model.

1.5.3 Free space propagation model
The free space propagation model is used to predict the received signal strength when the transmitter and receiver have line of sight (LOS) between them:
Pr(d) = Pt Gt Gr λ² / ((4π)² d² L)
where Pr = received power, Pt = transmitted power, Gt and Gr = transmitter and receiver antenna gains, d = T-R separation, L = system loss factor and λ = wavelength.

1.6 Propagation mechanisms
The mechanisms which affect propagation are:
- Reflection
- Scattering
- Diffraction
- Direct path (in the case of line of sight)
If there is line of sight, the signal reaches the mobile station directly and the signal power is very strong.

1.6.1 Reflection
Reflection occurs when an electromagnetic wave falls upon an object which is large compared to the wavelength of the wave. It occurs from buildings, walls, the surface of the earth, etc.

1.6.2 Diffraction
Diffraction happens when the path between the transmitter and receiver is obstructed by a surface with sharp edges. Its source is any sharp-edged object. The knife edge diffraction model is used for diffraction.

1.6.3 Scattering
Scattering occurs when an electromagnetic wave falls upon an object which has small dimensions compared to the wavelength of the wave. It is caused by small objects, rough surfaces or other irregularities; objects such as lamp posts and trees scatter radio waves. The radar cross section model is used for scattering.

1.7 Small scale fading
Fading is the fluctuation in the received signal strength over very short distances, caused by the reception of different versions of the same signal. The factors which influence small-scale fading are:
- Multipath propagation: in the absence of LOS, the signal follows multiple paths due to reflection, diffraction and scattering.
- Speed of the mobile: fading also occurs due to the movement of the mobile, as the signal strength changes.
- Speed of the surrounding objects: if surrounding objects move much faster than the mobile, they also induce a Doppler shift.
- The transmission bandwidth (BW) of the signal: the received signal is distorted if the transmitted signal bandwidth is greater than the bandwidth of the channel.

1.8 GSM
The first GSM network was launched in 1991. The GSM network is structured hierarchically. It consists of administrative regions, each assigned to an MSC. Each administrative region consists of at least one location area (LA), also called the visited area. An LA consists of several cell groups, and each cell group is assigned to a base station controller (BSC). Cells of one BSC may belong to different LAs. GSM distinguishes explicitly between users and equipment identifiers. The user identity is associated with an MS by means of a personal chip card, the subscriber identity module (SIM). The SIM is portable and transferable between MSs.
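As a quick illustration of the free space model, the sketch below evaluates the Friis equation in dB form. The transmit power, gains, frequency and distance in the example call are only illustrative (loosely borrowed from the GSM downlink figures that appear later in this report), not a claim about any particular link in the plan.

```python
import math

def friis_received_power_dbm(pt_dbm: float, gt_db: float, gr_db: float,
                             freq_mhz: float, distance_km: float,
                             system_loss: float = 1.0) -> float:
    """Friis free-space model Pr = Pt*Gt*Gr*lambda^2 / ((4*pi)^2 * d^2 * L), in dB form."""
    wavelength_m = 3e8 / (freq_mhz * 1e6)       # lambda = c / f
    d_m = distance_km * 1e3
    # Pr[dBm] = Pt[dBm] + Gt + Gr + 20*log10(lambda / (4*pi*d)) - 10*log10(L)
    return (pt_dbm + gt_db + gr_db
            + 20 * math.log10(wavelength_m / (4 * math.pi * d_m))
            - 10 * math.log10(system_loss))

if __name__ == "__main__":
    # Illustrative line-of-sight link near the GSM downlink band
    pr = friis_received_power_dbm(pt_dbm=46, gt_db=13, gr_db=0,
                                  freq_mhz=945, distance_km=0.9)
    print(f"received power: {pr:.1f} dBm")
```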
The Mobile Station Roaming Number is a temporary, location-dependent ISDN number. It is assigned by the locally responsible Visitor Location Register (VLR). The GSM network can be divided into four major parts:
- Mobile station (MS)
- Base station sub-system (BSS)
- Network and switching sub-system (NSS)
- Operation and support sub-system (OSS)

1.8.1 Mobile station
A mobile station consists of two parts:
- Mobile equipment, or terminal
- Subscriber identity module (SIM)

1.8.2 The Terminal
There are different types of terminal, distinguished principally by their power and application. Fixed terminals are installed in cars, and GSM portable terminals can also be used in vehicles. Hand-held terminals have had the biggest success thanks to their weight and volume, which are decreasing continuously. These terminals can emit a power of 2 W, and the evolution of the technology has decreased the maximum power to 0.8 W.

1.8.3 SIM
The SIM is a smart card which identifies the subscriber. Using the SIM card in the mobile, the user can access all the services provided by the operator. The terminal does not operate without the SIM. A personal identification number (PIN) helps protect the SIM.

1.9 The Base Station Subsystem
The BSS connects the MS to the network and switching sub-system. It is in charge of transmission as well as reception. The BSS is further divided into two main parts:
- Base transceiver station (BTS), or base station
- Base station controller (BSC)

1.9.1 The Base Transceiver Station
The BTS comprises the transceivers and antennas used in each cell of the network. A BTS is usually placed at the centre of a cell, and the size of the cell is defined by its transmitting power. Each BTS has one to sixteen transceivers, depending upon the density of users.

1.10 The Base Station Controller
The BSC controls a group of BTSs and manages the radio resources. The BSC is in charge of handover, frequency hopping and control of the radio frequency power levels of the BTSs.

1.11 The Network and Switching Subsystem
Its role is to manage communication between mobile users and other users, such as ISDN and telephony users. It stores information about the subscribers in databases and manages their mobility.

1.12 The Mobile Services Switching Center (MSC)
The MSC is the central component of the NSS; the network switching functions are performed by the MSC. It also provides connections to other networks.

Chapter 2 Planning

City profiling is an important phase of the project in which detailed information is gathered about the different areas and their population; the city boundary, market analysis and roads are the key features of this information. This phase is divided into different tasks.

2.1 Lahore City Map
The first task is to obtain a detailed map of Lahore city which includes all aspects relevant to the project:
- Area division
- Dense area
- Sub-urban area
- Open area
- Boundaries of the city

2.2 Boundary Marking
The project "Radio Frequency Planning" is basically the frequency planning of the city, not of its surrounding areas. The exact boundary of the city is marked in order to concentrate on the marked area.

2.3 Population
The population of the city plays an important role in frequency planning, as it drives the estimations and assumptions. The population of the city is around 10 million.

2.4 Estimations and Assumptions
This part is mainly concerned with frequency planning. When a new telecommunication company enters the market, it estimates its users. This estimation is done with respect to the total population of the particular area.
The estimations are made for the users in the urban, suburban and open areas.

2.5 Area Division
The area division depends upon the percentage of population in an area and the type of area, as this is an important factor in site as well as frequency planning. The Lahore city is divided into three major areas.

2.5.1 Urban Area
An urban area is an area with a higher density of people and structures in comparison to the areas surrounding it.

2.5.2 Sub-Urban Area
A suburban area consists of districts located either within a town or city's outer premises or just outside its limits.

2.5.3 Open Area
Open areas are partially settled places away from the large cities. Such areas are distinct from the more intensively settled urban and suburban areas and have less population.

2.6 Site Planning
2.6.1 Map of Lahore
2.6.2 Urban Area
2.6.3 Sub-Urban Area
2.6.4 Open Area

HATA Model for Urban Area
L_p(urban) = 69.55 + 26.16 log10(f_c) − 13.82 log10(h_te) − a(h_re) + (44.9 − 6.55 log10(h_te)) log10(d)
where
L_p = path loss in urban areas in decibels (dB)
h_te = height of the base station antenna in metres (m)
h_re = height of the mobile station antenna in metres (m)
f_c = frequency of transmission in megahertz (MHz)
d = distance between the base station and the mobile station in kilometres
and a(h_re) is the correction factor for the mobile antenna height.

To calculate the radius of a site in the urban area:
For the downlink:
- Received power = −75 dBm (this power covers both indoor and outdoor coverage; range −70 to −90 dBm)
- Base station antenna height = 35 m (the average antenna height in the city is 30 to 200 m)
- Mobile antenna height = 1.5 m
- 13 dBm
- Maximum power transmitted by the base station = 46 dBm
- Cable loss = 2.01 dB
- Downlink frequency = 945 MHz (downlink band 935 to 960 MHz)
- Combiner loss = 5.5 dB
Putting these values in the HATA equation gives the downlink radius.
For the uplink:
- Minimum power received by the base station = −102 dBm
- Maximum power transmitted by the mobile = 29.1 dBm
- Uplink frequency = 900 MHz (uplink band 890 to 915 MHz)
Putting these values in the HATA equation gives the uplink radius.
We will be using d = 0.90 km, as it covers both uplink and downlink.

For the Sub-Urban Area:
For the downlink, the parameters are the same as for the urban area. For the uplink, the parameters are also the same as for the urban area. We will be using d = 2.32 km for the suburban area.

For the Open Areas:
For the downlink, the parameters are the same as for the urban area. For the uplink, we will be using d = 8 km for the open areas.

We will be using 65° directional antennas. The angle between two consecutive lobes is 120°.
r = radius of the lobes
For a full lobe:
For all 3 lobes:
Area of a site in the urban area:
Area of a site in the suburban area:
Area of a site in the fields (open area):
Calculations for the number of BTSs:

2.7 Frequency Planning
One of the breakthroughs in solving the problem of congestion and user capacity is the cellular concept. Cellular radio systems rely on the reuse of channels throughout a coverage region. A group of radio channels is allocated to each cellular base station, to be used within an area known as a cell. Different channels are assigned in the cells adjacent to a base station. The same group of channels can be reused by limiting the coverage area to within the boundaries of a cell, keeping interference within tolerable limits. Frequency planning is the design process of selecting and allocating channel groups for the stations within a system. In the theoretical calculations a fixed cell size is assumed; the number of channels per cell then determines the cluster size, which in turn determines the capacity of the cellular system. In the theoretical calculation there is a trade-off between interference and capacity: if we reduce the cluster size, the same channels are reused by more cells across the area, giving more capacity.
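The radius calculation above can be reproduced approximately with the standard Okumura-Hata formula for a small/medium city. The sketch below is a rough re-derivation rather than the project's original worksheet: it treats the unlabeled 13 dB figure from the link budget as the base-station antenna gain (an assumption) and ignores any fading or interference margins, so the resulting radius will only be in the same ballpark as the 0.90 km adopted in the report.

```python
import math

def hata_urban_path_loss_db(freq_mhz: float, h_base_m: float, h_mobile_m: float,
                            distance_km: float) -> float:
    """Okumura-Hata median path loss for an urban (small/medium city) environment."""
    a_hm = ((1.1 * math.log10(freq_mhz) - 0.7) * h_mobile_m
            - (1.56 * math.log10(freq_mhz) - 0.8))      # mobile antenna correction
    return (69.55 + 26.16 * math.log10(freq_mhz)
            - 13.82 * math.log10(h_base_m) - a_hm
            + (44.9 - 6.55 * math.log10(h_base_m)) * math.log10(distance_km))

def max_radius_km(max_path_loss_db: float, freq_mhz: float,
                  h_base_m: float, h_mobile_m: float) -> float:
    """Invert the Hata formula: distance at which the allowed path loss is reached."""
    loss_at_1km = hata_urban_path_loss_db(freq_mhz, h_base_m, h_mobile_m, 1.0)
    slope = 44.9 - 6.55 * math.log10(h_base_m)
    return 10 ** ((max_path_loss_db - loss_at_1km) / slope)

if __name__ == "__main__":
    # Downlink budget from the report; the 13 dB antenna gain is an assumption.
    eirp_dbm = 46 + 13 - 2.01 - 5.5        # Tx power + gain - cable loss - combiner loss
    allowed_loss = eirp_dbm - (-75)        # target received power of -75 dBm
    print(f"allowed path loss : {allowed_loss:.1f} dB")
    print(f"urban cell radius : {max_radius_km(allowed_loss, 945, 35, 1.5):.2f} km")
```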
But from another perceptive small cluster size causes the ratio between cell radius, and the distance between co-channels cells to increase, leading to stronger co-channels interference. In practical calculations, a fixed no of channels are allocated to a cell. One channel per lobe 3channels are allocated to a cell. The capacity can be increased by allocating 2 channels per lobe or 6 channels per cell. But after allocating channels once, they will remain fixed for the whole cellular system and frequency planning. Now as with the fixed no of channels as per cell, the capacity will remain constant of the system and we can achieve weaker co-channel interference, by having a small cluster size(N). A cluster size of 7 is selected in this project, which is also discussed. So in later practical world , there is not a trade-off between capacity and co-channel interference. 2.7.1 Calculations The city of Lahore is divided into 120 cells. We take 3 channels per cell that gives us 1 cell = 3 channels Reuse factor = 1/N = 1/7 Which means that frequency can be reused after a cluster of 7 cells. That gives us the total of 7 x 3 =21+ 2(guard cells)=23 channels We will be using 23 channels with a reuse factor of 1/7. 2.8 Implementation in GAIA Figure 2.1 is a snapshot of GAIA planning tool showing us the structure of an urban area. This figure illustrates the urban boundary which we calculate during city profiling. It also shows the antenna system used, in this case 3 sectors with 120 degree azimuth spacing is used. Antennas are installed on the rooftop of buildings or houses due to dense population and to provide a better coverage. Figure 2.2 shows us the planning of a Sub-Urban area with sites more distance apart as population is less, compared to urban. In Sub-Urban 3 sector cell is used which is similar to the ones used in Urban Figure 2.3 shows us the coverage planning of a network in an open area. Here the sites are further apart as open area has least population. 3 sector cell is used with the antennas installed above a steel structure for better coverage. Figure 2.4 shows the sector wise cell area of the sites in the urban area of the city in GAIA, which can be differentiated with the help of different color for each sector, also it shows the coverage area of every site. We have used grid approach in this planning, it is the most widely used and most effective technique used theoretically and practically. Figure 2.5 shows the cell boundary of sites in Sub-urban area of the city. Figure 2.6 shows the cell boundary in the open area of the city. Figure 2.7 illustrates the signal strength in the urban area of the city. Because of the dense population the signal power is strong throughout to ensure high quality calls to the subscribers with minimum interference and call drop. Figure 2.8 shows the 2G signal strength in the Sub-urban areas where population density is low and so the power required is less as compared to urban areas. Figure 2.9 shows the serving signal strength in open area. The signal is the weakest as there is the least number of people in open area. CHAPTER 3 FUNDAMENTALS OF 3G 3.1 INTRODUCTION The Universal Mobile Telephony System (UMTS) or 3G as it is known is the next big thing in the world of mobile telecommunications. It provides convergence between mobile telephony broadband access and Internet Protocol (IP) backbones. This introduces very variable data rates on the air interface, as well as the independence of the radio access infrastructure and the service platform. 
For users this makes available a wide spectrum of circuit-switched or packet data services through the newly developed high bit rate radio technology named Wideband Code Division Multiple Access (WCDMA). The variable bit rate and variety of traffic on the air interface have presented completely new possibilities for both operators and users, but also new challenges to network planning and optimization. The success of the technology lies in optimum utilization of resources by efficient planning of the network for maximum coverage, capacity and quality of service. This part of our project aims to detail method of UMTS Radio Network (UTRAN) Planning. The new technologies and services have brought vast changes within the network planning; the planning of a 3G network is now a complex balancing act between all the variables in order to achieve the optimal coverage, capacity and Quality of Service simultaneously. 3.2 WCDMA In UMTS access scheme is DS-CDMA (Direct Sequence CDMA) which involves that a code sequence is directly used to modulate the transmitted radio signal with information which is spreaded over approximately 5 MHz bandwidth and data rate up to 2 Mbps. Every user is assigned a separate code/s depending upon the transaction, thus separation is not based on frequency or time but on the basis of codes. The major advantage of using WCDMA is that there is no plan for frequency re-use. 3.3 NODE B Node B functions as a RBS (Radio Base Station) and provides radio coverage to a geographical area, by providing physical radio link between the UE (User Equipment) and the network. Node B also refer the codes that are important to identify channels in a WCDMA system. It contains the RF transceiver, combiner, network interface and system controller, timing card, channel card and backplane. The Main Functions of Node B are: Closed loop power control CDMA Physical Channel coding Modulation /Demodulation Micro Diversity Air interface Transmission /Reception Error handling Both FDD and TDD modes are supported by Single node B and it can be co-located with a GSM BTS to reduce implementation costs. The conversion of data from the Radio interface is the main task of Node B. It measures strength and quality of the connection. The Node B participates in power control and is also responsible for the FDD softer handover. On the basis of coverage, capacity and antenna arrangement Node B can be categorizes as Omni directional and Sectorial: OTSR (Omni Transmitter Sector Receiver) STSR (Sector Transmitter Sector Receiver) 3.3.1 OTSR (Omni Transmit Sector Receive) The OTSR configuration uses a single (PA) Power Amplifier, whose output is fed to a transmit splitter. The power of the RF signal is divided by three and fed to the duplexers of the three sectors, which are connected to sectorized antennas. 3.3.2 STSR (Sectorial Transmit Sector Receive) The STSR configuration uses three (PA) Power Amplifier, whose output is fed directly to the duplexers of the three sectors, which are connected to sectorized antennas. Node B serve the cells which depend on sectoring. 3.4 ACCESS MODES 3.4.1 FDD (Frequency Division Duplex) A duplex method whereby uplink and downlink transmissions use two separated radio frequencies. In the FDD, each downlink and uplink uses the different frequency band. 3.4.2 TDD (Time Division Duplex) It is a method in which same frequency is used for the transmission of downlink and uplink by using synchronized time intervals. 
Time slots are divided into transmission and reception part in the physical channel. 3.4.3 Frequency Bands 3.4 CELLULAR CONCEPT The UMTS network is third generation of cellular radio network which operate on the principle of dividing the coverage area into zones or cells (node B in this case), each of which has its own set of resources or transceivers (transmitters /receivers) to provide communication channels, which can be accessed by the users of the network. A cell is created by transmitting numerous number of low power transmitters. Cell size is determined by the different power levels according to the subscriber demand and density within a specific region. Cells can be added to accommodate growth. Communication in a cellular network is full duplex, which is attained by sending and receiving messages on two different frequencies. In order to increase the frequency reuse capability to promote spectrum efficiency of a system, it is desirable to reuse the same channel set in two cells which are close to each other as possible, however this increases the probability of co-channel interference . The performance of cellular mobile radio is affected by co channel interference. Co-channel interference, when not minimized, decreases the ratio of carrier to interference powers (C/I) at the periphery of cells, causing diminished system capacity, more frequent handoffs, and dropped calls. Usually cells are represented by a hexagonal cell structure, to demonstrate the concept, however, in practice the shape of cell is determined by the local topography. 3.4.1 Types of Cell The 3G network is divided on the basis of size of area covered. Micro cell the area of intermediate coverage, e.g., middle of a city. Pico cell the area of smallest coverage, e.g., a hot spot in airport or hotel. Macro cell the area of largest coverage, e.g., an complete city. 3.5 FADING Fading is another major constraint in wireless communication. All signals regardless of the medium used, lose strength this is known as attenuation/fading. There are three types of fading: Pathloss Shadowing Rayleigh Fading 3.5.1 Pathloss Pathloss occurs as the power of the signal steadily decreases over distance from the transmitter. 3.5.2 Shadowing Shadowing or Log normal Fading is causes by the presence of building, hills or even tree foilage. 3.5.3 Rayleigh Fading Rayleigh Fading or multipath fading is a sudden decrease in signal strength as a result of interference between direct and reflected signal reaching the mobile station. 3.6 HANDOVER IN CDMA The term handover or handoff refers to the process of transferring data session or an ongoing call from channel to channel connected to the core network to another. The handover is performed due to the mobility of a user that can be served in another cell more efficiently. Handover is necessary to support mobility of users. Handover are of following types (also known as handoff): Hard Handover Soft Handover Softer Handover 3.6.1Hard.Handover In Hard handover the old radio links in the UE are dispose of before the new radio links takes place. It can be either seamless or non-seamless. In seamless hard handover, the handover is not detected by the user. A handover that needs a change of the carrier frequency is a hard handover. 3.6.2Soft.Handover Soft handover takes place when cells on the same frequency are changed. Atleast one radio link is always kept to the UTRAN in the removal and addition of the radio links. It is opperated by means of macro diversity in which many radio links are active. 
Design and Planning of 2G, 3G and Channel Modelling of 4G

Chapter 1 Fundamentals of Cellular Communication

This chapter covers the background knowledge required for this project.

1.1 Cell
The area covered by a single BTS (base transceiver station) is known as a cell.

1.1.1 Shape of the Cell
The shape of a cell depends upon the coverage of the base station. The actual coverage of the base station is called its footprint and is found with the help of field measurements. Calculations would be easiest with circular cells, but circles cannot cover an area without leaving spaces between them. Since the purpose is to provide coverage to each and every subscriber, a person standing in such a space would get no coverage at all. To avoid these interleaving spaces, the shapes that can theoretically be used are the square, the triangle and the hexagon. In choosing among them, one thing must be kept in mind: every person within a cell should get the same coverage, especially a person at the edge of the cell. The hexagon is the shape among these three with the largest coverage area; its area and shape are closest to the circle, and it tessellates. An omnidirectional antenna is placed at the centre of the hexagon; if sectored directional antennas are used instead, they are placed at three of its corners.

1.1.2 Area of the Cell
The area of a cell with radius R, shown in figure 1.1(a), is given by A = (3*sqrt(3)/2) * R^2.

1.2 Frequency Planning
A cellular system has limited capacity because of the given bandwidth. To solve this problem, cellular systems rely on intelligent reuse of channels throughout the coverage area. Every cellular base station is allotted a group of radio channels to be used within its cell, and base stations in adjacent cells use completely different frequencies. Antennas are chosen so that their radiated power stays largely within the cell; in this way the allocated frequencies may be reused in other cells. The process of selecting and allocating channel groups for all the base stations in a system is known as frequency reuse or frequency planning. Two types of antennas are used: omnidirectional antennas in centre-excited cells and sectored directional antennas in edge-excited cells. To understand the concept of frequency reuse, let S be the total number of duplex channels available for use and k the number of channels given to each cell (k <= S); then

S = k * N    (1.2)

where N is the number of cells that together use the complete set of available frequencies, known as a cluster. The frequency reuse factor is 1/N (1.3), and each cell in a cluster is assigned k of the S available channels. The radio spectrum from 3 Hz to 3000 GHz is divided into 12 bands, as shown in the table, and different parts of the spectrum have different propagation characteristics. As far as mobile communication is concerned, only the UHF band is of interest.
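As a concrete illustration of equations (1.2) and (1.3), the short Python sketch below computes the channels per cell and the overall capacity for a few cluster sizes. The channel total and the number of cells covering the area are illustrative assumptions, not figures taken from this project.

# Numerical illustration of S = k*N and the reuse factor 1/N (section 1.2).
# S and the number of cells covering the area are assumed values, not project figures.
S = 84                  # assumed total number of duplex channels
total_cells = 100       # assumed number of cells needed to cover the service area

for N in (3, 4, 7, 12):                 # candidate cluster sizes
    k = S // N                          # channels assigned to each cell
    clusters = total_cells / N          # clusters needed to cover the area
    capacity = total_cells * k          # simultaneous channels available over the whole area
    print(f"N={N:2d}: k={k:2d} channels/cell, {clusters:5.1f} clusters, capacity {capacity} channels")

With these assumed numbers the run shows the trade-off discussed in the next subsection: a smaller cluster size gives more channels per cell and more capacity over the same area.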
1.2.1 Cluster Size (N)
If a large N (a large cluster) is used, the ratio of the cell radius to the distance between co-channel cells decreases, which gives weaker co-channel interference. If N is made smaller while the cell size is kept the same, more clusters are needed to cover the area, and hence the capacity is increased. So a larger N gives better voice quality but less capacity, and vice versa.

1.3 Interference
Interference is one of the major factors limiting the capacity and performance of a cellular network. It may be caused by a call in a neighbouring cell or by another base station operating on the same frequency, and it produces crosstalk and noise. There are two types of interference: adjacent channel interference and co-channel interference.

1.3.1 Adjacent Channel Interference
Adjacent channel interference results from signals that are adjacent in frequency to the desired signal. It is caused by imperfect filtering, such as incomplete filtering of unwanted modulation products in frequency modulation (FM) systems, improper tuning, or poor frequency control. Adjacent channel interference can be reduced by careful channel assignment, filtering and power control within a cell.

1.3.2 Co-channel Interference
Co-channel cells are cells that use the same set of frequencies; for example, in figure 1.2 all the cells labelled 'A' are co-channel cells because they use the same set of frequencies. Interference caused by co-channel cells is called co-channel interference. It can be reduced by using a greater value of N (the cluster size). If D is the distance between co-channel cells and R is the cell radius, then a greater value of N increases the ratio of D to R and hence reduces co-channel interference. The relation can be written as

Q = D / R = sqrt(3N)

1.4 Improving Coverage and Capacity
The number of channels assigned to a cell becomes insufficient as the demand on the wireless system increases. To provide more channels per unit coverage area, several techniques are used to improve coverage and capacity: cell splitting, sectoring, and the microcell zone concept.

1.4.1 Cell Splitting
Cell splitting is the process of dividing a cell into smaller cells; the antenna height and the transmit power of the base station are reduced. Cell splitting increases capacity by increasing the number of times the channels are reused over the area. In cell splitting the channel assignment technique remains the same, the SIR remains the same, and trunking efficiency does not suffer. Trunking efficiency is a measure of the number of users that can be offered a particular grade of service with a specific configuration of channels, and the grade of service (GOS) is a measure of the ability to access a trunked system during the busy hour. The radius of each new cell is reduced to half, so the power is also reduced.

1.4.2 Sectoring
Sectoring uses directional antennas to control interference and the reuse of channels. Using directional antennas reduces co-channel interference and therefore increases system performance. A cell is normally divided into three 120° sectors or six 60° sectors. When sectoring is used, the channels of a cell are broken into sectored groups and each group is used only within its own sector. Because the channels are divided into sectored groups, trunking efficiency is reduced. In sectoring, then, the SIR is improved by reducing interference while the trunking efficiency is reduced, and the number of handoffs increases. The S/I improvement allows the cluster size N to be decreased in order to improve the frequency reuse, and thus the system capacity. Further improvement in S/I is achieved by downtilting the sector antennas.
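A hedged sketch of the relations just described: the co-channel reuse ratio Q = D/R = sqrt(3N) for hexagonal cells, together with a rough first-tier S/I estimate that assumes six equidistant co-channel interferers and a path-loss exponent n. The value n = 4 is an assumption chosen for illustration, not a parameter of this project.

import math

def cochannel_metrics(N: int, n: float = 4.0):
    """Co-channel reuse ratio and a rough first-tier S/I estimate for cluster size N."""
    Q = math.sqrt(3 * N)             # D/R for a hexagonal layout
    si_linear = (Q ** n) / 6.0       # S/I ~ Q^n / i0, with i0 = 6 first-tier interferers
    return Q, 10 * math.log10(si_linear)

for N in (3, 4, 7, 12):
    Q, si_db = cochannel_metrics(N)
    print(f"N={N:2d}: D/R = {Q:.2f}, approx S/I = {si_db:.1f} dB")

Under these assumptions N = 7 gives an S/I of roughly 18.7 dB, which is why it is a popular cluster size; sectoring improves on this figure and allows a smaller N, as noted above.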
1.4.3 Microcell Zone Concept
The microcell zone concept distributes the coverage of a cell and extends the cell boundary to hard-to-reach places. It maintains the S/I and the trunking efficiency while increasing the coverage and capacity of an area.

1.5 Radio Wave Propagation
Radio waves propagate through different channels and along different paths to reach the MS (mobile station); propagation also depends on the speed of the wave. Radio wave propagation models fall into two types: large-scale propagation and small-scale propagation (fading).

1.5.1 Large-Scale Propagation
A model that predicts the average signal strength over large transmitter-receiver (T-R) distances is known as a large-scale propagation model.

1.5.2 Small-Scale Propagation
A model that predicts the rapid fluctuation of the received signal strength over a short distance is known as a small-scale propagation model, or fading model.

1.5.3 Free Space Propagation Model
The free space propagation model is used to predict the received signal strength when the transmitter and receiver have a clear line of sight (LOS) between them:

Pr(d) = Pt * Gt * Gr * lambda^2 / ((4*pi)^2 * d^2 * L)

where Pr = received power, Pt = transmitted power, Gt and Gr = transmitter and receiver antenna gains, d = T-R separation, L = system loss factor, and lambda = wavelength.

1.6 Propagation Mechanisms
The mechanisms that affect propagation are reflection, scattering, diffraction and direct reception (in the line-of-sight case). If there is a line of sight, the signal reaches the mobile station directly and the signal power is very strong.

1.6.1 Reflection
Reflection occurs when an electromagnetic wave falls upon an object that is large compared with the wavelength of the wave. It occurs from buildings, walls, the surface of the earth, etc.

1.6.2 Diffraction
Diffraction happens when the path between the transmitter and the receiver is obstructed by a surface with sharp edges; its source is any sharp-edged object. The knife-edge diffraction model is used for diffraction.

1.6.3 Scattering
Scattering occurs when an electromagnetic wave falls upon an object whose dimensions are small compared with the wavelength of the wave. It is caused by small objects, rough surfaces or other irregularities; objects such as lamp posts and trees scatter radio waves. The radar cross-section model is used for scattering.

1.7 Small-Scale Fading
Fading is the fluctuation in the received signal strength over a very short distance, caused by the reception of different versions of the same signal. The factors that influence small-scale fading are:
- Multipath propagation: in the absence of LOS, the signal follows multiple paths due to reflection, diffraction and scattering.
- Speed of the mobile: fading also occurs because the signal strength changes as the mobile moves.
- Speed of surrounding objects: if the surrounding objects move much faster than the mobile, they too induce a Doppler shift.
- Transmission bandwidth of the signal: the received signal is distorted if the transmitted signal bandwidth is greater than the bandwidth of the channel.
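A small sketch of the free-space model in section 1.5.3, purely for illustration; the transmit power, antenna gains and distance below are assumed example values and are not taken from the link budgets used later in this report.

import math

def friis_received_power_dbm(pt_dbm, gt_dbi, gr_dbi, freq_hz, d_m, L=1.0):
    """Free-space received power: Pr = Pt*Gt*Gr*lambda^2 / ((4*pi)^2 * d^2 * L)."""
    lam = 3e8 / freq_hz                                   # wavelength in metres
    pt_w = 10 ** ((pt_dbm - 30) / 10)                     # dBm -> watts
    gt = 10 ** (gt_dbi / 10)
    gr = 10 ** (gr_dbi / 10)
    pr_w = pt_w * gt * gr * lam**2 / ((4 * math.pi)**2 * d_m**2 * L)
    return 10 * math.log10(pr_w) + 30                     # watts -> dBm

# Example: 43 dBm transmitter, 13 dBi / 0 dBi antennas, 945 MHz, 1 km -> about -36 dBm.
print(friis_received_power_dbm(43, 13, 0, 945e6, 1000))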
1.8 GSM
The first GSM network was launched in 1991. A GSM network is structured hierarchically: it consists of administrative regions, each assigned to an MSC. Each administrative region consists of at least one location area (LA), also called the visited area. An LA consists of several cell groups, and each cell group is assigned to a base station controller (BSC); the cells of one BSC may belong to different LAs. GSM distinguishes explicitly between users and equipment identifiers. The user identity is associated with an MS by means of a personal chip card, the subscriber identity module (SIM), which is portable and transferable between MSs. The mobile roaming number is a temporary, location-dependent ISDN number assigned by the locally responsible Visitor Location Register (VLR). The GSM network can be divided into four major parts:
- Mobile station (MS)
- Base station sub-system (BSS)
- Network and switching sub-system (NSS)
- Operation and support sub-system (OSS)

1.8.1 Mobile Station
A mobile station consists of two parts: the mobile equipment (terminal) and the subscriber identity module (SIM).

1.8.2 The Terminal
There are different types of terminal, distinguished principally by their power and application. Fixed terminals are installed in cars; GSM portable terminals can also be used in vehicles; and hand-held terminals have had the biggest success thanks to their weight and volume, which are decreasing continuously. These terminals can emit a power of 2 W, and the evolution of the technology has reduced the maximum power to 0.8 W.

1.8.3 SIM
The SIM is a smart card that identifies the subscriber. With the SIM card in the mobile, the user can access all the services provided by the operator; the terminal does not operate without it. A personal identification number (PIN) helps protect the SIM.

1.9 The Base Station Subsystem
The BSS connects the MS to the network and switching sub-system and is in charge of transmission as well as reception. The BSS is divided into two main parts: the base transceiver station (BTS), or base station, and the base station controller (BSC).

1.9.1 The Base Transceiver Station
The BTS houses the transceivers and antennas used in each cell of the network. The BTS is usually at the centre of its cell, and the size of the cell is defined by its transmitting power. Each BTS has one to sixteen transceivers, depending on the density of users.

1.10 The Base Station Controller
The BSC controls a group of BTSs and manages their radio resources. It is in charge of handover, frequency hopping and control of the radio-frequency power levels of the BTSs.

1.11 The Network and Switching Subsystem
Its role is to manage communication between mobile users and other users, such as ISDN and fixed telephony users. It stores information about the subscribers in databases and manages their mobility.

1.12 The Mobile Services Switching Center (MSC)
The MSC is the central component of the NSS and performs the network switching functions. It provides connections to other networks.
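The hierarchy described in sections 1.8 to 1.12 can be pictured with a small, hypothetical data-structure sketch; the class and field names below are illustrative only and are not taken from this report or from any GSM specification.

from dataclasses import dataclass, field
from typing import List

@dataclass
class BTS:                       # serves one cell; one to sixteen transceivers
    cell_id: str
    location_area: str           # cells of one BSC may belong to different LAs
    transceivers: int = 1

@dataclass
class BSC:                       # controls a group of BTSs and manages radio resources
    name: str
    bts_list: List[BTS] = field(default_factory=list)

@dataclass
class MSC:                       # central switching node of one administrative region
    region: str
    bscs: List[BSC] = field(default_factory=list)

msc = MSC("Region-1", [BSC("BSC-1", [BTS("cell-001", "LA-1", 4), BTS("cell-002", "LA-2", 2)])])
print(sum(len(b.bts_list) for b in msc.bscs), "BTSs under", msc.region)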
Chapter 2 Planning

City profiling is an important phase of the project in which detailed information is gathered about the different areas and their population; the city boundary, market analysis and the road network are the key features of this information. The phase is divided into the following tasks.

2.1 Lahore City Map
The first task is to obtain a detailed map of Lahore city that includes all the aspects relevant to the project:
- area division
- dense (urban) area
- sub-urban area
- open area
- boundaries of the city

2.2 Boundary Marking
The project "Radio Frequency Planning" is basically the frequency planning of the city itself, not of its surrounding areas. The exact boundary of the city is therefore marked so that the work concentrates on the marked area.

2.3 Population
The population of the city plays an important role in the frequency planning and helps greatly in the estimations and assumptions. The population of the city is around 10 million.

2.4 Estimations and Assumptions
This part is mainly concerned with the frequency planning. When a new telecommunication company enters the market it estimates its users, and this estimation is done with respect to the total population of the particular area. The estimations are made separately for the users in urban, suburban and open areas.

2.5 Area Division
The area division depends upon the percentage of the population in an area and the type of area, as this is an important factor in site as well as frequency planning. Lahore city is divided into three major areas.

2.5.1 Urban Area
An urban area is an area with a higher density of people and structures than the areas surrounding it.

2.5.2 Sub-Urban Area
A suburban area consists of districts located either within a city's outer premises or just outside its limits.

2.5.3 Open Area
An open area consists of partially settled places away from large cities. Such areas differ from the more intensively settled urban and suburban areas, and their population is lower.

2.6 Site Planning
2.6.1 Map of Lahore
2.6.2 Urban Area
2.6.3 Sub-Urban Area
2.6.4 Open Area

Hata Model for Urban Areas
L_p = path loss in urban areas in decibels (dB)
h_b = height of the base station antenna in metres (m)
h_m = height of the mobile station antenna in metres (m)
f = transmission frequency in megahertz (MHz)
d = distance between the base station and the mobile station in kilometres (km)

The Okumura-Hata expression for the urban path loss is

L_p = 69.55 + 26.16 log10(f) - 13.82 log10(h_b) - a(h_m) + (44.9 - 6.55 log10(h_b)) log10(d)

where a(h_m) is the mobile antenna height correction factor.

To calculate the radius of a site in the urban area:

For the downlink:
Design received power = -75 dBm (this power covers both indoor and outdoor coverage; range -70 to -90 dBm)
h_b = 35 m (the average antenna height in the city is 30 to 200 m)
h_m = 1.5 m
13 dBm
Maximum power transmitted by the base station = 46 dBm
Cable loss = 2.01 dB
f = 945 MHz (downlink band 935 to 960 MHz)
Combining loss = 5.5 dB
Putting these values in the Hata equation.

For the uplink:
Minimum power received by the base station = -102 dBm
Maximum power transmitted by the mobile = 29.1 dBm
f = 900 MHz (uplink band 890 to 915 MHz)
Putting these values in the Hata equation.

We will use d = 0.90 km, as it satisfies both the uplink and the downlink.

For the Sub-Urban Area
For the downlink, the suburban parameters are the same as for the urban area.
For the uplink, the parameters are also the same as for the urban area.
We will use d = 2.32 km for the suburban area.

For Open Areas
For the downlink, the parameters are the same as for the urban area.
For the uplink, the parameters are likewise the same.
We will use d = 8 km for open areas.

We will use 65° directional antennas; the angle between two consecutive lobes is 120°.
r = radius of a lobe
For a full lobe:
For all 3 lobes:
Area of a site in the urban area:
Area of a site in the suburban area:
Area of a site in the fields (open area):
Calculations for the number of BTSs:
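The following Python sketch reproduces the downlink radius calculation under explicit assumptions: it uses the standard Okumura-Hata urban formula with the small/medium-city correction factor and the budget figures listed above, and it treats the unlabelled 13 dBm value as base-station antenna gain. Because the exact budget structure is an assumption, the result only roughly approximates the 0.90 km urban radius adopted in the report; the suburban and open-area corrections shown are the standard Hata extensions.

import math

def hata_urban_loss(f_mhz, hb_m, hm_m, d_km):
    """Okumura-Hata median urban path loss in dB (small/medium-city correction)."""
    a_hm = (1.1 * math.log10(f_mhz) - 0.7) * hm_m - (1.56 * math.log10(f_mhz) - 0.8)
    return (69.55 + 26.16 * math.log10(f_mhz) - 13.82 * math.log10(hb_m) - a_hm
            + (44.9 - 6.55 * math.log10(hb_m)) * math.log10(d_km))

def solve_radius_km(max_loss_db, f_mhz, hb_m, hm_m):
    """Distance at which the urban Hata loss equals the allowed path loss."""
    a_hm = (1.1 * math.log10(f_mhz) - 0.7) * hm_m - (1.56 * math.log10(f_mhz) - 0.8)
    fixed = 69.55 + 26.16 * math.log10(f_mhz) - 13.82 * math.log10(hb_m) - a_hm
    slope = 44.9 - 6.55 * math.log10(hb_m)
    return 10 ** ((max_loss_db - fixed) / slope)

# Downlink budget from the report; treating the 13 dB term as antenna gain is an
# assumption, since that value carries no label in the report.
allowed_loss = 46 - 2.01 - 5.5 + 13 - (-75)            # about 126.5 dB allowed path loss
d_urban = solve_radius_km(allowed_loss, 945, 35, 1.5)  # about 1 km with these assumptions
print(f"urban downlink radius = {d_urban:.2f} km (report adopts 0.90 km)")

# Standard Hata corrections, relative to the urban loss, for the other environments.
def suburban_loss(f_mhz, hb_m, hm_m, d_km):
    return hata_urban_loss(f_mhz, hb_m, hm_m, d_km) - 2 * (math.log10(f_mhz / 28)) ** 2 - 5.4

def open_area_loss(f_mhz, hb_m, hm_m, d_km):
    return (hata_urban_loss(f_mhz, hb_m, hm_m, d_km)
            - 4.78 * (math.log10(f_mhz)) ** 2 + 18.33 * math.log10(f_mhz) - 40.94)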
2.7 Frequency Planning
One of the breakthroughs in solving the problem of congestion and user capacity is the cellular concept. Cellular radio systems rely on reuse of channels throughout a coverage region: a group of radio channels is allocated to each cellular base station to be used within an area known as a cell, and different channels are assigned to the adjacent cells. The same group of channels can be reused elsewhere by limiting each base station's coverage to the boundaries of its cell, so that interference between co-channel cells stays within tolerable limits. Frequency planning is the design process of selecting and allocating channel groups to the base stations of a system. In the theoretical calculations a fixed cell size is assumed; varying the number of channels per cell, and from that the cluster size, changes the capacity of the cellular system. In these theoretical calculations there is a trade-off between interference and capacity: if the cluster size is reduced, more clusters are needed to cover the area, which gives more capacity, but a small cluster size also brings the co-channel cells closer relative to the cell radius, leading to stronger co-channel interference. In the practical calculations a fixed number of channels is allocated to each cell: one channel per lobe, i.e. 3 channels per cell. The capacity could be increased by allocating 2 channels per lobe (6 channels per cell), but once allocated the channels remain fixed for the whole cellular system and frequency plan. With the number of channels per cell fixed, the capacity of the system remains constant, and weaker co-channel interference can then be achieved by choosing a sufficiently large cluster size N; a cluster size of 7 is selected in this project, as discussed below. In this practical arrangement, therefore, there is no longer a trade-off between capacity and co-channel interference.

2.7.1 Calculations
The city of Lahore is divided into 120 cells. We take 3 channels per cell, which gives us 1 cell = 3 channels. The reuse factor is 1/N = 1/7, which means that a frequency can be reused after a cluster of 7 cells. That gives a total of 7 x 3 = 21 channels, plus 2 guard channels, i.e. 23 channels. We will be using 23 channels with a reuse factor of 1/7; the short sketch at the end of this chapter restates this arithmetic in code.

2.8 Implementation in GAIA
Figure 2.1 is a snapshot of the GAIA planning tool showing the structure of an urban area. It illustrates the urban boundary calculated during city profiling and the antenna system used, in this case 3 sectors with 120° azimuth spacing. The antennas are installed on the rooftops of buildings or houses because of the dense population and to provide better coverage. Figure 2.2 shows the planning of a sub-urban area, with sites spaced further apart since the population is lower than in the urban area; a 3-sector cell similar to the urban one is used. Figure 2.3 shows the coverage planning of the network in an open area. Here the sites are furthest apart, as the open area has the lowest population; a 3-sector cell is used, with the antennas installed on a steel structure for better coverage. Figure 2.4 shows the sector-wise cell areas of the sites in the urban area of the city in GAIA; the sectors are differentiated by colour, and the figure also shows the coverage area of every site. A grid approach has been used in this planning, as it is the most widely used and most effective technique both theoretically and practically. Figure 2.5 shows the cell boundaries of the sites in the sub-urban area of the city, and Figure 2.6 shows the cell boundaries in the open area. Figure 2.7 illustrates the signal strength in the urban area: because of the dense population the signal power is kept strong throughout, to ensure high-quality calls with minimum interference and call drops. Figure 2.8 shows the 2G signal strength in the sub-urban areas, where the population density is lower and so less power is required than in the urban areas. Figure 2.9 shows the serving signal strength in the open area; the signal is the weakest there, as the open area has the fewest people.
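As promised in section 2.7.1, the sketch below simply restates the channel arithmetic of this chapter in code. All the numbers come from the report, except the cluster count, which is derived here only for illustration.

import math

cells = 120               # cells covering Lahore city
channels_per_cell = 3     # one channel per lobe
cluster_size = 7          # N selected in this project
guard_channels = 2

channels_needed = cluster_size * channels_per_cell + guard_channels  # 7*3 + 2 = 23
reuse_factor = 1 / cluster_size                                      # 1/7
clusters = math.ceil(cells / cluster_size)                           # derived: ceil(120/7) = 18

print(f"{channels_needed} channels, reuse factor 1/{cluster_size}, about {clusters} clusters")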
CHAPTER 3 FUNDAMENTALS OF 3G

3.1 INTRODUCTION
The Universal Mobile Telecommunications System (UMTS), or 3G as it is commonly known, is the next big step in mobile telecommunications. It provides convergence between mobile telephony, broadband access and Internet Protocol (IP) backbones. This introduces highly variable data rates on the air interface, as well as independence between the radio access infrastructure and the service platform. For users it makes available a wide spectrum of circuit-switched and packet data services through the newly developed high-bit-rate radio technology named Wideband Code Division Multiple Access (WCDMA). The variable bit rates and the variety of traffic on the air interface open completely new possibilities for both operators and users, but also pose new challenges for network planning and optimization. The success of the technology lies in the optimum utilization of resources through efficient planning of the network for maximum coverage, capacity and quality of service. This part of our project aims to detail the method of UMTS Terrestrial Radio Access Network (UTRAN) planning. The new technologies and services have brought vast changes to network planning: planning a 3G network is now a complex balancing act between all the variables in order to achieve optimal coverage, capacity and quality of service simultaneously.

3.2 WCDMA
The access scheme in UMTS is DS-CDMA (Direct Sequence CDMA), in which a code sequence directly modulates the transmitted radio signal, spreading the information over approximately 5 MHz of bandwidth and supporting data rates of up to 2 Mbps. Every user is assigned one or more separate codes depending on the transaction, so separation is not based on frequency or time but on codes. The major advantage of using WCDMA is that no frequency reuse planning is required.
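To make the code-based separation in section 3.2 concrete, here is a toy numerical sketch: two users spread their bits with different orthogonal codes, transmit over the same band at the same time, and a receiver recovers each user's bits by correlating with that user's code. The codes and bit patterns are invented for illustration; real WCDMA uses OVSF channelization and scrambling codes with many more chips per bit.

import numpy as np

codes = {                       # orthogonal, Walsh-like spreading codes of length 4
    "user_a": np.array([+1, +1, +1, +1]),
    "user_b": np.array([+1, -1, +1, -1]),
}
bits = {"user_a": np.array([+1, -1, +1]), "user_b": np.array([-1, -1, +1])}

def spread(b, c):
    # Each bit becomes a chip sequence (bit value times the user's code).
    return np.concatenate([bit * c for bit in b])

# The shared channel is simply the sum of both users' chip streams.
channel = spread(bits["user_a"], codes["user_a"]) + spread(bits["user_b"], codes["user_b"])

def despread(rx, c):
    # Correlate the received chips with one user's code, bit by bit.
    chips = rx.reshape(-1, c.size)
    return np.sign(chips @ c)

print(despread(channel, codes["user_a"]))   # recovers [ 1 -1  1]
print(despread(channel, codes["user_b"]))   # recovers [-1 -1  1]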
3.3 NODE B
Node B functions as an RBS (radio base station) and provides radio coverage to a geographical area by providing the physical radio link between the UE (user equipment) and the network. Node B also handles the codes that identify channels in a WCDMA system. It contains the RF transceiver, combiner, network interface, system controller, timing card, channel card and backplane. The main functions of Node B are:
- closed-loop power control
- CDMA physical channel coding
- modulation/demodulation
- micro diversity
- air interface transmission/reception
- error handling
A single Node B supports both the FDD and TDD modes and can be co-located with a GSM BTS to reduce implementation costs. The main task of Node B is the conversion of data to and from the radio interface; it measures the strength and quality of the connection, participates in power control and is also responsible for the FDD softer handover. On the basis of coverage, capacity and antenna arrangement, a Node B can be categorized as omnidirectional or sectorial:
- OTSR (Omni Transmit Sector Receive)
- STSR (Sector Transmit Sector Receive)

3.3.1 OTSR (Omni Transmit Sector Receive)
The OTSR configuration uses a single power amplifier (PA), whose output is fed to a transmit splitter. The power of the RF signal is divided by three and fed to the duplexers of the three sectors, which are connected to sectorized antennas.

3.3.2 STSR (Sectorial Transmit Sector Receive)
The STSR configuration uses three power amplifiers (PAs), whose outputs are fed directly to the duplexers of the three sectors, which are connected to sectorized antennas. The number of cells a Node B serves depends on its sectoring.

3.4 ACCESS MODES
3.4.1 FDD (Frequency Division Duplex)
A duplex method in which the uplink and downlink transmissions use two separated radio frequencies; the downlink and the uplink each use a different frequency band.

3.4.2 TDD (Time Division Duplex)
A method in which the same frequency is used for both downlink and uplink transmission by using synchronized time intervals; the time slots of the physical channel are divided into a transmission part and a reception part.

3.4.3 Frequency Bands

3.4 CELLULAR CONCEPT
The UMTS network is a third-generation cellular radio network and operates on the principle of dividing the coverage area into zones or cells (Node Bs in this case), each of which has its own set of resources, or transceivers (transmitters/receivers), to provide communication channels that can be accessed by the users of the network. Coverage is created by deploying numerous low-power transmitters, each forming a cell. Cell sizes are determined by different power levels according to the subscriber demand and density within a specific region, and cells can be added to accommodate growth. Communication in a cellular network is full duplex, which is attained by sending and receiving messages on two different frequencies. To increase the frequency reuse capability and promote the spectrum efficiency of a system, it is desirable to reuse the same channel set in cells that are as close to each other as possible; however, this increases the probability of co-channel interference. The performance of cellular mobile radio is affected by co-channel interference: when not minimized, it decreases the ratio of carrier to interference powers (C/I) at the periphery of cells, causing diminished system capacity, more frequent handoffs and dropped calls. Cells are usually represented by a hexagonal structure to demonstrate the concept; in practice, however, the shape of a cell is determined by the local topography.

3.4.1 Types of Cell
The 3G network is divided on the basis of the size of the area covered:
- Micro cell: the area of intermediate coverage, e.g. the middle of a city.
- Pico cell: the area of smallest coverage, e.g. a hot spot in an airport or hotel.
- Macro cell: the area of largest coverage, e.g. a complete city.

3.5 FADING
Fading is another major constraint in wireless communication. All signals, regardless of the medium used, lose strength; this is known as attenuation or fading. There are three types of fading: path loss, shadowing and Rayleigh fading.

3.5.1 Pathloss
Path loss occurs as the power of the signal steadily decreases with distance from the transmitter.

3.5.2 Shadowing
Shadowing, or log-normal fading, is caused by the presence of buildings, hills or even tree foliage.

3.5.3 Rayleigh Fading
Rayleigh fading, or multipath fading, is a sudden decrease in signal strength resulting from interference between the direct and reflected signals reaching the mobile station.
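A tiny simulation sketch of the multipath effect described in section 3.5.3: summing many equal-amplitude paths with random phases produces an envelope that is approximately Rayleigh distributed, with occasional deep fades. The number of paths and samples are arbitrary illustrative choices.

import numpy as np

rng = np.random.default_rng(0)
n_samples, n_paths = 10_000, 32

# Each sample is the sum of n_paths equal-amplitude components with random phases.
phases = rng.uniform(0, 2 * np.pi, size=(n_samples, n_paths))
envelope = np.abs(np.sum(np.exp(1j * phases), axis=1)) / np.sqrt(n_paths)

print(f"median envelope: {np.median(envelope):.2f}")
print(f"fraction faded more than 10 dB below the mean: "
      f"{np.mean(envelope < np.mean(envelope) * 10**(-10/20)):.3f}")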
3.6 HANDOVER IN CDMA
The term handover (or handoff) refers to the process of transferring a data session or an ongoing call from one channel connected to the core network to another. A handover is performed because of the mobility of a user who can be served more efficiently by another cell; handover is therefore necessary to support user mobility. Handovers are of the following types:
- Hard handover
- Soft handover
- Softer handover

3.6.1 Hard Handover
In a hard handover the old radio links of the UE are released before the new radio links are established. It can be either seamless or non-seamless; in a seamless hard handover the handover is not noticed by the user. A handover that requires a change of carrier frequency is a hard handover.

3.6.2 Soft Handover
A soft handover takes place when cells on the same frequency are changed. At least one radio link to the UTRAN is always kept during the removal and addition of radio links. It operates by means of macro diversity, in which several radio links are active at the same time.

3.6.3 Softer Handover
Softer handover is a special case of soft handover in which the radio links that are added and removed belong to the same Node B. In softer handover, macro diversity with maximum ratio combining can be performed in the Node B. There are inter-cell and intra-cell handovers. Handover cases include: 3G to 2G (e.g. handover to GSM), FDD inter-frequency hard handover, TDD/FDD handover (change of cell), TDD/TDD handover, and FDD/TDD handover (chan