Thursday, October 31, 2019

Importance of Radioisotopes and Isotopes Research Paper - 1

Importance of Radioisotopes and Isotopes - Research Paper Example Some isotopes undergo radioactive decay over time and are therefore known as radioactive isotopes. On the other hand, isotopes that have not been observed to undergo any form of decay are known as stable isotopes. In general, isotopes have similar chemical properties but different physical properties. For example, hydrogen has three different isotopes (fig 1): 1H, 2H, and 3H. Hydrogen-1, or protium, is the most abundant isotope. Because they all have similar chemical properties, they can form similar bonds. H2O and D2O are examples, but they have different physical properties: H2O has a melting point of 0.0°C and a boiling point of 100.0°C, whereas D2O melts at 3.82°C and boils at 101.4°C (Stoker 55). Isotopes have various applications in different sectors. In the medical field, radioactive and stable isotopes are used in procedures for both diagnosis and therapy. Isotopes also have significant applications in biomedical research, as well as in physics, biology, chemistry, the geosciences and other branches of science and technology. Isotopes can be used in various ways in the fields discussed above. They are generally useful because of their emission properties. Isotopes with a short half-life decay and emit radiation, such as beta emissions, which can be detected by various means. Therefore, they can be used as ‘tracers’. For example, scientists can measure the uptake of nutrients in a plant by using a radioactive isotope of phosphorus. A compound containing 32P can be introduced into the soil, where it is taken up by the plant. 32P has a short half-life of about 2 weeks, and the rate of uptake can be found by measuring the time taken for it to appear in the leaves, where it can be traced by detecting the beta emissions (Kotz, Treichel and Townsend 1086). Many other applications of radioactive isotopes use a similar technique.
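The tracer technique above rests on the exponential decay law N(t) = N0 · (1/2)^(t/T½). A minimal sketch, assuming the roughly two-week half-life of 32P (about 14.3 days); the function name and default value are our own illustration:

```python
# Sketch of the half-life decay law behind isotope tracing.
# Assumes a 14.3-day half-life for 32P (the "about 2 weeks"
# mentioned in the text); the function name is our own.

def fraction_remaining(elapsed_days, half_life_days=14.3):
    """Fraction of the original radioactive sample left after elapsed_days."""
    return 0.5 ** (elapsed_days / half_life_days)

# After one half-life, half the tracer remains;
# after two half-lives, a quarter.
print(fraction_remaining(14.3))   # 0.5
print(fraction_remaining(28.6))   # 0.25
```

This fast decay is what makes 32P convenient as a tracer: activity is easy to detect over the experiment, yet the isotope does not persist in the environment for long.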
There are many applications of isotopes in the field of medicine. Iodine is an essential

Tuesday, October 29, 2019

Food Insecurities Essay Example for Free

Food Insecurities Essay Have you ever seen a person yell at his colleague or his partner, overreacting on a particular matter in a way that annoys not only the person who got yelled at, but also third parties who know about it or have witnessed the scene? The many who do not wish to get into the mess, or who have enough sense not to judge immediately, will most likely give a fair statement and try to reason out that person’s overreaction by saying he probably had a bad day or a dilemma at home. However, if we look a little closer, all of us will eventually realise that it comes down to one matter: insecurity. Insecurities are nothing new or unfamiliar to humankind. Everyone has insecurities, whether they realise it or not. The Oxford Dictionary defines insecurity as uncertainty or anxiety about oneself, or a lack of confidence. Insecurities exist in every living soul on this planet. To this day, insecurity is still seen as a negative matter, as not many have addressed this issue in proper ways using proper mediums. Most parents don’t even talk about it, and shove it away when their children decide to speak up about it. Little do people know that the slightest things in life are the ones that add up to our insecurities. The amount of insecurity in a person differs from one to the other, and the types of insecurity that one possesses also vary. The most common type is physical insecurity. Let’s face it: human beings are never satisfied. Even when you have all the parts of the body needed to sustain yourself and go through your daily routine with ease, you still beg for more. Some want healthier and shinier hair, some want to be taller, but most importantly, everyone wants something. It is not just human beings as individuals who face insecurities, but also countries and states. Currently, the world is looking at the issue of food insecurity, which is also classified as a type of insecurity.
Food security may be defined as the availability of food and one’s access to it. Hence, the United Nations has defined food security as all people at all times having both physical and economic access to the basic food they need. More than 2 billion people on this planet are lucky enough not to worry about this form of insecurity. However, this matter is more complicated than it seems. Food insecurity may result from many different causes, and it is imperative that we focus on who the food insecure are and why they are food insecure. Among the most common causes of food insecurity are drought and extreme weather changes. This setback, which is very commonly faced by third world countries, usually ranges from overnight floods to droughts; in short, the climate changes faced by these countries are extreme. In most African countries, like Nigeria, droughts are not new. They have been a setback since the time of their ancestors; nonetheless, the people are helpless and have no means of solving this matter. In many comparisons throughout time, some of the most severe food crises were all preceded by drought or by other similarly extreme weather events. These extremities result in poor and failed harvests, which in turn result in food scarcity and high prices for the food that remains available. As mentioned in the Climate and Development Knowledge Network report entitled ‘Managing Climate Extremes and Disasters in the Agriculture Sectors: Lessons from the IPCC SREX Report’, such forces of nature cause impacts that include not only food insecurity, but changing productivity and livelihood patterns, economic losses, and impacts on infrastructure. Besides that, the natural resource base for the poor and food-insecure is invariably narrow and, in many areas, fragile. With the exception of Uganda, only 4 to 10 percent of the land area is classed as arable, and just a small area of land is suitable for rainfed cultivation.
The greatest numbers of poor people are concentrated in the arid and semi-arid ecosystems and on marginal land in the higher rainfall parts of the region. It has become axiomatic to say that poverty is one of the main causes of environmental degradation. This can be seen all too clearly in the farming of steep slopes, which takes place as an increasing population is forced to cultivate marginal land. The falling crop yields that characterize the marginal areas are a result of the loss of massive quantities of topsoil throughout the region, declining soil fertility as fallow systems are replaced by continuous cultivation, reductions in soil organic matter as manure is burnt for fuel, and shrinking holding sizes. However, the poor are also the most vulnerable to environmental degradation because they depend on the exploitation of common property resources for a greater share of their incomes than richer households do. In the rangelands, the evidence for long-term secular environmental degradation is ambiguous. The successive cyclical growth and decline of herds reflects cycles of rainfall and rangeland productivity, and is perfectly normal. As animals die in large numbers, the rangelands recover remarkably quickly. However, when there is a major drop in the number of animals, the people who depend on them for their livelihoods also suffer. Development programmes that have sought to increase animal production on rangelands through water development and animal disease prevention have all too often failed to find, at the same time, sustainable ways of increasing animal nutrition, so the resulting increased numbers of animals may wreak havoc on the range itself. Many of the available freshwater resources are in river basins and lakes that extend beyond the boundaries of individual nations. Shared water resources include lakes Victoria, Albert, Edward, Kivu and Turkana and major rivers such as the Blue Nile, White Nile, Atbara, Awash and Shebele.
The potential for developing irrigation from these sources is constrained by the problem of achieving agreement on sharing the resources and avoiding conflict. Although natural climatic factors have played their part in the process of desertification, in general it is increased population and the related development of unsustainable production systems that have had the most negative impact on the fragile natural resource base. Wood and manure have remained the main sources of domestic energy, even in urban centres. This situation has contributed to depleting the forest and range resources, resulting in an overall decrease in biomass and biodiversity, reduced water infiltration and increased runoff and soil erosion. These factors, which contribute to the impoverishment of ecosystems, have led to a vicious circle of environmental degradation, lower system resilience to erratic rainfall, decreased agricultural productivity and increased poverty and food insecurity. Not only that: food insecurity in these third world countries is also caused by the poor state of development and maintenance of roads and transport, energy sources and telecommunications in the marginal areas of countries in the Horn of Africa, which makes it difficult for these areas to become integrated into the national and regional economy. As with all other indicators of development, the countries of the region have some of the worst figures worldwide with respect to access to roads and water supply. A recent report suggests that, in terms of access to infrastructure, the gap between Africa and the rest of the world has widened over the past 15 years. The sparse road and communications network hampers emergency relief operations as well as the commercialization of the rural economy.
The density of the road network in the countries of the region gives an idea of both how difficult it is to reach people in rural areas with services and the problems such people face in participating in the market economy. For example, in Ethiopia, every kilometre of road serves 72 km² and 3,000 people, compared with only 8 km² and 850 people in North Africa. Even after strenuous efforts by development agencies and NGOs, access to a clean water supply is still an unobtainable luxury for most rural inhabitants in the Horn. Piped systems are uncommon in rural areas, and protected wells and hand pumps are the best that rural communities can expect. The burden of collecting water, as with so many other menial tasks, falls almost exclusively on women in the communities, who must spend many hours each day collecting water from unsafe sources. The statistics on access to water and sanitation reveal wide differences within the region. In three countries, namely Eritrea, Ethiopia and Somalia, only one-quarter of the population has access to safe water, and in two others (the Sudan and Uganda) the figure is less than 50 percent. Access to sanitation is as low as 13 percent and, except for Kenya, barely exceeds 50 percent anywhere. In addition, the indicators of access to social services in the countries that face food insecurity are also among the lowest in the world. While the average figures are bad enough, they mask fundamental inequalities in access to services within the region. Again, rural areas, especially remote, low-potential areas, are the least well served. Nomadic and semi-nomadic pastoralists are the most difficult populations to provide services to and, consequently, they are invariably the ones with the poorest health services and least education.
All these indicators, combined with malnutrition and poor access to safe water, have adverse consequences for productivity and for the long-term physical and cognitive development of people in the region. Also, let us not forget that crops and plants face diseases as well. Diseases affecting livestock or crops can have devastating effects on food availability, especially if there are no emergency back-up plans in place. For example, an epidemic of stem rust on wheat which was spreading across Africa and into Asia in 2007 caused major concern. A virulent wheat disease could destroy most of the world’s main wheat crops, leaving millions to starve. The fungus had spread from Africa to Iran and may already be in Pakistan. A different threat, on the other hand, has attacked the African continent’s second biggest crop: wheat. In 1999, 50 years since the last outbreak, a new and virulent strain of stem rust attacked the Ugandan crops. Its spores then travelled to Ethiopia and Kenya, before appearing in Iran last year. The Food and Agriculture Organisation of the United Nations (FAO) has warned six other countries in Central and South Asia to be prepared and keep an eye out for symptoms of this new strain, while scientists in the United States of America are working diligently to find a resistant variety that combats this problem. It is important that a remedy is obtained quickly, as in India alone more than 50 million small-scale farmers are at risk because they depend on wheat for their food and earnings. Most importantly, we must not overlook that politics and dictatorship also play a role in food insecurity. Many do not realise that politics plays a part in something as serious as this. As mentioned by Nobel Prize-winning economist Amartya Sen, “There is no such thing as an apolitical food problem.”
More often than not, it is the administration of the country that determines a famine’s severity, or even whether the famine will occur at all. If truth be told, the 20th century is full of examples of governments undermining the food security of their own nations. Let us take a look at Nigeria, Africa’s most populous state, where a legacy of corrupt governance and an economy based primarily on oil exports has left the agriculture sector significantly undermined, leaving millions of Nigerians in deep hunger. True, the neighbouring countries export food to Nigeria in exchange for money, but remember: the people in these neighbouring countries need food too. And they are much poorer than those living in Nigeria. It was reported by the United Nations that thousands of children in countries neighbouring Nigeria died of malnutrition. These children paid the price not because of a food shortage in their own country, but because of the food shortage in Nigeria. The distribution of food is often a political issue in most countries. The government will always give priority to urban areas and cities, since the most influential and powerful families and enterprises are located there. For generations, ruling governments have overlooked subsistence farmers and rural areas in general. In other words, the more rural an area, the less likely the government is to pay attention to solving its needs. What’s more, the governments of these countries normally keep the price of basic grain at such extremely low levels that subsistence farmers cannot accumulate sufficient capital to make investments to improve their production. Hence, they are prevented from getting out of their precarious situation. In addition, food has always been a political weapon for dictators and warlords, who reward their supporters and deny food supplies to those areas that are against them.
Under these conditions, food becomes more like a currency than a basic need to which everyone has a right. Food has become money to buy support, and a weapon to use against the opposition. Even in Guatemala, income inequality is amongst the worst in the world, with indigenous communities at a disadvantage. In some areas, an estimated 75 percent of children, from infants to children aged six and seven, are severely malnourished. This is a shocking food-scarcity statistic for a country that is merely a four-hour flight from the USA. Furthermore, it was pointed out in William Bernstein’s 2004 publication entitled ‘The Birth of Plenty’ that individuals without property are prone to starvation, and that it is much easier to bend the fearful and the hungry to the will of the state. If a farmer’s property can be arbitrarily threatened by the state, that power will inevitably be used to intimidate those with different political and religious opinions. It is crucial that we understand and are aware of the consequences of this global food scarcity. The effects are similar to those of malnutrition and hunger: at the outset, the population is greatly affected in that stunted growth may occur. Stunting starts when the baby is still in the mother’s womb and continues until the age of three. Once stunting happens, giving proper nutrition to these helpless children will not reverse the damage or improve the child’s condition. Pregnant mothers who do not receive the nutrition they need face a higher risk of infant and child mortality later on, which is, of course, a heartbreaking circumstance. Apart from that, severe malnutrition during early childhood also leads to defects in cognitive development.
Stunted individuals also have a higher chance of contracting diseases and illnesses compared with those who have not experienced stunting. It must also be noted that food insecurity is associated with various developmental consequences for children in the United States. Research was conducted by Diana F. Jyoti, Edward A. Frongillo, and Sonya J. Jones to examine whether food insecurity is linked to specific developmental consequences for children, and whether these consequences may be both nutritional and nonnutritional.

Saturday, October 26, 2019

General Happiness Equation Using Econometric Models Of Panel Data Methods Philosophy Essay

General Happiness Equation Using Econometric Models Of Panel Data Methods Philosophy Essay This study presents a general happiness equation using econometric models of panel data methods. The model tries to observe and estimate the relationship between income and happiness after controlling for other factors. With advanced methods, we also test for the presence of personality bias and whether it correlates with income. Finally, we provide some analysis of our estimation results and briefly discuss alternative approaches in the literature. Introduction Empirical research on human happiness has only in the last few decades received serious attention from both economists and non-economists. The lack of nationally representative survey data and the difficulty of applying econometric techniques were the stumbling blocks for further research in the past. With the establishment of national socio-economic panel surveys, as well as technological advancements that gave birth to neat econometric software packages, the literature experienced a surge in the amount of research as well as in the popularity drawn to these works. Things began to look brighter and brighter, and the result was the birth of a new field called happiness economics. What happiness economists typically try to do is estimate what they call happiness equations. Using econometric techniques, they can test for a causal link between income and happiness. After controlling for other factors that can cause happiness (e.g. education, marital status, disability, unemployment etc.), early work which used simple cross-sectional methods suggests a positive and statistically significant correlation. Running Ordinary Least Squares (OLS) regressions on cross-sectional data sounds decent, but is in actual fact highly inadequate. What if happiness is also caused by another factor that is unobservable in the data, such as personality? Could it be that one’s happiness strongly depends on who one is as a person?
On face value, it seems plausible, or at least interesting, to suggest that people’s capacity to be happy varies from individual to individual. Perhaps some people are born extroverted and optimistic, and as a result tend to be happier than others even if they have less income. Then simple OLS will suffer from an omitted variable bias problem, which causes one or more of its classical assumptions to be violated and hence estimates to be biased. To solve this problem of unobserved heterogeneity bias, we can use panel data and propose a fixed effects model. We could run a pooled OLS regression on panel data, but it would still be susceptible to the omitted variable bias problem. Firstly, we can think of the personality variable as a time-constant effect. By exploiting the nature of panel data, which follows the same individual over time, we can eliminate this unobserved time-constant effect by transforming the data. The simplest way is to perform first-differencing. Namely, we take observations on an individual for two time periods, calculate the differences, and then run an OLS regression on these transformed values. In effect, we have removed all unobserved time-constant variables, not only personality. Maybe an individual’s thumbprints or DNA are correlated with happiness; we do not know for sure. But the elegance of first-differencing makes sure that we remove all nuisance unobserved time-constant variables that disturb our primary goal. By transforming the data in such a way that we are now dealing with relative rather than absolute values, we have also mitigated the problem of heterogeneous scaling in subjective responses. Every individual has their own perception of the happiness score. One person’s score of 7 may be another’s score of 6, and so on.
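The first-differencing idea can be illustrated with a toy simulation (our own made-up numbers, not BHPS data): a time-constant "personality" effect correlated with income biases pooled OLS upward, while differencing the two periods removes it.

```python
import numpy as np

# Toy illustration of how first-differencing removes a time-constant
# personality effect (a) that is correlated with income (x).
rng = np.random.default_rng(0)
n = 5000
true_beta = 0.5

a = rng.normal(size=n)                   # unobserved personality, fixed over time
x1 = a + rng.normal(size=n)              # income in period 1, correlated with a
x2 = a + rng.normal(size=n) + 0.2        # income in period 2
y1 = 2 + true_beta * x1 + a + rng.normal(size=n)
y2 = 2 + true_beta * x2 + a + rng.normal(size=n)

def ols_slope(x, y):
    """Slope from a simple one-regressor OLS fit."""
    return np.cov(x, y, bias=True)[0, 1] / np.var(x)

pooled = ols_slope(np.r_[x1, x2], np.r_[y1, y2])   # biased upward: picks up a
fd = ols_slope(x2 - x1, y2 - y1)                   # a is differenced away

print(round(pooled, 2), round(fd, 2))
```

With these settings the pooled estimate lands well above the true coefficient of 0.5, while the first-differenced estimate recovers it, which is exactly the omitted-variable story described above.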
This would make interpersonal (cross-sectional) comparisons meaningless, and is part of the reason why past empirical work in this literature has been viewed with scepticism by many economists. By reasonably assuming that a person’s metric or perception is time-invariant, this issue is dealt with in a fixed effects model. There are other, more advanced transformation techniques that use data on multiple time periods. One technique performs a time-demeaning transformation on the data. Again, all unobserved time-constant variables will be eliminated. For reasons presented later, OLS regression on these transformed values provides more efficient estimators than OLS on the first-differenced values for our purposes. Estimators that result from this method are called fixed effects (FE) estimators. While the fixed effects model allows for arbitrary correlation between the explanatory variables and the unobserved time-constant effect, a random effects model explicitly assumes that there is no such correlation. Estimation of this model is typically done by transforming the data using a method of quasi-demeaning, and then running a Generalised Least Squares (GLS) regression on the transformed values. The resulting estimators are called random effects (RE) estimators. How these techniques are performed, as well as the intuition behind them, is explained in technical detail in Section 3. The reason we may want to use a random effects model over a fixed effects model is that we may believe that personality is uncorrelated with all of the independent variables, including income. If this is true, then FE estimators will be relatively inefficient compared with RE estimators. But intuitively, personality is likely to be correlated with the ability to make money, and thus with income. Studies have shown that happy people tend to earn more in general (e.g. see Lyubomirsky et al. 2005).
If this is true, simple pooled OLS methods will lead to inaccurate estimates where the effect of income on happiness is overstated, or biased upwards. The fixed effects model allows for this correlation, and is thus more widely accepted in the literature to fit the data better. Lastly, can we test this assumption? Is the unobserved time-constant variable correlated with any of the explanatory variables? Which model fits the data better? We can perform what is called a Hausman test, which tests for statistically significant differences between the fixed effects and random effects coefficients on the time-varying explanatory variables. The intuition and the decision rule for which model to accept will be described in detail later. For comparison, we present the results for pooled OLS, FE and RE estimations together. Although this approach is one of the most popular in the literature when it comes to estimating happiness equations, there are alternatives. Powdthavee’s (2009) work was quite similar to this study, but in addition he used a method of instrumental variables (IV), which involved using another variable to instrument for income. Happiness equations may suffer from the problem of simultaneity, whereby the causal link between happiness and income runs both ways. To address this, he used data on the proportion of household members whose payslip had been shown to the interviewer as the instrument for income. He reasoned that household income is bound to be measured more accurately the higher the proportion of household members showing their payslip. With this direct correlation, as well as the reasonable assumption that this proportion has little correlation with happiness, it would allow for an estimation based on an exogenous income effect. Besides his work, other studies (e.g. Frijters et al. 2004; Gardner & Oswald 2007) have attempted to address the endogeneity effect more directly using different types of exogenous income effects.
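The Hausman test mentioned above compares the FE and RE coefficient vectors, weighting their difference by the difference of their covariance matrices. A minimal numerical sketch with made-up inputs (not estimates from this study):

```python
import numpy as np

# Sketch of the Hausman statistic: H = (b_FE - b_RE)' [V_FE - V_RE]^{-1} (b_FE - b_RE).
# The coefficient vectors and covariance matrices below are invented for illustration.

def hausman_statistic(b_fe, b_re, v_fe, v_re):
    """Quadratic form comparing FE and RE estimates; large H rejects RE."""
    d = b_fe - b_re
    return float(d @ np.linalg.inv(v_fe - v_re) @ d)

b_fe = np.array([0.20, -0.10])
b_re = np.array([0.25, -0.08])
v_fe = np.array([[4e-4, 0.0], [0.0, 9e-4]])   # FE less efficient: larger variances
v_re = np.array([[1e-4, 0.0], [0.0, 4e-4]])

h = hausman_statistic(b_fe, b_re, v_fe, v_re)
print(round(h, 2))   # compare against a chi-squared critical value with 2 df
```

Under the null that RE is consistent, H follows a chi-squared distribution with degrees of freedom equal to the number of time-varying coefficients compared; a large H favours the fixed effects model.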
Another line of thinking interprets the happiness scores as ordinal rather than cardinal. Here, simple OLS estimation would be inadequate. One solution would be to use ordered latent response models. Winkelmann (2004) is one example; he performed an ordered probit regression with multiple random effects on subjective well-being data in Germany. To date, there is no statistical software package that can implement a fixed effects ordered probit regression. An alternative would be to convert the happiness scoring scale into a (0,1) dummy, thereby roughly cutting the sample in half, and then estimate by conditional logit regression, as attempted by Winkelmann & Winkelmann (1998) and later Powdthavee (2009). However, their work, combined with Ferrer-i-Carbonell & Frijters (2004), seems to suggest that it makes no qualitative difference whether we assume cardinality or ordinality of the happiness scores. There is no one perfect model that can address all the problems. We believe that the FE/RE approach is not only simple, but also elegant and easy to understand. Coefficient estimates can be interpreted easily, and the approach addresses the most important problems in the estimation, especially unobserved heterogeneity bias. Although bias in happiness equations comes from many different sources, it is our belief that this source is one of the major ones and is easily removed using simple techniques. Data We use data from the British Household Panel Survey (BHPS), a widely used data source for empirical studies in the UK. The BHPS surveys a nationally representative sample of the UK population aged 16 and above. The survey has interviewed both individual respondents and households as a whole every year in waves since 1991. To date there have been 18 waves in total. Survey questions are comprehensive and include income, marital status, employment status, health, opinions on social attitudes and so on.
The data set is also an unbalanced panel; there is entry into and exit from the panel. Data can be obtained through the UK Data Archive website. Our dependent variable, happiness, uses data on the question of individual life satisfaction. From Wave 6 onwards, the survey included a question which asks respondents to rate how satisfied they are with their lives on a scale from 1 (very dissatisfied) to 7 (very satisfied). This question is strategically located at the end of the survey, after respondents have been asked about their household and individual circumstances, in order to avoid any framing effects of a particular event dominating responses to the LS question. For ease of representation, we now refer to happiness as life satisfaction (LS). For income, we use data on total household net income, deflated by the consumer price index and equivalised using the Modified-OECD equivalence scale. The initial value is worked out from responses in the Household Finance section, which includes questions on the sources and amounts of income received in a year. Inflation would seriously distort our estimation and so is accounted for. Equivalisation involves dividing the total household net income by a value worked out according to an equivalence scale. For example, a household with two adults would have its total household income divided by 1.5. The more adults there are in the household, the higher this value would be; children add relatively less to the value than adults. This method provides an equivalent household income variable, which accounts for the fact that different household sizes enjoy different standards of living on the same level of income per household member. Due to economies of scale in consumption, a household with three adults would typically have needs less than triple those of a single-member household. Equivalisation makes comparisons between households a lot fairer and more accurate. Lastly, we use the log form.
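The equivalisation step described above can be sketched as follows, assuming the standard Modified-OECD weights (1.0 for the first adult, 0.5 for each further adult, 0.3 per child); the function names are our own:

```python
# Sketch of the Modified-OECD equivalence scale described in the text:
# first adult counts 1.0, each additional adult 0.5, each child 0.3.

def oecd_modified_scale(adults, children):
    """Equivalence factor for a household of the given composition."""
    return 1.0 + 0.5 * (adults - 1) + 0.3 * children

def equivalised_income(household_income, adults, children):
    """Household net income adjusted for household composition."""
    return household_income / oecd_modified_scale(adults, children)

# Two adults share economies of scale, so income is divided by 1.5, not 2,
# as in the two-adult example in the text.
print(oecd_modified_scale(2, 0))         # 1.5
print(equivalised_income(30000, 2, 0))   # 20000.0
```

Dividing by the scale rather than by the raw head count is precisely how the method captures economies of scale: a three-adult household's needs are treated as 2.0 times, not 3 times, those of a single adult.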
We use data on the years 2002-2006 (Waves 12-16). There are in total [unconfirmed] respondents with [unconfirmed] observations that have nonmissing information on LS. Descriptive statistics are provided in the Appendix section. Econometric Method We denote LS_it as our dependent variable. We have k explanatory (binary and non-binary) variables, which include income, employment status, marital status and so on. There are N respondents i = 1, ..., N, each observed over time periods t = 1, ..., T. A simple pooled cross-section model would look like

LS_it = b0 + b1 x_it1 + b2 x_it2 + ... + bk x_itk + v_it (1)

where the first subscript denotes the cross-sectional units, the second denotes the time period and the third denotes the explanatory variables. As mentioned earlier, this simple model does not address the issue of unobserved heterogeneity bias. To see why, we can view the unobserved variables affecting the dependent variable, or the error, as consisting of two parts: a time-constant component a_i (the source of heterogeneity bias) and a time-varying component u_it,

v_it = a_i + u_it. (2)

Thus if we regress by simple pooled OLS, we estimate

LS_it = b0 + b1 x_it1 + ... + bk x_itk + a_i + u_it. (3)

Here one of the key assumptions for OLS estimation to be unbiased has been violated, since the error term a_i + u_it is correlated with the explanatory variables. The above model is called a fixed effects model. The variable a_i captures all unobserved, time-constant factors that affect LS_it. In our analysis, personality falls under this variable. u_it is the idiosyncratic error that represents other unobserved factors that change over time and affect LS_it. The simplest method to eliminate a_i is as follows. First, we write the equation for two years as

LS_i1 = b0 + b1 x_i11 + ... + bk x_i1k + a_i + u_i1,
LS_i2 = b0 + b1 x_i21 + ... + bk x_i2k + a_i + u_i2.

By subtracting the first-period equation from the second, we obtain

ΔLS_i = b1 Δx_i1 + ... + bk Δx_ik + Δu_i (4)

where Δ denotes the change from t = 1 to t = 2. In effect, we have transformed the model in such a way that we are only dealing with relative rather than absolute values. This technique is called first-differencing. We can then proceed to estimate equation (4) via OLS. Essentially, the explanatory variables are no longer correlated with the error term, as the time-constant effect a_i has been differenced away, or subtracted out of the equation.
However, this is the case if and only if the strict exogeneity assumption holds. This assumption requires that the idiosyncratic error at each time period, u_it, is uncorrelated with the explanatory variables in every time period. If this holds, then OLS estimation will be unbiased. A more popular transformation technique in the literature is the time-demeaning method. Again, we begin from equation (3), and using (2) we rewrite it as

LS_it = b0 + b1 x_it1 + ... + bk x_itk + a_i + u_it. (5)

Then we perform the following transformation. First, we average (5) over time, giving

LSbar_i = b0 + b1 xbar_i1 + ... + bk xbar_ik + a_i + ubar_i (6)

where LSbar_i = (1/T) Σ_t LS_it, and so on. Next, we subtract (6) from (5) for every time period, giving

LS_it − LSbar_i = b1 (x_it1 − xbar_i1) + ... + bk (x_itk − xbar_ik) + (u_it − ubar_i) (7)

where LS_it − LSbar_i is the time-demeaned value of LS, and so on. Essentially, a_i has again disappeared from the equation. With these new, transformed values, we can then use standard OLS estimation. Conditions for unbiasedness remain the same as in the first-differencing method, including the strict exogeneity assumption. As mentioned earlier, the resulting estimators are called FE estimators. In our analysis, we decided to use FE over first-differencing. It is important to state why. The reasoning is as follows. When T = 2, the two estimations are fundamentally the same. When T > 2, both estimations are still unbiased (and in fact consistent), but they differ in terms of relative efficiency. The crucial point to note here is the degree of serial correlation between the idiosyncratic errors u_it. When there is no serial correlation, FE is more efficient than first-differencing. We have confidence that we have included sufficient controls for other factors in our happiness equation, so that whatever is left in the error term should be minimal and serially uncorrelated. In addition, FE is safer in the sense that if the strict exogeneity assumption is somehow violated, the bias tends to zero at the rate 1/T, whereas the bias in first-differencing does not depend on T. With multiple time periods, FE can exploit this fact and be better than first-differencing.
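The time-demeaning (within) transformation can be sketched with a small simulation of our own (made-up numbers, not BHPS data): subtracting each individual's time average removes the fixed effect before OLS is run.

```python
import numpy as np

# Toy sketch of the within/time-demeaning transformation: subtracting
# each individual's average over the T periods removes the fixed effect a_i.
rng = np.random.default_rng(1)
n, t = 2000, 4
true_beta = 0.5

a = rng.normal(size=(n, 1))              # fixed effect, constant over time
x = a + rng.normal(size=(n, t))          # regressor correlated with a
y = 2 + true_beta * x + a + rng.normal(size=(n, t))

# Time-demean: subtract each individual's mean over the T periods.
x_dm = x - x.mean(axis=1, keepdims=True)
y_dm = y - y.mean(axis=1, keepdims=True)

# Pooled OLS on the demeaned data is the FE (within) estimator.
fe_beta = (x_dm * y_dm).sum() / (x_dm ** 2).sum()
print(round(fe_beta, 2))
```

Because the demeaned regressor no longer contains a_i, the slope estimate recovers the true coefficient even though a_i is correlated with x.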
Another reason why FE is more popular is that it is easier to implement in standard statistical software packages, even more so when we have an unbalanced panel. With multiple time periods, the first-differencing transformation requires more computation and is less elegant overall than FE.

On the other hand, if a_i is uncorrelated with each explanatory variable in every time period, the transformation in FE will lead to inefficient estimators. We can use a random effects model to address this. We begin from (5), writing it as

LS_it = β_0 + β_1 x_it1 + ... + β_k x_itk + a_i + u_it, (8)

with an intercept explicitly included. This is so that, without loss of generality, we can make the assumption that a_i has zero mean. The other fundamental assumption is that a_i is uncorrelated with each explanatory variable in every time period, or

Cov(x_itj, a_i) = 0, t = 1, ..., T; j = 1, ..., k. (9)

With (9), the equation at (8) is called a random effects model. If the assumption at (9) holds, even simple cross-section OLS estimation will provide us with consistent results. With multiple time periods, pooled OLS can do even better and still achieve consistency. However, because a_i is in the composite error v_it from (2), the v_it are serially correlated across time. The correlation between two time periods will be

Corr(v_it, v_is) = σ_a² / (σ_a² + σ_u²), t ≠ s, (10)

where σ_a² = Var(a_i) and σ_u² = Var(u_it). This correlation can be quite substantial, and thus causes standard errors in pooled OLS estimation to be incorrect. To solve this problem, we can use the method of Generalized Least Squares (GLS). First, we transform the data in a way that eliminates serial correlation in the errors. We define a constant θ as

θ = 1 − [σ_u² / (σ_u² + T σ_a²)]^(1/2). (11)

Then, in a similar way to the FE transformation, we quasi-demean the data for each variable, or

LS_it − θ LSbar_i = β_0(1 − θ) + β_1(x_it1 − θ xbar_i1) + ... + β_k(x_itk − θ xbar_ik) + (v_it − θ vbar_i), (12)

where LS_it − θ LSbar_i is the quasi-demeaned value of LS, and so on. θ takes a value between zero and one. As mentioned earlier, estimation on these values produces RE (random effects) estimators. This transformation basically subtracts a fraction of the time average; that fraction, from (11), depends on σ_u², σ_a² and T.
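The quasi-demeaning in (11)-(12) can be sketched as follows. Note the simplification: here the variance components σ_a² and σ_u² are treated as known, whereas feasible GLS would estimate them from the data; all sample sizes and parameter values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
N, T = 400, 5
sigma_a, sigma_u = 1.0, 0.5            # variance components, assumed known here

a = rng.normal(scale=sigma_a, size=N)  # a_i UNcorrelated with x, as RE requires
x = rng.normal(size=(N, T))
ls = 2.0 + 1.0 * x + a[:, None] + rng.normal(scale=sigma_u, size=(N, T))

# theta = 1 - sqrt(sigma_u^2 / (sigma_u^2 + T*sigma_a^2)), as in (11)
theta = 1 - np.sqrt(sigma_u**2 / (sigma_u**2 + T * sigma_a**2))

# Quasi-demeaning, as in (12): subtract a fraction theta of the time average
x_q = x - theta * x.mean(axis=1, keepdims=True)
ls_q = ls - theta * ls.mean(axis=1, keepdims=True)

# Pooled OLS on the quasi-demeaned data; the intercept column is scaled
# by (1 - theta) to match the transformed equation
X = np.column_stack([np.full(N * T, 1 - theta), x_q.ravel()])
b0, b1 = np.linalg.lstsq(X, ls_q.ravel(), rcond=None)[0]
print(f"theta = {theta:.3f}, RE estimates: intercept {b0:.2f}, slope {b1:.2f}")
```

With these values θ is close to one, i.e. the transformation is close to full demeaning, because the time-constant component dominates the error variance.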
We can see here that FE and pooled OLS are in fact special cases of RE: in FE, θ = 1, and in pooled OLS, θ = 0. In a way, 1 − θ measures how much of the unobserved effect is kept in the error term. Now that the errors are serially uncorrelated, we can proceed by feasible GLS estimation, in which θ is estimated because σ_a² and σ_u² are unknown. This will give us consistent estimators with large N and fixed T, which suits our data set.

To summarize, if we believe that personality is an unobserved heterogeneous factor affecting LS, then pooled OLS will give us biased estimators. To address this issue, we can use a fixed effects or random effects model. In the former case, we prefer the FE transformation over first-differencing. The choice between FE and RE depends on whether this factor is also correlated with one of our explanatory variables. We think that personality may be correlated with income. If so, then we use the transformation in FE to completely remove it. If this factor is uncorrelated with all explanatory variables in all time periods, then we use the transformation in RE to partially remove it, since a complete removal would lead to inefficient estimates. In this scenario, RE is still better, or more efficient, than pooled OLS because of the serial correlation problem. An additional advantage that RE has over FE is that RE allows for time-constant explanatory variables in the regression equation. Remember that in FE every variable is time-demeaned, so variables like gender (which does not vary) as well as age (which varies very little after demeaning) will not provide us with useful information. In RE, these variables are only quasi-demeaned, so we can still include them in our estimation.

Estimation Results

We produce results for estimation by pooled OLS, FE and RE. Besides our key explanatory income variable, other control variables are included in the regression. They are gender, age, marital status,

Friday, October 25, 2019

Jamestown Essay -- essays research papers

Jamestown

In the sixteenth century, England was one of the most powerful countries in the world. England was also in dire need of money at this time. In an effort to alleviate the country's financial burdens, King Henry VIII decided to seize land owned by the Catholic Church. Henry then sold the already inhabited land to investors, and its residents were forced out. These people and their descendants would eventually become some of the fortune-seeking colonists who would settle America during England's attempt at imperialism.

In the early 1600s England needed money once again, and this time it decided to raise it by settling the new land to the west. Instead of actually funding these colonial expeditions, England would issue charters to joint-stock companies. These companies consisted of wealthy English investors who would each contribute money to finance the trips and would share in the riches if they succeeded, or lose their money if they failed. Most of the time the benefits of their investments would outweigh the risks. For England, this was a win-win situation. Since England did not pay for the voyages or the colonies themselves, England wouldn't lose money if they failed. If the companies succeeded, England was entitled to a percentage of the profits and became their ultimate authority.

England's first colony was established in 1607 by a joint-stock company, and was named Jamesto...

Wednesday, October 23, 2019

Which theory best explains the development of EU environmental policy?

The successful development of EU international environmental policy has been the subject of much recent study within various disciplines. One promising approach for cross-disciplinary research on EU environmental policy invokes the concept of the international regime. Regime theory might be expected to explain a great deal about the development of EU environmental policy in global environmental affairs. It is insightful to consider EU environmental policy as a regime, given that the most frequently cited regime definition is so broad as to certainly include the EU, where 'norms, rules and decision-making procedures in a given area of international relations' (Krasner, 1983, p. 2) are said to be in existence. This sort of theory would enable one to consider the connections between the institutions of the EU and the member states. It may explain the inter-state relationship that lies behind the formation and development of EU international environmental policy. The positions the EU projects in international affairs are evidently themselves the product of interest mediation and agreed bargaining directed by institutions. This paper will consider the work of both international relations (IR) and international law (IL) scholars to evaluate regime theory as an instrument for understanding EU environmental policy, using the ozone layer depletion case study as a specific example.

Main Body

International Regime Theory

Although international regimes were invoked much earlier by IL scholars as a means of giving an account of legal regulation in unregulated areas (Connelly and Smith 2002, p. 190), regime theory originally gained significance within the discipline of IR. Regime theory was developed to explain stability in the international system despite the absence or decline of a dominant power (Connelly and Smith 2002, p. 202). It is only in the 1990s that regime theory again became the focal point of legal scholars searching for ways to stimulate international cooperation (Connelly and Smith 2002, p. 10).
This requires unifying the disciplines of IR and IL, the relationship between them having been one of mutual neglect, as explained by Hurrell and Kingsbury:

Regime theorists have tended to neglect the particular status of legal rules, to downplay the links between specific sets of rules and the broader structure of the international legal system, and to underrate the complexity and variety of legal rules, processes, and procedures. On the other hand, theoretical accounts of international . . . law have often paid rather little explicit attention to the political bargaining processes that underpin the emergence of new norms of international . . . law, to the role of power and interest in inter-state negotiations, and to the range of political factors that explain whether states will or will not comply with rules. (1992, p. 12)

There is no absolute agreement on what precisely constitutes an international regime. Goldie, in one of his works in this area, described regimes as: (1) the acceptance, amongst a group of States, of a community of laws and of legal ideas; (2) the mutual respect and recognition accorded by certain States to the unilateral policies of others acting in substantial conformity with their own, enmeshing all the States concerned in a regime with respect to those policies; (3) a common loyalty, among a group of States, to the principle of abstention regarding a common resource. (1962, p. 698)

Thomas Gehring (1990) presents a more integrated account, in particular as it better addresses the role of IL in international regime theory. He identifies international regimes as the regulations, developed within the context of consultation among parties to the regime, governing a specific area of IR. Within this structure, IL is the search for unanimity and agreement on the priorities and plans for international action.
Once these are made clear, norms develop as to how to carry out these priorities and plans, resulting in accepted norms or "shared expectations" concerning the behaviour of states (Gehring, 1990, p. 37). Certainly, this progression from priority setting to gradual norm development takes time, but it is the regime structure that allows the process to take place at all. Thus, regimes create the building blocks for the development of norms and rules.

Development of EU Environmental Policy and Regime Theory

The influence of the EU within environmental affairs cannot be disregarded, as the environment in general has to a great extent become a matter of international concern. Of the many international organisations and specialised bodies dealing with environmental issues, the one most closely involved in such work is the European Union. Regime theory is the most commonly employed theoretical paradigm in the study of EU international environmental politics. The study of the EU focuses upon how the EU affects the prospects of regime-building and how it may open the path to international cooperation. By signing up to agreements on behalf of its member states, the EU increases the scope of a regime by increasing the obligations of states that might otherwise have adopted lower standards. The EU pulls states into commitments. Often, however, the 'convoy' analogy (Bretherton and Vogler 1997, p. 22) more precisely describes the process, whereby action is delayed by the slowest member of the convoy. This effect was seen during the ozone negotiations. Despite the attempts of Denmark and Germany to push things forward, the blocking tactics of France and the UK ensured that on many occasions the EU was 'condemned to immobility' (Jachtenfuchs, 1990, p. 265).
Yet, by coordinating the position of (currently) 27 member nations in environmental negotiations, the Commission reduces the complexity of negotiations and decreases the pressure upon international organisations to perform that function. Approaches informed by regime theory would also help to see the leadership role of the EU as an effort to initiate cooperation conditional on the involvement of other parties. Hence the announcement of a greenhouse gas reduction target as early as 1990 was planned as a first move in the 'nice, reciprocate, retaliate' strategy that Connelly and Smith (2002, p. 269) indicated is necessary for cooperation. Paterson (1996, p. 105) notes, for example, that "The announcement of the EU target in October 1990 was explicitly designed to influence the outcome of the Second World Climate Conference and to precipitate international negotiations".

Usually, however, IR perspectives tend to overlook the significance of intra-country dynamics in the creation of positions in international agreements. This factor severely restricts their applicability to EU decision-making. In spite of that, in the ozone case it could be argued that a combination of 'domestic' and international pressures best explains the role of the EU in creating and supporting the regime in question, with the EU treated as one unit. Four relationships matter here: between member states and the EU; between the EU institutions in their internal power struggles; among the directorates; and finally between the various directorates and interest groups (Matlary, 1997, p. 146). With EU environmental policy one clearly has a regime within a regime. Models of multi-level governance used to explain policy development within Europe may be extended to include the international dimension.
Viewed from this perspective, EU international environmental negotiations become a site of debate between transnational networks of government environment departments and regional economic institutions working together with NGOs and sympathetic international organisations (such as UNEP), set against networks including trade and industry departments, business lobbies and international organisations which promote the interests of industry, such as UNIDO (the United Nations Industrial Development Organisation) (Connelly and Smith 2002, p. 36). These interconnected groups operate horizontally and vertically, across national, regional and international levels, drawing state and non-state players alike into strategic alliances established on particular issues.

Cooperation in Environmental Problems

Collaboration can be represented as a Prisoner's Dilemma-type game, wherein each state follows a dominant strategy that leads to suboptimal payoffs for both. Regime theory presents the EU primarily as a tool. The EU deliberately seeks to change the system, designs strategies to do so, and attempts to implement those strategies. To assess the development of EU environmental policy in environmental cooperation, then, two potential roles of the EU must be examined: the EU as tool and the EU as independent advocate. The EU helps states overcome the complexity of issues to arrive at a coordination equilibrium. States usually remain concerned that others will exploit them, and the EU is needed to increase confidence in compliance. As an independent actor, the EU is expected to play a significant role in environmental cooperation. The increased autonomy of the EU on some environmental issues, and the increased need of states to rely on it for collaboration and coordination, allow organisations with unified leadership and significant resources to have independent effects.
Ozone: The First Global Challenge

The development of the regime intended to limit the release into the atmosphere of ozone-depleting chemicals is in many ways a case of EU-US relations. The key turning points in the negotiating process, from a framework convention at Vienna through to legally binding protocol commitments at Montreal, London and Copenhagen, reflect changes in the negotiating positions of the EU and the US (Connelly and Smith 2002, p. 230). The development of ozone policies can be traced back to 1977. The 'can ban' established in the US put it in a position to push for a global ban on CFCs. Negotiations moved very slowly at first against strong European opposition to cuts in CFCs, despite a Council resolution in March 1980 restricting the use of CFCs in response to American pressure and increasing public concern over the ozone problem. The supporters of controls (the US, Canada, the Nordic states, Austria and Switzerland) met in 1984 to create the 'Toronto group'. The EU initially indicated that no controls were necessary. However, it eventually admitted that a production capacity cap might be required and presented a draft protocol that included its 1980 measures. The proposed 30 per cent reduction was achievable without difficulty because use was already declining (Connelly and Smith 2002, p. 200) and in essence served to fix the status quo (Jachtenfuchs, 1990). The deadlock that resulted between the EU and the Toronto group ensured that only a framework convention could be agreed at Vienna. This promised cooperation in research and monitoring and the promotion of information-sharing. At the March 1986 meeting of the EU Council of Ministers, the EU took a position of a 20 per cent CFC production cut. This was partly impelled by the threat of unilateral action by the US to impose trade sanctions against the EU (Connelly and Smith 2002, p. 261).
The Montreal Protocol, agreed in September 1987, required cuts of 50 per cent from 1986 levels of production and use of the five principal CFCs by 1999. The figure of a 50 per cent cut was a compromise between the EU's proposed freeze and the US's proposal for a 95 per cent cut. The Protocol contained a grace period for implementation by less developed countries, restrictive measures on trade with non-members and an ozone fund for technology transfer. This latter element of the agreement is especially important for the EU for, as Jachtenfuchs (1990, p. 272) states, 'The success of the EU's environmental diplomacy in this important field will to a large extent depend on how far it is able to provide technical and financial assistance to developing countries'. As a regional economic integration organisation, the EU was granted permission to meet consumption limits collectively rather than country by country. This was intended to allow transfers of national CFC production quotas among EU member states so that commercial producers in Europe could improve production processes cost-effectively. Despite this concession, some European parties to the Protocol process believed that they had been 'bullied' into an agreement favourable to US industry, dubbing the Montreal agreement 'The DuPont Protocol' (Parsons, 1993, p. 61). In spite of that, on 14 October 1988 the Council adopted legislation transforming every aspect of the Protocol into EU law. The legislation came into force immediately in order to emphasise the importance of the issue and to prevent trade distortions which might emerge from non-simultaneous application of the new rules (Connelly and Smith 2002, p. 269). At the March 1989 meeting of the EU Environment Council, the UK, after a long delay, joined the rest of the EU in agreeing to phase out all CFCs 'as soon as possible but not later than 2000' (Parsons, 1993, p.
47). At the same time France submitted to external pressure and dropped its uncompromising position. The London meeting of the parties in June 1990 was consequently able to agree that all fully halogenated CFCs would be phased out by the year 2000, with interim reductions of 50 per cent by 1995 and 85 per cent by 1997. Some member states have gone beyond the restrictions stated in the international agreements, however. Germany, for instance, has passed legislation requiring that CFCs be removed by 1993, halons by 1996, HCFC 22 by 2000, and CT (carbon tetrachloride) and MC (methyl chloroform) by 1992 (Parsons, 1993). On the other hand, behind the diplomacy of the negotiations between states, the case is fundamentally one of the competing positions of the chemical companies, chiefly ICI (in the UK), Du Pont (in the US) and Atochem (in France). Industry representatives served formally on European national delegations throughout the process. EU industrialists 'believed that American companies had endorsed CFC controls in order to enter the profitable EU export markets with substitute products that they had secretly developed' (Benedick, 1991, p. 23). The EU followed the industry line and reflected the views of France, Italy and the United Kingdom in its policy. The significance of these commercial considerations is easily seen in the persistent efforts to define cuts in HFCs and HCFCs (perceived to be the best alternatives to CFCs). The EU found it difficult to come to an agreed position on reducing the production and consumption of these chemicals because substitute chemicals were not yet easily available. The indecision could also be explained by the fact that some European producers wanted to establish export markets for HCFCs in the less developed 'south'.
The differing commercial interests regarding the ozone issue illustrate the difficulty the EU faced in its effort to formulate common policy positions in international environmental negotiations. The case also demonstrates that ozone depletion was one of the first global environmental issues to generate a coordinated and concerted international response. Despite remaining weaknesses in the ozone regime, it is regarded as one of the few tangible successes of EU international environmental policy, taking into account that governments took action before certain proof of environmental disaster had occurred. The EU has explicit rules, agreed upon by governments, and provides a framework for the facilitation of ongoing negotiations for the development of rules of law. Regime theory regards EU international environmental policy as a means by which states solve collective environmental problems. Regime theory, like most current studies of cooperation in international politics, treats the EU as a means to an end: as an intermediate variable between states' interests and international cooperation. Yet the EU is also an independent actor which plays an independent role in changing states' interests, and especially in promoting cooperation.

Conclusion

The consideration in this paper of the ozone depletion regime reveals that there is prospect for development in the international legal order. The picture that emerges of EU international environmental policy and politics is complex and multidisciplinary. It should be noted that no single predominant theoretical perspective in international environmental politics is adequate to explain this rich complexity. Given the complex reality of environmental cooperation between states and the context within which it develops, explaining policy processes and developments by a single theoretical perspective is an uncertain prospect.
Still, a better understanding of the development of EU environmental policy in these processes may be fostered by relying on regime theory.

Tuesday, October 22, 2019

Essay about History Notes for EXAM 1

Essay about History Notes for EXAM 1

Tuesday 2/9: 1865-1877

What is to be the legal, constitutional and political position of an ex-slave?

SOCIAL
-property
-labor (replaced with Jim Crow)
-race relations

3 amendments: Thirteenth, Fourteenth, Fifteenth - the vote, without having agitated for it for a long time
Women (1848-1919) agitated for the right to vote.
Groups: Irish, Blacks
WASP - White Anglo-Saxon Protestant (if you're not all of them, you're not "white")
Fear of the "battle of the cradle" - fear of a rise in the Black population
There was a promise of equality, without the reality
Equality of races dates back to the mid-20th century. Up until then, people ranked races.
1. Equality of opportunity
2. Equality before the law
3. Equality before God

Thursday 2/12: Blacks in the late 19th century

During the time of Reconstruction, four civil rights acts were passed
The last Union troops were withdrawn from the South after 1877 (this was the end of the Reconstruction era)
Slaves fled their plantations throughout the South; this was called self-separation
Blacks voted (Republican) - in Congress until 1901
Whites voted Democratic
Sandwiching - outvoting the minority; pre-ordaining what the vote will be
Paternalism: the idea that ex-slaves were to be taken care of and controlled by upper-class whites; class is the most important factor
They were surprised by the smoothness of relations between blacks and whites (the whites being northerners)
1890s - things fall apart (blacks in the South)
Upper class of whites lost political power
Rise of the lower-class whites: late 19th century. They got tired of being in the position they were in
Originally whites and blacks were in union
Upper class used scare tactics to separate the poor whites and blacks by using racism.
Segregation laws started to be made after the 1890s
Northern whites were no longer interested
1895 - more white people in the South were being lynched than blacks in the South (shortly after, that changed, as lynching became a form of social control)
1893 - a bad depression with high unemployment rates took place in the South, which, in turn, created