Sometimes I have to put text on a path

Saturday, June 11, 2011

help, Blogger post editor


blogger/blogspot post editor
help in English
http://www.google.com/support/blogger/bin/answer.py?hl=fr&answer=156829
help in French
http://www.google.com/support/blogger/bin/answer.py?hlrm=fr&answer=156829

try the new HTML5 AJAX views on your own blogger blog

To try the new HTML5 AJAX views on your own blogger blog, simply add “/view” to the end of the blog URL—for example:
http://ex-ample.blogspot.com/view

---------------
http://buzz.blogger.com/2011/03/fresh-new-perspectives-for-your-blog.html

'Social brain' and Robin Ian MacDonald Dunbar: Cambridge -> Oxford -> London -> Liverpool -> Oxford since 2007

Robin Ian MacDonald Dunbar (born June 28, 1947, east Africa) is a British anthropologist and evolutionary biologist, specialising in primate behaviour [1,2].

He is best known for formulating Dunbar's number, roughly 150, a measurement of the 'cognitive limit to the number of individuals with whom any one person can maintain stable relationships'.[3]

Dunbar, son of an engineer, received his early education in Northamptonshire, then attended Magdalen College, Oxford, where his teachers included Nico Tinbergen. He spent two years as a freelance science writer.[2]

Dunbar's academic and research career includes the University of Bristol,[4] University of Cambridge from 1977 until 1982, and University College London from 1987 until 1994. In 1994, Dunbar became Professor of Evolutionary Psychology at University of Liverpool, but he left Liverpool in 2007 to take up the post of Director of the Institute of Cognitive and Evolutionary Anthropology, University of Oxford.[5][1]

Professor Dunbar is a director of the British Academy Centenary Research Project (BACRP)


References

1. ^ a b 'British Academy Fellows Archive'. British Academy. http://www.britac.ac.uk/fellowship/directory/archive.asp?fellowsID=1242. Retrieved on 2007-12-02.
2. ^ a b c 'Professor Robin Dunbar FBA'. British Humanist Association. http://www.humanism.org.uk/about/people/distinguished-supporters/Professor-Robin-Dunbar-FBA. Retrieved on 2007-12-02.
3. ^ Malcolm Gladwell (June 17, 2007). 'Dunbar's Number'. scottweisbrod. http://www.scottweisbrod.com/index.php/?p=92. Retrieved on 2007-12-02.
4. ^ 'Dominance and reproductive success among female gelada baboons'. March 24, 1977. http://www.nature.com/nature/journal/v266/n5600/abs/266351a0.html. Retrieved on 2007-12-03.
5. ^ 'Prof. Robin Dunbar FBA'. liv.ac.uk. http://www.liv.ac.uk/evolpsyc/dunbar.html. Retrieved on 2007-12-02.
6. ^ 'Faculty of Science'. liv.ac.uk. http://209.85.173.104/search?q=cache:0Lguj1bOUlUJ:www.liv.ac.uk/commsec/pdfs/emeritus_professors,_chairs_and_honorary_graduates.pdf+%22Robin+Ian+MacDonald+Dunbar%22&hl=en&ct=clnk&cd=8&gl=us. Retrieved on 2007-12-02.

Selected publications

* Dunbar. 1984. Reproductive Decisions: An Economic Analysis of Gelada Baboon Social Strategies. Princeton University Press ISBN 0691083606
* Dunbar. 1988. Primate Social Systems. Chapman Hall and Yale University Press ISBN 0801420873
* Dunbar. 1996. The Trouble with Science. Harvard University Press. ISBN 0674910192
* Dunbar (ed.). 1995. Human Reproductive Decisions. Macmillan ISBN 0333620518
* Dunbar. 1997. Grooming, Gossip and the Evolution of Language'. Harvard University Press. ISBN 0674363345
* Runciman, Maynard Smith, & Dunbar (eds.). 1997. Evolution of Culture and Language in Primates and Humans. Oxford University Press.
* Dunbar, Knight, & Power (eds.). 1999. The Evolution of Culture. Edinburgh University Press ISBN 0813527309
* Dunbar & Barrett. 2000. Cousins. BBC Worldwide: London ISBN 0789471558
* Cowlishaw & Dunbar. 2000. Primate Conservation Biology. University of Chicago Press ISBN 0226116360
* Barrett, Dunbar & Lycett. 2002. Human Evolutionary Psychology. London: Palgrave ISBN 069109621X
* Dunbar, Barrett & Lycett. 2005. Evolutionary Psychology, a Beginner's Guide. Oxford: One World Books ISBN 1851683569
* Dunbar. 2004. The Human Story. London: Faber and Faber ISBN 0571191339

External links

* Research profile at the Evolutionary Psychology and Behavioural Ecology Research Group, University of Liverpool.
* Publications list for the Evolutionary Psychology and Behavioural Ecology Research Group.
* 'The Social Brain Hypothesis' by Dunbar (1998).
* The Human Behaviour and Evolution Society
* Video Lecture

--------------

Dunbar's number

Dunbar's number is a theoretical cognitive limit to the number of people with whom one can maintain stable social relationships. These are relationships in which an individual knows who each person is, and how each person relates to every other person.[1] Proponents assert that numbers larger than this generally require more restricted rules, laws, and enforced norms to maintain a stable, cohesive group. No precise value has been proposed for Dunbar's number, but a commonly cited approximation is 150.

Dunbar's number was first proposed by anthropologist Dunbar, who theorized that

'this limit is a direct function of relative neocortex size, and that this in turn limits group size ... the limit imposed by neocortical processing capacity is simply on the number of individuals with whom a stable inter-personal relationship can be maintained.'

On the periphery, the number also includes past colleagues such as high school friends with whom a person would want to reacquaint themselves if they met again.[2]

Research background

Primatologists have noted that, due to their highly social nature, non-human primates have to maintain personal contact with the other members of their social group, usually through grooming. Such social groups function as protective cliques within the physical groups in which the primates live. The number of social group members a primate can track appears to be limited by the volume of the neocortex region of their brain. This suggests that there is a species-specific index of the social group size, computable from the species' mean neocortex volume.

In a 1992 article, Dunbar used the correlation observed for non-human primates to predict a social group size for humans. Using a regression equation on data for 38 primate genera, Dunbar predicted a human 'mean group size' of 148 (casually rounded to 150), a result he considered exploratory due to the large error measure (a 95% confidence interval of 100 to 230).
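Dunbar's extrapolation step can be sketched as a log-log regression. A minimal Python illustration follows; the primate data points and the human neocortex ratio are invented for illustration and are not Dunbar's actual 38-genus dataset:

```python
import numpy as np

# Sketch of Dunbar's extrapolation: regress log(group size) on
# log(neocortex ratio) across primates, then extrapolate to humans.
# These data points and the human ratio are INVENTED for illustration;
# they are not Dunbar's 38-genus dataset.
neocortex_ratio = np.array([1.2, 1.6, 2.0, 2.4, 2.8, 3.2])
group_size = np.array([5.0, 10.0, 18.0, 30.0, 48.0, 75.0])

# Fit log(N) = a * log(CR) + b
a, b = np.polyfit(np.log(neocortex_ratio), np.log(group_size), 1)

human_ratio = 4.1  # assumed human neocortex ratio (illustrative)
predicted = float(np.exp(a * np.log(human_ratio) + b))
print(f"slope = {a:.2f}, predicted human group size ~ {predicted:.0f}")
```

With plausible made-up inputs the extrapolated value lands in the low hundreds, the same order as Dunbar's 148; the wide confidence interval he reported reflects the scatter of real data around such a fit.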

Dunbar then compared this prediction with observable group sizes for humans. Beginning with the assumption that the current mean size of the human neocortex had developed about 250,000 years ago, i.e. during the Pleistocene, Dunbar searched the anthropological and ethnographical literature for census-like group size information for various hunter-gatherer societies, the closest existing approximations to how anthropology reconstructs Pleistocene societies. Dunbar noted that the groups fell into three categories (small, medium and large, equivalent to bands, cultural lineage groups and tribes) with respective size ranges of 30-50, 100-200 and 500-2500 members.

Dunbar's surveys of village and tribe sizes also appeared to approximate this predicted value, including 150 as the estimated size of a neolithic farming village; 150 as the splitting point of Hutterite settlements; 200 as the upper bound on the number of academics in a discipline's sub-specialization; 150 as the basic unit size of professional armies in Roman antiquity and in modern times since the 16th century; and notions of appropriate company size.

Dunbar has argued that 150 would be the mean group size only for communities with a very high incentive to remain together. For a group of this size to remain cohesive, Dunbar speculated that as much as 42% of the group's time would have to be devoted to social grooming. Correspondingly, only groups under intense survival pressure[citation needed], such as subsistence villages, nomadic tribes, and historical military groupings, have, on average, achieved the 150-member mark. Moreover, Dunbar noted that such groups are almost always physically close: '... we might expect the upper limit on group size to depend on the degree of social dispersal. In dispersed societies, individuals will meet less often and will thus be less familiar with each other, so group sizes should be smaller in consequence.' Thus, the 150-member group would occur only out of absolute necessity, i.e. due to intense environmental and economic pressures.

Dunbar, author of Grooming, Gossip, and the Evolution of Language, proposes furthermore that language may have arisen as a 'cheap' means of social grooming, allowing early humans to efficiently maintain social cohesion. Without language, Dunbar speculates, humans would have to expend nearly half their time on social grooming, which would have made productive, cooperative effort nearly impossible. Language may have allowed societies to remain cohesive, while reducing the need for physical and social intimacy.[3]

Dunbar's number has since become of interest in anthropology, evolutionary psychology,[4] statistics, and business management. For example, developers of social software are interested in it, as they need to know the size of social networks their software needs to take into account.

Alternative numbers

Dunbar's number is not derived from systematic observation of the number of relationships that people living in the contemporary world have. As noted above, it comes from extrapolation from nonhuman primates and from inspection of selected documents showing network sizes in selected pre-industrial villages and settlements in less developed countries.

Anthropologist H. Russell Bernard and Peter Killworth and associates have done a variety of field studies in the United States that came up with an estimated mean number of ties - 290 - that is roughly double Dunbar's estimate. The Bernard-Killworth median of 231 is lower, due to upward straggle in the distribution: this is still appreciably larger than Dunbar's estimate. The Bernard-Killworth estimate of the maximum likelihood of the size of a person's social network is based on a number of field studies using different methods in various populations. It is not an average of study averages but a repeated finding.[5][6] Nevertheless, the Bernard-Killworth number has not been popularized as widely as Dunbar's.

Popularization

* Dunbar's number has been most popularized by Malcolm Gladwell's The Tipping Point, where it plays a central role in Gladwell's arguments about the dynamics of social groups.
* In a 1985 paper titled 'Psychology, Ideology, Utopia, & the Commons,' psychologist Dennis Fox proposed the same concept as it is applied to anarchy, politics, and the tragedy of the commons.
* Neo-Tribalists have also used it to support their critiques of modern society.[citation needed]
* Recently, the number has been used in the study of Internet communities, especially MMORPGs such as Ultima Online, and social networking websites such as Facebook[7] and MySpace.[8]
* The Swedish tax authority planned to reorganize its functions in 2007. The number 150 was set as the maximum number of people in an office, referring to Dunbar's research.[9]

References

1. ^ Gladwell, Malcolm (2000). The Tipping Point - How Little Things Make a Big Difference. Little, Brown and Company. pp. 177-181,185-186. ISBN 0-316-34662-4.
2. ^ Carl Bialik (2007-11-16). 'Sorry, You May Have Gone Over Your Limit Of Network Friends'. The Wall Street Journal Online. http://online.wsj.com/article/SB119518271549595364.html?mod=googlenews_wsj. Retrieved on 2007-12-02.
3. ^ Dunbar, Robin (1998). Grooming, Gossip, and the Evolution of Language. Harvard University Press. ISBN 0674363361. http://www.hup.harvard.edu/catalog/DUNGRO.html.
4. ^ Nuno Themudo (2007-03-23). 'Virtual Resistance: Internet-mediated Networks (Dotcauses) and Collective Action Against Neoliberalism' (pg. 36). University of Pittsburgh, University Center for International Studies. http://www.ucis.pitt.edu/clas/events/gap_conference/VirtualResistanceInternetMediatedNetworks-Themudo.pdf. Retrieved on 2007-12-02.
5. ^ McCarty, C., Killworth, P.D., Bernard, H.R., Johnsen, E. and Shelley, G. 'Comparing Two Methods for Estimating Network Size', Human Organization 60:28-39 (2000).
6. ^ Bernard, H. Russell, Gene Ann Shelley and Peter Killworth. 1987. 'How Much of a Network does the GSS and RSW Dredge Up?' Social Networks 9: 49-63. H. Russell Bernard. 2006. 'Honoring Peter Killworth's contribution to social network theory.' Paper presented to the University of Southampton, September. http://nersp.osg.ufl.edu/~ufruss/
7. ^ http://www.economist.com/science/displaystory.cfm?story_id=13176775
8. ^ One example is Christopher Allen, 'Dunbar, Altruistic Punishment, and Meta-Moderation'.
9. ^ The Local – Sweden's news in English, July 23, 2007. 'Swedish tax collectors organized by apes'.

Further reading

* Dunbar, R.I.M. (1992) Neocortex size as a constraint on group size in primates, Journal of Human Evolution 22: 469-493.
* Dunbar, R.I.M. (1993), Coevolution of neocortical size, group size and language in humans, Behavioral and Brain Sciences 16 (4): 681-735.
* Edney, J. J. (1981a). Paradoxes on the commons: Scarcity and the problem of equality. Journal of Community Psychology, 9, 3-34.
* Sawaguchi, T., & Kudo, H. (1990), Neocortical development and social structure in primates, Primates 31: 283-290.

* Wong, David (2005) Inside the Monkeysphere, a semi-satirical introduction to Dunbar's number for the average internet user.

External links

* A pre-publication version of Coevolution of neocortical size, group size and language in humans. (See also Bibliography section there.)
* University of Liverpool Research Intelligence No. 17, August 2003 - 'The ultimate brain teaser' - an article on Dunbar's research.
* Some speculations about a correlation between the monkeysphere and Guild size in online multiplayer role playing games.
* Mospos blog entry - Communities of practice and Dunbar's number
* Life With Alacrity blog entry - Applying Dunbar's number to online gaming, social software, collaboration, trust, security, privacy, and internet tools, by Christopher Allen.

Ex-ample: e-tools, MATLAB and PubMed; retrieve information from various Web databases and read it into a MATLAB structure.



Bioinformatics Toolbox includes several get functions that retrieve information from various Web databases. Additionally, with some basic MATLAB programming skills, you can create your own get function to retrieve information from a specific Web database.
The following procedure illustrates how to create a function to retrieve information from the NCBI PubMed database and read the information into a MATLAB structure. The NCBI PubMed database contains biomedical literature citations and abstracts.

Creating the getpubmed Function

The following procedure shows you how to create a function named getpubmed using the MATLAB Editor. This function will retrieve citation and abstract information from PubMed literature searches and write the data to a MATLAB structure.
Specifically, this function will take one or more search terms, submit them to the PubMed database for a search, then return a MATLAB structure or structure array, with each structure containing information for an article found by the search. The returned information will include a PubMed identifier, publication date, title, abstract, authors, and citation.
The function will also include property name/property value pairs that let the user of the function limit the search by publication date and limit the number of records returned.
  1. From MATLAB, open the MATLAB Editor by selecting File > New > M-File.
  2. Define the getpubmed function, its input arguments, and return values by typing:
    function pmstruct = getpubmed(searchterm,varargin)
    % GETPUBMED Search PubMed database & write results to MATLAB structure
  3. Add code to do some basic error checking for the required input SEARCHTERM.
    % Error checking for required input SEARCHTERM
    if(nargin<1)
        error('GETPUBMED:NotEnoughInputArguments',...
              'SEARCHTERM is missing.');
    end
  4. Create variables for the two property name/property value pairs, and set their default values.
    % Set default settings for property name/value pairs,
    % 'NUMBEROFRECORDS' and 'DATEOFPUBLICATION'
    maxnum = 50; % NUMBEROFRECORDS default is 50
    pubdate = ''; % DATEOFPUBLICATION default is an empty string
  5. Add code to parse the two property name/property value pairs if provided as input.
    % Parsing the property name/value pairs
    num_argin = numel(varargin);
    for n = 1:2:num_argin
        arg = varargin{n};
        switch lower(arg)
    
            % If NUMBEROFRECORDS is passed, set MAXNUM
            case 'numberofrecords'
                maxnum = varargin{n+1};
    
            % If DATEOFPUBLICATION is passed, set PUBDATE
            case 'dateofpublication'
                pubdate = varargin{n+1};          
    
        end
    end
  6. You access the PubMed database through a search URL, which submits a search term and options, and then returns the search results in a specified format. This search URL comprises a base URL and defined parameters. Create a variable containing the base URL of the PubMed database on the NCBI Web site.
    % Create base URL for PubMed db site
    baseSearchURL = 'http://www.ncbi.nlm.nih.gov/sites/entrez?cmd=search';
  7. Create variables to contain five defined parameters that the getpubmed function will use, namely, db (database), term (search term), report (report type, such as MEDLINE®), format (format type, such as text), and dispmax (maximum number of records to display).
    % Set db parameter to pubmed
    dbOpt = '&db=pubmed';
    
    % Set term parameter to SEARCHTERM and PUBDATE
    % (Default PUBDATE is '')
    termOpt = ['&term=',searchterm,'+AND+',pubdate];
    
    % Set report parameter to medline
    reportOpt = '&report=medline';
    
    % Set format parameter to text
    formatOpt = '&format=text';
    
    % Set dispmax to MAXNUM
    % (Default MAXNUM is 50)
    maxOpt = ['&dispmax=',num2str(maxnum)];
  8. Create a variable containing the search URL from the variables created in the previous steps.
    % Create search URL
    searchURL = [baseSearchURL,dbOpt,termOpt,reportOpt,formatOpt,maxOpt];
  9. Use the urlread function to submit the search URL, retrieve the search results, and return the results (as text in the MEDLINE report type) in medlineText, a character array.
    medlineText = urlread(searchURL);
  10. Use the MATLAB regexp function and regular expressions to parse and extract the information in medlineText into hits, a cell array, where each cell contains the MEDLINE-formatted text for one article. The first input is the character array to search, the second input is a search expression, which tells the regexp function to find all records that start with PMID-, while the third input, 'match', tells the regexp function to return the actual records rather than the positions of the records.
    hits = regexp(medlineText,'PMID-.*?(?=PMID|$)','match');
  11. Instantiate the pmstruct structure returned by getpubmed to contain six fields.
    pmstruct = struct('PubMedID','','PublicationDate','','Title','',...
                 'Abstract','','Authors','','Citation','');
  12. Use the MATLAB regexp function and regular expressions to loop through each article in hits and extract the PubMed ID, publication date, title, abstract, authors, and citation. Place this information in the pmstruct structure array.
    for n = 1:numel(hits)
        pmstruct(n).PubMedID = regexp(hits{n},'(?<=PMID- ).*?(?=\n)','match', 'once');
        pmstruct(n).PublicationDate = regexp(hits{n},'(?<=DP  - ).*?(?=\n)','match', 'once');
        pmstruct(n).Title = regexp(hits{n},'(?<=TI  - ).*?(?=PG  -|AB  -)','match', 'once');
        pmstruct(n).Abstract = regexp(hits{n},'(?<=AB  - ).*?(?=AD  -)','match', 'once');
        pmstruct(n).Authors = regexp(hits{n},'(?<=AU  - ).*?(?=\n)','match');
        pmstruct(n).Citation = regexp(hits{n},'(?<=SO  - ).*?(?=\n)','match', 'once');
    end
  13. Select File > Save As.
    When you are done, your M-file should look similar to the getpubmed.m file included with the Bioinformatics Toolbox software. The sample getpubmed.m file, including help, is located at:
    matlabroot\toolbox\bioinfo\biodemos\getpubmed.m
Note The notation matlabroot is the MATLAB root directory, which is the directory where the MATLAB software is installed on your system.
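The parsing core of the function above can also be sketched in Python; a hypothetical stand-alone illustration with an invented two-record MEDLINE snippet (the field layout is assumed from the regular expressions above, not taken from real PubMed output):

```python
import re

# Sketch of the parsing step above, in Python: split MEDLINE-formatted text
# into records starting with "PMID-", then pull out individual fields.
# The sample text below is invented for illustration.
medline_text = """PMID- 12345678
DP  - 2010 Jan
TI  - An invented article title.
AB  - An invented abstract.
AU  - Doe J
SO  - J Invented Res. 2010.
PMID- 87654321
DP  - 2011 Feb
TI  - Another invented title.
AB  - Another invented abstract.
AU  - Roe R
SO  - J Invented Res. 2011.
"""

# Same idea as the MATLAB regexp call: each record runs from one PMID- to
# the next PMID- (or to the end of the text).
hits = re.findall(r"PMID-.*?(?=PMID-|\Z)", medline_text, re.S)

records = []
for hit in hits:
    records.append({
        "PubMedID": re.search(r"(?<=PMID- ).*", hit).group(),
        "PublicationDate": re.search(r"(?<=DP  - ).*", hit).group(),
        "Title": re.search(r"(?<=TI  - ).*", hit).group(),
    })

print(len(records), records[0]["PubMedID"])
```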

insurance and Web 2.0, e-insurance, the financial services industry, and the "social brain"




In 2008, global insurance premiums grew by 3.4% to reach $4.3 trillion. The financial crisis (underway since September 2008; a highly improbable event?) has shown that the insurance sector is sufficiently capitalised: the majority of insurance companies had enough capital to absorb the impact, and only a very small number turned to government for support.
Modeling and analysis of financial markets and risks are often based on the Gaussian distribution, but 50 years ago Benoît Mandelbrot discovered that changes in prices do not follow this distribution: price changes are better modeled by Lévy alpha-stable distributions.

An increasing variety of outcomes have been identified as having heavy-tailed distributions, including income distributions, financial returns, insurance payouts, reference links on the web, and so on. A particular subclass of heavy-tailed distributions is the power laws.
As an example of a power law, the scale of price changes (volatility) depends on the length of the time interval raised to a power slightly greater than 1/2.

Power-law tail behavior and the summation scheme of Lévy-stable (alpha-stable) distributions are the basis for their frequent use as models when tails fatter than a Gaussian distribution are observed. However, recent studies suggest that financial asset returns exhibit tail exponents well above the Lévy-stable regime (0 < alpha <= 2). A paper (http://ideas.repec.org/p/wpa/wuwpem/0305003.htm) illustrates that widely used tail index estimates (log-log linear regression and Hill) can give exponents well above the asymptotic limit for alpha close to 2, resulting in overestimation of the tail exponent in finite samples. The reported value of the tail exponent alpha around 3 may very well indicate a Lévy-stable distribution with alpha around 1.8.
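The Hill estimator mentioned above can be sketched as follows; a minimal illustration on synthetic Pareto data with a known tail exponent (the sample size and the choice of k are arbitrary):

```python
import math
import random

# Sketch of the Hill estimator for the tail exponent alpha.
# Sample data: Pareto-distributed values with true alpha = 3 (illustrative).
random.seed(42)
alpha_true = 3.0
data = [(1.0 - random.random()) ** (-1.0 / alpha_true) for _ in range(100000)]

def hill_estimator(sample, k):
    """Estimate the tail index from the k largest order statistics."""
    xs = sorted(sample, reverse=True)[: k + 1]
    threshold = xs[k]  # the (k+1)-th largest value
    logs = [math.log(x / threshold) for x in xs[:k]]
    return k / sum(logs)  # Hill estimate of alpha

print(hill_estimator(data, 1000))  # should come out near the true alpha = 3
```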

One of the most important subclasses of heavy-tailed distributions is the power laws, meaning that the probability density function is a power of the variable. One of the most important properties of power laws is their scale invariance. The universality of power laws (with a particular scaling exponent) has its origin in the dynamical processes (self-organized systems) that generate the power-law relation. Risk is not only a tail distribution but also a consequence of the "social brain" (e.g. Richardson's law for the severity of violent social conflicts).
For info on the "social brain", see the work of Robin I.M. Dunbar:
http://ex-ample.blogspot.com/2011/06/robin-ian-macdonald-dunbar-cambridge.html




The terms long-range dependent, self-similar and heavy-tailed cover a range of tools from different disciplines that can be used in the important science of determining the probability of rare events, which is the basis of the insurance industry.

I think that e-insurance is not only online insurance. By integrating studies (econometrics, statistics, simulation on large corpora, social aspects...) and sharing encrypted media (cloud computing; see: http://ex-ample.blogspot.com/2011/06/example-exemple-boxnet-html5-sharing.html), insurance companies can leverage Web 2.0 technology to evaluate financial information online and deliver insurance products to the people, and to the networks of social media platforms, who need protection against risks.

Note: even if statistics are complex (as the quip attributed to Churchill goes: "There are lies, damn lies - and statistics."), this problem is in fact simple. I will try to explain this behavior with an example: if we consider the sizes of files transferred from a web server, the distribution is heavy-tailed; that is, a very large number of small files are transferred but, crucially, the "small" number of very large files transferred remains a major component of the volume downloaded.
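The file-size example can be simulated directly; a rough sketch with Pareto-distributed sizes (the tail exponent 1.2 is an illustrative assumption, not measured data):

```python
import random

# Rough simulation of the file-size example: Pareto-distributed file sizes,
# where a few huge files dominate the total volume downloaded.
random.seed(0)
alpha = 1.2  # tail exponent close to 1 => very heavy tail (illustrative)
sizes = [(1.0 - random.random()) ** (-1.0 / alpha) for _ in range(100000)]

sizes.sort(reverse=True)
total = sum(sizes)
top_1pct = sum(sizes[:1000])  # the 1% largest files
share = top_1pct / total

# Many small files, but the few very large ones carry much of the volume.
print(f"top 1% of files carry {share:.0%} of the total volume")
```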

how to get started with AdSense on your Blogger/blogspot blog?







to understand the meaning of these parameters, see:

Google AdSense: how to earn money with AdSense. Tips and tricks.


Google AdSense is one way to earn (very little) money on the internet. It is probably the easiest to set up, especially via Blogger. But don't dream: it seems to me it is not the most obvious way to earn a decent income. The principle of Google AdSense is adaptive advertising on your pages (the ads are matched to the theme of the page). I rather like this system for my blogs/sites that cover very different sets of topics.
I set up Google AdSense mainly to see how this system, which pervades the internet, actually works.

Advertising via Google AdSense relies on a small piece of JavaScript code that contains two things:
1) values
google_ad_client = "pub-0251820679627741";
google_feedback = "on";
google_max_num_ads = "4";
google_ad_width = 336;
google_ad_height = 280;
google_ad_format = "336x280_as";
google_image_size = "336x280";
google_ad_type = "text,flash,html";
google_color_bg = 'ffffff';
google_color_text = '000000';
google_color_link = '265E15';
google_color_border = 'ffffff';
google_color_url = '265E15';

2) the code, which is the following:
-------------------------------------------
The important parameters for getting some control over AdSense (they are displayed in the AdSense interface when you log in to your account) are:
a) Page views: the number of page views is a quantitative indicator of your blog, which can be reached either through search engines, via links in comments on other blogs, via links in other blogs' blogrolls, forums, and so on.
b) Clicks: this number indicates how many times users have clicked on your ads.
c) Page CTR: this is the RATIO of the number of clicks to the number of page views, expressed as a %. A typical ratio is around 1%. Some MFA (Made For AdSense) sites boast of reaching 10%! They have minimal content, often with a misleading text/ad integration, and are therefore in breach of Google's terms. Conversely, this ratio can drop particularly low if the ads are badly placed or badly targeted.
d) CPC: the "cost per click" is obtained by dividing the price of the clicked ads by the number of clicks. Looking at individual clicks, you notice that prices vary a lot: from 1 cent to more than 1 euro. In fact you cannot control the CPC (that would be too simple); it depends on the type of ads displayed on your site. It seems that ads from banks, insurers, and the like pay the most.
e) Page RPM: this is the "revenue per 1000 page views". It is a quantitative indicator worth tracking: if you aim for, say, 30 euros/month, it tells you the number of visitors you need to reach (you need to know the average number of pages viewed per visitor).

All these quantitative indicators are given per day, week, or month depending on the time scale you choose.
Revenue = average cost per click x number of clicks

You can therefore act mainly on the number of clicks, which depends on two parameters:
i) Number of visitors: to increase the number of page views, and thus the number of clicks, you must increase the number of visitors.
ii) Ad placement: reading the Google FAQ you will find some fairly relevant information on placement.

Revenue = K x average_cost_per_click x number_of_visitors x ad_placement_relevance_indicator
with K a linear parameter.
These factors are multiplicative: if you improve each of them by 10%, you get 1.10 x 1.10 x 1.10 = 1.331, i.e. about a 33% gain.
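A quick check of the compounding arithmetic above:

```python
# Improving each of three multiplicative factors by 10% compounds
# to about a 33% overall gain, not 30%.
factors = [1.10, 1.10, 1.10]
gain = 1.0
for f in factors:
    gain *= f
print(f"{gain:.3f} -> about {100 * (gain - 1):.0f}% overall gain")
```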

Note: you should also take into account the type of Google search that brings people to your blog, and check whether the article they land on actually answers their query.


automatic versus basic manual registration



In medical imaging, a frequent task has become the registration of images of a subject taken with different imaging modalities,
where the term modalities refers to imaging techniques such as Computed Tomography (CT), Magnetic Resonance Tomography (MRT) and Positron Emission Tomography (PET).
The challenge in inter-modality registration lies in the fact that, e.g., 'bright' regions in CT images are not necessarily bright regions in MRT images of the same subject.
example:
an affine registration, i.e. one that determines an optimal transformation with respect to translation, rotation, anisotropic scaling, and shearing.
Closely related to registration is the task of image fusion, i.e. the simultaneous visualization of two
registered image datasets.
————Basic Manual Registration
Play with the software (e.g. Amira) for a better alignment of the CT and MRT data; it's still not perfect...
———–Automatic Registration
Automatic registration works via optimization of a quality function.
For registration of datasets from different imaging modalities, in Amira, Normalized Mutual Information is the best-suited quality function. In short, it favors an alignment which 'maps similar gray values to similar
gray values'. A hierarchical strategy is applied, starting at a coarse resampling of the datasets and
proceeding to finer resolutions later on.
————Registration Using Landmarks
You should be able to load files, interact with the 3D viewer, and be
familiar with the 2-viewer layout and the viewer toggles.
We will transform two 3D objects into each other by first setting landmarks on their surfaces and then
defining a mapping between the landmark sets. As a result we shall see a rigid transformation and a
warping which deforms one of the objects to match it with the other. The steps are:
1. Displaying data sets in two viewers.
2. Creating a landmark set.
3. Alignment via a rigid transformation.
4. Warping two image volumes.
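The Normalized Mutual Information quality function mentioned above can be sketched as follows (the histogram binning and the random test images are illustrative assumptions, not Amira's implementation):

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a probability vector (zero entries ignored)."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def nmi(img_a, img_b, bins=32):
    """Normalized Mutual Information: (H(A) + H(B)) / H(A, B)."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pj = joint / joint.sum()
    h_a = entropy(pj.sum(axis=1))  # marginal entropy of image A
    h_b = entropy(pj.sum(axis=0))  # marginal entropy of image B
    h_ab = entropy(pj.ravel())     # joint entropy
    return (h_a + h_b) / h_ab

rng = np.random.default_rng(0)
img = rng.random((64, 64))
other = rng.random((64, 64))
same = nmi(img, img)    # perfectly aligned images: NMI close to 2
diff = nmi(img, other)  # unrelated images: NMI close to 1
print(same, diff)
```

A registration optimizer would shift and rotate one image, recomputing NMI each time, and keep the transform that maximizes it.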

list of Google blogs


http://www.google.com/intl/en/press/blog-directory.html#tab3

8 ways: How to post source code in Blogger, Blogspot, WordPress, or another blog?





Whenever I tried to put HTML code in a post, the browser showed the rendered result instead of the code.
So what to do?


--------------try this 
http://www.allblogtools.com/html-character-encoder/
(in jQuery 1.4.4)
Example. If you post this code directly:

<body>
<p>http://ex-ample.blogspot.com</p>
</body>

the browser will render it as just:
http://ex-ample.blogspot.com
The encoder converts the angle brackets into HTML entities, so the code itself is displayed in your post.
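What such an HTML character encoder does can be sketched with Python's standard library (used here as a stand-in for the jQuery tool above): angle brackets become entities, so the browser displays the code instead of rendering it.

```python
import html

# Sketch of what an HTML character encoder does: replace the angle brackets
# with entities so the browser shows the markup literally.
snippet = "<body>\n<p>http://ex-ample.blogspot.com</p>\n</body>"
encoded = html.escape(snippet)
print(encoded)
```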


-------------or 7 ways:
For JavaScript to work well in a Blogger post, it should be on one line.


0) blogger post editor:
If you use Edit HTML, especially to add tables and other advanced HTML to your posts, you should find that the editor has a number of enhancements to make the experience less frustrating.  
Select "Post Options" and "Show HTML literally".
The default, "Interpret typed HTML," matches the current post editor's behavior: typing "<b>bold</b>" into the editor would render in your post as: bold. If you change the setting to "Show HTML literally" instead, you'll see the tags themselves: <b>bold</b>.


1) Google Code Prettify: Google offer a JavaScript module and CSS file too.
link: http://code.google.com/p/google-code-prettify

2) JavaScript: SyntaxHighlighter could be a solution; it solves the problem of making a page display nice-looking source code.
SyntaxHighlighter is a fully functional, self-contained code syntax highlighter developed in JavaScript.
link: http://alexgorbatchev.com/SyntaxHighlighter/
demo: http://alexgorbatchev.com/SyntaxHighlighter/manual/demo
version: 3.0.83
Integration: http://alexgorbatchev.com/SyntaxHighlighter/integration.html
Used by: Apache, Mozilla, Wordpress, Bug Labs...
To add this feature to your Blogger blog, follow these steps:

  1. Download SyntaxHighlighter here
  2. Unzip and upload the following files to your webspace (Google Pages is a great place):
     SyntaxHighlighter.css
     shCore.js
     shBrushCpp.js (or whichever brush matches the language you use on your blog)
  3. Go to your Dashboard/Layout/Edit HTML
  4. Backup your template
  5. Add the following code (see below) right after the <head> tag...




3) WordPress Source Code shortcode: WordPress has created a shortcode that preserves your code's formatting and even provides syntax highlighting for certain languages; you simply wrap the code in it.
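For reference, the wrap looks like the following on WordPress.com (the `sourcecode` shortcode; check the WordPress support pages for the currently supported `language` values):

```
[sourcecode language="python"]
your code here
[/sourcecode]
```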


4) Put your code between "textarea" HTML tags.





5) Replace the less-than and greater-than signs with other symbols, such as ‹ and › (the single left-pointing and right-pointing angle quotation marks):

‹textarea rows="10" cols="30"›
ADD CODE HERE
‹/textarea›

The catch: if a reader copies this "HTML code", it doesn't work, because the substitutes are not real angle brackets.
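The substitution in method 5 is mechanical and easy to script; a tiny illustrative Python helper (the function name is my own):

```python
def angle_quote(code):
    # swap real angle brackets for the single angle quotation
    # marks U+2039 and U+203A, which browsers will not interpret
    return code.replace('<', '\u2039').replace('>', '\u203a')

print(angle_quote('<textarea rows="10" cols="30">'))
```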

6) Many JavaScript web applications will format source-code text into HTML for inserting into your blog:

Friday, June 10, 2011

list of imaging software (neuroscience)


  3D Slicer (MIT AI Lab; Surgical Planning Lab, Brigham and Women's Hospital)
  3DViewnix (The University of Pennsylvania)
   ACTIV 2000 (Neuroradiology Dpt, C.H.U. de Bicêtre)
 AFNI
 AIR 5.0 (Automated Image Registration)
  AMIDE (Amide's a Medical Image Data Examiner)
  Amira* (Visual Concepts GmbH)
  analySIS® (Soft Imaging System)
  Analyze* (Mayo Clinic)
 Anatomist (Neurospin, I2BM, CEA)
  Atrophy Simulation Package (SBIA Radiology, University of Pennsylvania)
  Autoaligner (Bitplane Inc.)
  AutoSPM (Imagilys)
   b3d (Center for Neuroscience, University of California, Davis)
 BAMM (University of Cambridge, King's College London and the Wellcome Trust)
  bioelectromagnetism (matlab tools for eeg/meg/mri)
  BIRN Human Imaging Database (HID) (Biomedical Informatics Research Network)
  Blox (Kennedy Krieger Institute & Johns Hopkins Hospital)
 Brain Atlas for Functional Imaging (Thieme Medical Publishers)
  BrainGraph Editor 1.0 Beta (The BrainGraph Editor 1.0 Beta is a JAVA application designed to create taxonomies or hierarchies.)
 Brain Image (Stanford Psychiatry Neuroimaging Laboratory)
  BrainInfo (University of Washington, Seattle)
 BrainMaps.org (High-Resolution Brain Maps and Brain Atlases)
 BRAINS (Brain Research: Analysis of Images, Networks, and Systems) (Iowa Mental Health Clinical Research Center)
  BrainStorm (University of Southern California; CNRS LENA Paris; Los Alamos National Lab.)
 BrainVISA (IFR 49 Paris/I2BM CEA)
 BrainVoyager (Brain Innovation B.V.)
  Brede Toolbox (IMM, Technical University of Denmark)
 btrack (National Center for Microscopy and Imaging Research)
  BYU2Vox (University of North Carolina at Chapel Hill, Psychiatry and computer sciences departments)
   Camino (Department of Computer Science, University College London)
 cardviews (Center for Morphometric Analysis, Massachusetts General Hospital)
CARET (Washington University in St. Louis School of Medicine)
  CATNAP (Johns Hopkins University School of Medicine)
  CellProfiler cell image analysis software (Whitehead Institute & MIT)
  COMKAT: Compartment Model Kinetic Analysis Tool (University Hospitals of Cleveland)
 Conexus (Center for Neuroscience, University of California Davis)
 Corner_Cube (University of Minnesota)
 DC Harvester* (fMRI Data Center, Dartmouth College)
  DCMTK (OFFIS DICOM Toolkit)
  DCSearch (fMRI Data Center, Dartmouth College)
  DCViewer (fMRI Data Center, Dartmouth College)
 Dend (National Center for Microscopy and Imaging Research)
  DICOMscope (DICOM Viewer)
  DicomWorks (Universities of Lille and Lyon, France)
  diffusion_smoothing_tool (Draper Lab & MGH)
  diffusion TENSOR Visualizer (Image Computing & Analysis Lab., Radiology, The Univ. of Tokyo Hospital)
  DPTools (Neuroradiology Dpt, C.H.U. de Bicêtre)
  DTI Gradient Table Creator (F.M. Kirby Research Center, Kennedy Krieger Institute, Johns Hopkins University)
  DtiStudio (Laboratory of Brain Anatomical MRI, Johns Hopkins Radiology)
  DTI Track 2005 (INRIA Sophia Antipolis, France)
   Edgewarp3D* (The University of Michigan/Visible Human Project)
 EM3D (Uel J. McMahan Laboratory, Stanford University, Dept. of Neurobiology, Dept. of Structural Biology)
  EMS (Expectation-Maximization Segmentation) (Medical Image Computing, Leuven, Belgium)
  EvIdent(r) (National Research Council, Institute for Biodiagnostics)
  ezDICOM (University of Nottingham)
   FACT (Interdisciplinary MRI/MRS Lab, National Taiwan University)
  FIASCO (Functional Image Analysis Software Computational Olio) (CMU Statistics Department)
  Fiber Tracking (University of North Carolina at Chapel Hill, Psychiatry and computer sciences departments)
  Fiber Viewer (UNC at Chapel Hill, Psychiatry and computer sciences departments)
 Fido (National Center for Microscopy and Imaging Research)
  FilamentTracer (Bitplane Inc.)
 FisWidgets (University of Pittsburgh)
  fMRIstat (Montreal Neurological Institute )
  form*Z* (auto*des*sys)
  Free-D (AMIB, NOPA, INRA Jouy-en-Josas, France)
FreeSurfer (NMR Center, Massachusetts General Hospital)
 FSL - The FMRIB Software Library (FMRIB, Oxford University)
   geWorkbench (Center for Computational Biology and Bioinformatics, Columbia University)
  Gimp* (Peter Mattis and Spencer Kimball)
  gpetview (Gtk-base Analyze image viewer)
  Gradient non-linearity distortion correction (Martinos Center, MGH, Boston)
 Group ICA Toolbox (GIFT and EEGIFT) (The MIND Research Network)
   HAMMER (SBIA, Department of Radiology, Upenn )
  Head Circumference (University of North Carolina at Chapel Hill, Psychiatry and computer sciences departments)
   IHCorr (IDeA Lab, Center for Neuroscience, UC Davis)
 iiV (internet image Viewer) (Cognitive Neuroimaging Unit, VA Medical Center, University of Minnesota, Minneapolis)
  ImageJ (Research Services Branch, NIMH)
  ImageMagick* (ImageMagick Studio LLC)
  Image-Pro Plus 5.0 (Media Cybernetics, Inc.)
  ImageTrak (Fluorescence image visualization and analysis for Macintosh OS X)
  Imaris (Bitplane Inc.)
  ImarisColoc (Bitplane Inc.)
  Imaris InPress (Bitplane Inc.)
  ImarisMeasurementPro (Bitplane Inc.)
  ImarisTrack (Bitplane Inc.)
  ImarisXT (Bitplane Inc.)
  Imconverter (University of North Carolina at Chapel Hill, Psychiatry and computer sciences departments)
  Imread (University of Colorado Health Sciences Center)
  InsightSNAP (Penn Image Computing and Science Lab, University of Pennsylvania, CS dept UNC Chapel Hill)
  Intensity Rescaler (University of North Carolina at Chapel Hill, Psychiatry and computer sciences departments)
  Intramodal registration
  IrfanView (by Irfan Skiljan)
  ISI-Distance - Measure for Spike Train Synchrony (Institute for Nonlinear Science, University of California San Diego)
  ITK (Insight Segmentation and Registration Toolkit)
   JDTI (Duke University Medical Center)
 Jim (Xinapse Systems)
  JIV (Java Image Viewer) (A 3D Image Data Visualization and Comparison Tool)
  JViewer (A Java-based 2D and 3D image viewer)
   L-Measure (Krasnow Institute, George Mason university)
  L-Neuron (Krasnow Institute, George Mason University)
 LONI Debabeler (The LONI Debabeler manages the conversion of imaging data between multiple file formats)
 LONI De-Identification Debablet (The LONI Debablet de-identifies medical image files.)
 LONI ICE (Generates seed points for image processing applications)
 LONI Inspector (The LONI Inspector is an application for displaying, searching, comparing, and exporting metadata.)
  LONI Pipeline (Laboratory of Neuro Imaging, UCLA)
 LONI Visualization Environment (LOVE)
  LORETA (low resolution brain electromagnetic tomography) (The KEY Institute for Brain-Mind Research, University Hospital of Psychiatry, Zurich, Switzerland)
 Lyngby (Matlab functional neuroimaging analysis toolbox)
  match-colors (Center for Neuroscience, University of California, Davis)
  MATITK (Call ITK from MATLAB)
 MedINRIA (Asclepios Research Team, INRIA Sophia Antipolis, France)
  MEDx* (Sensor Systems, Inc.)
  MeVisLab* (MeVis)
 MINC - core (Medical Image NetCDF)
  MINC - EMMA (A MATLAB interface for MINC)
  MINC - mni_autoreg (A highly customisable Linear and Non-Linear registration Package)
  MINC - N3 (An automated tool for correction of intensity nonuniformity in MRI data)
  MINC - volume_io (A simplified API for the MINC file format)
 MIPAV (Medical Image Processing, Analysis and Visualization - NIH)
 MIView (gbooksoft.com)
  MOUSE BIRN ATLASING TOOLKIT (MBAT) 2.0 Beta (This is a collaborative effort of six laboratories. See other information section below for detail.)
 Mouse Brain Atlas Web References (The Mouse Brain Library)
  mri3dX* (Aston University School of Life and Health Science)
  MRIcro (University of South Carolina)
  mri_toolbox (matlab functions for Analyze 7.5)
 MRIVIEW (Biophysics Group (P-21), Los Alamos National Laboratory )
  MRI Watcher (University of North Carolina at Chapel Hill, Psychiatry and computer sciences departments)
  NeuroLens (A.A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital)
  Neurolucida (System for neuron tracing, brain mapping and neuroImaging)
 NeuroServ (The MITRE Corporation)
 NeuroTerrain Atlas Server (Laboratory for Bioimaging & Anatomical Informatics, Dept. Neurobio. & Anat., Drexel U. Coll. of Med.)
 NeuroTerrain NetOStat Atlas Browser (+ NT-SDK) (Laboratory for Bioimaging & Anatomical Informatics, Dept. Neurobio. & Anat., Drexel U. Coll. of Med.)
  NIH Image (Research Services Branch, NIMH)
  NIS (NeuroImaging Statistics) (University of Pittsburgh)
  Non-linear normalization of MRI brain scans
  Northern Eclipse 6.0
 NPAIRS (University of Minnesota)
  NVM (Neuromorphometrics)
  Olfactory Glomerular Response Mapping (University of California, Irvine)
  OsiriX
  Partial Least Squares GUI for PET, fMRI & EEG/MEG (Rotman Res Inst - Baycrest Centre, Univ of Toronto)
 PMOD (PMOD Technologies)
  PV-Wave* (Visual Numerics)
   RAVENS (Regional volumetric analysis of brain images)
  Reconstruct (Boston University and Medical College of Georgia)
 RView
   scanSTAT (Mark Cohen)
  Scion Image* (Scion Corporation)
 seg (Center for Neuroscience, University of California, Davis)
  ShapeLogic (Sami Badawi)
  SHIVA (Laboratory of Neuro Imaging, UCLA)
  siViewer (Soft Imaging System)
 Skandha4 and Brain Mapper (University of Washington)
 SnPM - Statistical Nonparametric Mapping (Department of Biostatistics, University of Michigan)
 SPM5 (Wellcome Department of Imaging Neuroscience, 12 Queen Square, London WC1N 3AR, UK.)
 StackVis (Center for Neuroscience, University of California, Davis)
  STASSIS (International Center for Neurological Restoration)
  Statistically-based Simulation of Deformations (SBIA, Department of Radiology, University of Pennsylvania)
  Stereo Investigator (Stereology System for brightfield, fluorescence and confocal microscopy)
  Stimulate (CMRR - University of Minnesota)
 STRFPAK (Theunissen Lab and Gallant Lab, UC-Berkeley)
  stroketool (Digital Image Solutions)
  stroketool-CT (Digital Image Solutions)
 SuMS (Washington University School of Medicine)
 SureFit (Washington University in St. Louis School of Medicine)
 Surface-Based Atlases (Washington University School of Medicine)
  SurfRelax (Software for surface analysis; Biomedicon/New York University)
 Synu (National Center for Microscopy and Imaging Research)
  Talairach Daemon (Research Imaging Center, UTHSC San Antonio)
  TetSplit (SBIA, Department of Radiology, Upenn)
  TOPPCAT (Duke University Medical Center)
   Valmet (University of North Carolina at Chapel Hill, Psychiatry and computer sciences departments)
 VA_SLICER (University of Minnesota)
  Videoscribbler (Live video stereology overlay for Macintosh)
 ViPAR (Image Analysis and Communications Lab (IACL), Johns Hopkins University )
  VOLUME-ONE (VOLUME-ONE developers group)
VoxBo (Center for Functional Neuroimaging, University of Pennsylvania)
 Voxtrace (National Center for Microscopy and Imaging Research)
  VTK CISG Registration Toolkit (CISG Guy's Hospital London, King's College London)
  VVNT (Medical Imaging Solutions)
  Wavelet Analysis of Image Registration (WAIR)
WFU_BPM (Advanced Neuroscience Imaging Research Core, Wake Forest University Baptist Medical Center )
 WFU_PickAtlas (Advanced Neuroscience Imaging Research Core, Wake Forest University Baptist Medical Center)
   XNAT (Washington University School of Medicine)
  XnView (Gougelet Pierre)
  xv* (John Bradley)
 xvol (Center for Morphometric Analysis, Massachusetts General Hospital)
   ZFIQ (Center for Biomedical Informatics, TMHRI-Weill Cornell)

----------
ref.
http://www.cma.mgh.harvard.edu/iatr/display.php?spec=all