Saturday, December 24, 2011

Warren Buffett at Tungaloy Fukushima Japan

Recently, Warren Buffett's Berkshire Hathaway purchased about 71% of the Japanese manufacturer Tungaloy Corp.
Mr. Buffett shares his impressions of the opportunities at Tungaloy in this short video.
Posted November 30, 2011

Thursday, December 22, 2011

Warren Buffett Interview
Visit to Japan after Earthquake
Never Give Up Fukushima!
Tungaloy Corporation Japan
Investment in Japan after Earthquake.
Fukushima as a Location for Investment.

Nikkei Japan Report December 2011

News Maker Interviews

Mr. Makoto Yoda, President of GS Yuasa Corporation (Broadcast in May 2010)
Present and Future of Battery Technology: Hybrid Trains, Electric Cars, Automotive Market Share

Mr. Yoshiharu Hoshino, CEO of Hoshino Resort Co., Ltd. (Broadcast in June 2010)
Tourism is an Export: Making Japanese Resorts Attractive to Overseas Visitors

Mr. Luo Yiwen, President & CEO of Laox Co., Ltd. (Broadcast in August 2010)
Business Alliance between Suning of China and Laox of Japan.
Chinese tourists buy Japanese retail goods. A bridge between Japan and China.

December 20, 2011

Wednesday, December 14, 2011

Merry Christmas 2010 Vintage Community Channel

A Fun Christmas Card Video for You!

Pulsate from Audiotool
Free download at the Chrome Web Store
Version 1.0 December 2011

Friday, December 9, 2011

Pacific Ocean Park, 1958 - 1974
A Southern California Cultural Icon

Classic footage of POP, from YouTube

Uploaded to YouTube by


Information about this video from YouTube post:

Nice 3 minute video of Pacific Ocean Park, aka POP. Located in Santa Monica / Venice California. This was taken from a 1959 documentary called "Where The Mountains Meet The Sea". 

Rides shown are Sea Serpent Roller Coaster, Sky Ride, Sea Circus, Banana Boat Ride, Midway, Double Ferris Wheels, Diving Bells, and many others in the background. 

POP was finally taken from us in the early 1970's to make way for, well, nothing. It's just beach now. I stood where it was last summer and closed my eyes and could hear the laughter and fun that was there one time. You would not even know it was there now, except for just a few signs stating "No Swimming - Possible Underwater Obstructions".

Visit for the growing fan base of what was a great amusement park. 

Also, check out a great site from Jeffrey Stanton with a lot of pictures and a complete history of this magnificent amusement park.


Posted by YouTube Channel westcoastpaeb. Standard YouTube License. Uploaded to YouTube on Jul 22, 2010. Two radio jingles (early 60's?) for Pacific Ocean Park P.O.P amusement park. These were added as unlisted bonus tracks on the surf instrumental LP comp Surfer's Mood vol 2 released about 15 years ago.

The Hidden History of Pacific Ocean Park
A Documentary, Part 1

Part one of Emmy nominated documentary produced and directed by Anthea Raymond and Beverly Jones. 
Posted by YouTube Channel acicchio. Standard YouTube License.
Parts 2 and 3 are available on YouTube.

Ocean Park, California, just north of famous Venice Beach, was also founded by Abbot Kinney.
Like Venice, Ocean Park started as a place of ballrooms, amusement, piers, delicatessens, and street fairs.

In the 1950s, it was home to The Lawrence Welk Show. A decade later, Jim Morrison and the Doors were regulars at "The Cheetah." Today Ocean Park is central to Santa Monica's booming creative industry.

That's not really a surprise since the place (Santa Monica) has long been popular with creatives, from Charlie Chaplin and Mary Pickford, to painter Richard Diebenkorn, Oliver Stone, and Arnold Schwarzenegger.

Tuesday, December 6, 2011

NIKKEI Japan Report in English

NIKKEI Japan Report is produced by NIKKEI (Nihon Keizai Shimbun) and is an English-language, economic news program broadcast worldwide, focusing on "the Japanese economy's current state and future possibilities".

December 4, 2011

Innovations in Textiles  |   Traditional Japanese Musical Instruments Updated  |   Takeda Pharmaceutical

Monday, November 28, 2011

William Gilbert: Exemplary Scientist from 400+ Years Ago

Good Science by William Gilbert (1544-1603)

English physician, physicist, natural philosopher, and author of De Magnete (On the Magnet and Magnetic Bodies, and on the Great Magnet the Earth).

Wonderful account of scientific thinking and experiments from the distant past.
Among his conclusions was that the Earth's core is made of iron. And he correctly observed, and identified as important, that when a magnet is cut in two, each piece forms a new magnet with north and south poles. In physics we say that magnetic monopoles do not exist (under earthly conditions). An exception might be during the very early stages of the big bang. But I digress.
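Gilbert's cut-magnet observation anticipates what is now written as Gauss's law for magnetism, one of Maxwell's equations: the magnetic field has no sources or sinks, i.e. no monopoles.

```latex
\nabla \cdot \mathbf{B} = 0
\qquad\Longleftrightarrow\qquad
\oint_{S} \mathbf{B} \cdot d\mathbf{A} = 0 \quad \text{for any closed surface } S
```

Cutting a bar magnet in two never isolates a lone pole; each half acquires its own north and south, exactly as Gilbert reported.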

...but back to Dr. Gilbert in the year 1600.

A few impressive bits from the quotation below....

Notice Gilbert clearly describes the distinction between the magnetic north pole and the geodetic, or geographic, north pole. This difference in direction between true north and magnetic north is called the "variation of the location." A "must know" for navigation by magnetic compass.
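The compass correction Gilbert is pointing at amounts to a single addition. A minimal sketch (the function name and the east-positive sign convention are illustrative choices, not from the source):

```python
def true_heading(magnetic_heading_deg: float, declination_deg: float) -> float:
    """Correct a magnetic compass reading for local variation (declination).

    Assumed convention: east declination is positive, west is negative.
    """
    return (magnetic_heading_deg + declination_deg) % 360.0
```

For example, with 12 degrees of easterly variation, a compass reading of 350 degrees corresponds to a true heading of 2 degrees.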

An iron bar suspended from a cross-woven silk cord becomes an elegant experiment under Gilbert.

QUOTATION from Chapter 12 Book 1 of De Magnete (with a little editing)

A long piece of Iron, even though not excited by a loadstone, settles itself toward North and South.

Every good and perfect piece of iron, if drawn out in length, points North and South, just as the loadstone or iron rubbed with a magnetical body does; a thing that our famous philosophers have little understood, who have sweated in vain to set forth the magnetick virtues and the causes of the friendship of iron for the stone.

You may experiment with either large or small iron works, and either in air or in water. A straight piece of iron six feet long of the thickness of your finger is suspended (in the way described in the foregoing chapter) in exact equipoise by a strong and slender silken cord.

But the cord should be cross-woven of several silk filaments, not twisted simply in one way; and it should be in a small chamber with all doors and windows closed, that the wind may not enter, nor the air of the room be in any way disturbed.

For which reason it is not expedient that the trial should be made on windy days, or while a storm is brewing.

For thus it freely follows its bent, and slowly moves until at length, as it rests, it points with its ends North and South, just as iron touched with a loadstone does in shadow-clocks, and in compasses, and in the mariners' compass. You will be able, if curious enough, to balance all at the same time by fine threads a number of small rods, or iron wires, or long pins with which women knit stockings; you will see that all of them at the same time are in accord, unless there be some error in this delicate operation: for unless you prepare everything fitly and skilfully, the labor will be void.

Make trial of this thing in water also, which is done both more certainly and more easily. Let an iron wire two or three digits long, more or less, be passed through a round cork, so that it may just float upon water; and as soon as you have committed it to the waves, it turns upon its own center, and one end tends to the North, the other to the South; the causes of which you will afterwards find in the laws of the direction.

This too you should understand, and hold firmly in memory, that as a strong loadstone, and iron touched with the same, do not invariably point exactly to the true pole but to the point of the variation. So does a weaker loadstone, and so does the iron, which directs itself by its own forces only, not by those impressed by the stone.

And so every ore of iron, and all bodies naturally endowed with something of the iron nature, and prepared, turn to the same point of the horizon, according to the place of the variation in that particular region (if there be any variation therein), and there abide and rest.

-William Gilbert De Magnete published in 1600!

Thursday, November 24, 2011

Welcome NIKKEI Japan Report to the Synthetic Information Blog Family

Take an in-depth look at the Japanese economy (and be amazed).

NIKKEI Japan Report sets the standard of excellence for English Language News from Japan.

We will periodically add new and old NIKKEI Japan Report broadcasts as they become available.

Featured interviews with 
Mr. Masahiro Sakane, Chairman of Komatsu

Mr. Masatoshi Ito, President of Ajinomoto

Tokyo Stock Exchange President and CEO Mr. Atsushi Saito

Program hostess: NIKKEI Japan Report correspondent Makiko Utsuda 

Report No. 31 covers the recovery of business and humanity in the aftermath of the great tsunami of 2011. These real-life stories of heroic people coming back from disaster will touch your emotions and inspire hope.

NIKKEI Japan Report No. 31

NIKKEI Japan Report No. 29

Sunday, October 30, 2011

Disable Text Shadow Blur Effects in Windows Vista

A recent problem in Windows Vista:
Blurry text in all applications, including the Chrome browser, IE9, Firefox, IBM Lotus Symphony, and others.
The text display was blurred by a greyish shadow behind the main text.
This made text very difficult to read and was generally annoying.
The problem is not in the browser or other Windows applications.
It is a setting in Vista's performance options.

Here's the fix:

Fix for the annoying text shadow effect in Windows Vista:

Control Panel
Advanced system settings
Performance Settings
Uncheck everything except
"Smooth edges of screen fonts" and "Use visual styles on windows and buttons"
Or just click "Adjust for best performance."

Shadows go away in all applications and browsers under Windows Vista.
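For scripting the change, some of these options map to per-user registry values. The fragment below covers only the font-smoothing option the fix leaves enabled; the shadow effects themselves are packed into the UserPreferencesMask bitmask under the same key, which is awkward to edit by hand. Treat this as a hedged sketch of the standard Vista locations, not a verified recipe:

```
Windows Registry Editor Version 5.00

; Per-user desktop settings (standard location in Windows Vista).
; "2" keeps "Smooth edges of screen fonts" enabled; "0" would disable it.
[HKEY_CURRENT_USER\Control Panel\Desktop]
"FontSmoothing"="2"
```

A log-off and log-on (or reboot) is typically needed before the change takes effect.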

Sunday, October 23, 2011

Why Do People Believe Nonsense? And Why It's Not Always Their Fault.

DRAFT version NOVEMBER 21, 2011

 Psychological Manipulation and Extra-rational Persuasion

Most people are familiar with rational discussion and persuasion through straightforward reasoning. However, there are extra-rational means of persuasion and behavior modification that may be described as psychological manipulation. Psychological manipulation uses an array of sophisticated methods and means of influence to get people to believe things and take actions that are not based on rational thought processes.

Many people fall victim to this type of manipulation every day.  Human vulnerability to extra-rational persuasion provides a partial answer to the often asked question: Why do people believe nonsense?

It may be difficult to read some of the material presented here, but if you understand the techniques of psychological manipulation, you can protect yourself from being victimized.

The material below is a collection of important information on psychological manipulation that everyone should know about. Much of the information presented here, but not all, is based on published literature and some excellent Wikipedia articles, which are excerpted liberally in the following.

Many may consider the use of such techniques to be unethical.  Much harm has been done to people, families, and communities who have been victimized by unethical practitioners of psychological manipulation.  In the extreme, dangerous psychopathic individuals use tools of psychological manipulation to choose and then attack their victims.

 The information below will empower you to protect yourself and others. 
Do not use it for evil. Ok? Good. 

First symptoms, warning signs, and red flags.

Warning signs of manipulation:

Behavioral changes
Uncharacteristic opinion changes
Uncharacteristic statements
Promotion of ideologically aligned persons, organizations, and literature.
Adoption of uncharacteristic behaviors, styles of dress, or inappropriate displays.

More on psychological manipulation and control appears in the references below, including books by Simon and Braiker and links to Wikipedia articles. These have been excerpted and quoted more or less verbatim in much of the following.

Psychological Manipulation and Control

Psychological manipulation is a type of social influence that aims to change the perception or behavior of others through underhanded, deceptive, or even abusive tactics. By advancing the interests of the manipulator, often at the other's expense, such methods could be considered exploitative, abusive, devious, and deceptive.

Social influence is not necessarily negative. For example, doctors can try to persuade patients to change unhealthy habits. Social influence is generally perceived to be harmless when it respects the right of the influenced to accept or reject it, and is not unduly coercive.

Depending on the context and motivations, social influence may constitute underhanded manipulation.

Requirements for successful manipulation

According to George K. Simon, a successful psychological manipulator will remain covert while:

Concealing aggressive intentions and behaviors.

Learning the psychological vulnerabilities of the victim.

Assessing what tactics are likely to be the most effective.

Psychological traits of the manipulator

Having a sufficient level of ruthlessness to have no qualms about causing harm to the victim if necessary.

Covert personality (relational aggressive or passive aggressive).

A level of character pathology ranging, in some cases, to psychopathy.

How manipulators control their victims

According to Braiker

Braiker identified the following basic ways that manipulators control their victims:

positive reinforcement - includes praise, superficial charm, superficial sympathy (crocodile tears), excessive apologizing; money, approval, gifts; attention, facial expressions such as a forced laugh or smile; public recognition.

negative reinforcement - includes nagging, yelling, the silent treatment, intimidation, threats, swearing, emotional blackmail, the guilt trap, sulking, crying, and playing the victim.

intermittent or partial reinforcement - Partial or intermittent negative reinforcement can create an effective climate of fear and doubt. Partial or intermittent positive reinforcement can encourage the victim to persist - for example in most forms of gambling, the gambler is likely to win now and again but still lose money overall.

punishment and traumatic one-trial learning - using verbal abuse, explosive anger, or other intimidating behavior to establish dominance or superiority; even one incident of such behavior can condition or train victims to avoid upsetting, confronting or contradicting the manipulator.
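The gambling example above is easy to make concrete. In the minimal simulation below (the 45% win probability and even-money payout are assumed parameters, not from the source), wins arrive often enough to reinforce persistence, yet the negative expected value guarantees an overall loss:

```python
import random

def simulate_gambler(rounds: int, seed: int = 0) -> tuple[int, float]:
    """Bet 1 unit per round on a 45%-win, even-money game.

    Returns (winning rounds, net bankroll change). Frequent wins act as
    intermittent positive reinforcement; the -0.10/round expected value
    drives the long-run loss.
    """
    rng = random.Random(seed)
    wins = 0
    bankroll = 0.0
    for _ in range(rounds):
        if rng.random() < 0.45:  # a win, often enough to feel encouraging
            wins += 1
            bankroll += 1.0
        else:                    # a loss, slightly more often
            bankroll -= 1.0
    return wins, bankroll
```

Over 10,000 rounds the gambler wins thousands of individual bets and still ends up roughly a thousand units down.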

I would include speech content censorship, self-censorship, politically correct speech, and related forms of communication control as forms of psychological manipulation.

According to Simon

Simon identified the following manipulative techniques:

Lying: It is hard to tell if somebody is lying at the time they do it, although the truth often becomes apparent later, when it is too late. One way to minimize the chances of being lied to is to understand that some personality types (particularly psychopaths) are experts at the art of lying and cheating, doing it frequently, and often in subtle ways.

Lying by omission: This is a very subtle form of lying by withholding a significant amount of the truth. This technique is also used in propaganda.

Denial: Manipulator refuses to admit that he or she has done something wrong.

Rationalization: An excuse made by the manipulator for inappropriate behavior. Rationalization is closely related to spin.

Minimization: This is a type of denial coupled with rationalization. The manipulator asserts that his or her behavior is not as harmful or irresponsible as someone else was suggesting, for example saying that a taunt or insult was only a joke.

Selective inattention or selective attention: Manipulator refuses to pay attention to anything that may distract from his or her agenda, saying things like "I don't want to hear it".

Diversion: Manipulator not giving a straight answer to a straight question and instead being diversionary, steering the conversation onto another topic.

Evasion: Similar to diversion but giving irrelevant, rambling, vague responses, weasel words.

Covert intimidation: Manipulator throwing the victim onto the defensive by using veiled (subtle, indirect or implied) threats.

Guilt tripping: A special kind of intimidation tactic. A manipulator suggests to the conscientious victim that he or she does not care enough, is too selfish or has it easy. This usually results in the victim feeling bad, keeping them in a self-doubting, anxious and submissive position.

Shaming: Manipulator uses sarcasm and put-downs to increase fear and self-doubt in the victim. Manipulators use this tactic to make others feel unworthy and therefore defer to them. Shaming tactics can be very subtle such as a fierce look or glance, unpleasant tone of voice, rhetorical comments, subtle sarcasm. Manipulators can make one feel ashamed for even daring to challenge them. It is an effective way to foster a sense of inadequacy in the victim.

Playing the victim role ("poor me"): Manipulator portrays him- or herself as a victim of circumstance or of someone else's behavior in order to gain pity, sympathy or evoke compassion and thereby get something from another. Caring and conscientious people cannot stand to see anyone suffering and the manipulator often finds it easy to play on sympathy to get cooperation.

Vilifying the victim: More than any other, this tactic is a powerful means of putting the victim on the defensive while simultaneously masking the aggressive intent of the manipulator.

Playing the servant role: Cloaking a self-serving agenda in guise of a service to a more noble cause, for example saying he is acting in a certain way for "obedience" and "service" to political party, agenda, or ideology. Invoking external authority.

Seduction: Manipulator uses charm, praise, flattery or overtly supporting others in order to get them to lower their defenses and give their trust and loyalty to him or her.

The pied piper seduces with charming music and manner, hypnotizing his victims and leading them to their doom.

Projecting the blame (blaming others): Manipulator scapegoats in often subtle, hard to detect ways.

Feigning innocence: Manipulator tries to suggest that any harm done was unintentional, or blankly denies the accusation. Manipulator may put on a look of surprise or indignation. This tactic makes the victim question his or her own judgment and possibly his own sanity.

Feigning confusion: Manipulator tries to play dumb by pretending he or she does not know what you are talking about or is confused about an important issue brought to his attention.

Brandishing anger: A widely used technique of control and intimidation. The lack of civility is an act: the manipulator brandishes sufficient emotional intensity and rage to shock the victim into submission. The manipulator is not actually angry; he or she just wants what he wants and puts on the "anger" when denied.

Vulnerabilities are exploited by manipulators

According to Braiker, manipulators exploit the following vulnerabilities (buttons) that may exist in victims:

the "disease to please"
addiction to earning the approval and acceptance of others
emotophobia (fear of negative emotion)
lack of assertiveness and ability to say no
blurry sense of identity (with soft personal boundaries)
low self-reliance
external locus of control

According to Simon, manipulators exploit the following vulnerabilities that may exist in victims:

naïveté - victim finds it too hard to accept the idea that some people are cunning, devious and ruthless or is "in denial" if he or she is being victimized.

over-conscientiousness - victim is too willing to give the manipulator the benefit of the doubt and to accept the manipulator's side of things, in which the manipulator blames the victim.

low self-confidence - victim is self-doubting, lacking in confidence and assertiveness, likely to go on the defensive too easily.

over-intellectualization - victim tries too hard to understand and believes the manipulator has some understandable reason to be hurtful.

emotional dependency - victim has a submissive or dependent personality. The more emotionally dependent the victim is, the more vulnerable he or she is to being exploited and manipulated.

Manipulators generally take the time to scope out the characteristics and vulnerabilities of their victim.

According to Kantor the following are vulnerable to psychopathic manipulators:

too trusting - people who are honest often assume that everyone else is honest. They commit themselves to people they hardly know without checking credentials, etc. They rarely question so-called experts. Trust but verify.

too altruistic - the opposite of psychopathic; too honest, too fair, too empathetic

too impressionable - overly seduced by charmers. For example, they might vote for the phony politician who kisses babies.

too naïve - cannot believe there are dishonest people in the world or if there were they would not be allowed to operate.

too masochistic - lack of self-respect and unconsciously let psychopaths take advantage of them. They think they deserve it out of a sense of guilt.

too narcissistic - narcissists are prone to falling for unmerited flattery.

too greedy - the greedy and dishonest may fall prey to a psychopath who can easily entice them to act in an immoral way.

too immature - has impaired judgment and believes the exaggerated advertising claims.

too materialistic - easy prey for loan sharks or get-rich-quick schemes

too dependent - dependent people need to be loved and are therefore gullible and liable to say yes to something to which they should say no.

too lonely - lonely people may accept any offer of human contact. A psychopathic stranger may offer human companionship for a price.

too impulsive - make snap decisions about, for example, what to buy or whom to marry without consulting others.

too frugal - cannot say no to a bargain even if they know the reason why it is so cheap.

the elderly - the elderly can become fatigued and less capable of multi-tasking. When hearing a sales pitch they are less likely to consider that it could be a con. They are prone to giving money to someone with a hard-luck story. See elder abuse.

Motivations of manipulators are often evil

Manipulators have an array of possible motivations, including:
Their need to advance their own purposes and personal gain at virtually any cost to others.
A strong need to attain feelings of power and superiority in relationships with others.
A desire and psychological need to feel in control (a.k.a. control freakery).
To compensate for feelings of inferiority by gaining a feeling of power over others in order to raise self-esteem.

A collection of motivations that make for a dangerous person. 
Can manipulators be psychopaths?

Basic manipulative strategy of a psychopath

According to Hare and Babiak, psychopaths are always on the lookout for individuals to scam or swindle. The psychopathic approach includes three phases:

1. Assessment phase
Some psychopaths are opportunistic, aggressive predators who will take advantage of almost anyone they meet, while others are more patient, waiting for the perfect, innocent victim to cross their path. In each case, the psychopath is constantly sizing up the potential usefulness of an individual as a source of money, power, sex, or influence. Some psychopaths enjoy a challenge while others prey on people who are vulnerable. During the assessment phase, the psychopath is able to determine a potential victim’s weak points and will use those weak points to seduce.

2. Manipulation phase
Once the psychopath has identified a victim, the manipulation phase begins. During the manipulation phase, a psychopath may create a persona or mask, specifically designed to ‘work’ for his or her target. A psychopath will lie to gain the trust of their victim. Psychopaths' lack of empathy and guilt allows them to lie with impunity; they do not see the value of telling the truth unless it will help get them what they want.

As interaction with the victim proceeds, the psychopath carefully assesses the victim's persona. The victim's persona gives the psychopath a picture of the traits and characteristics valued in the victim. The victim's persona may also reveal, to an astute observer, insecurities or weaknesses the victim wishes to minimize or hide from view. As an ardent student of human behavior, the psychopath will then gently test the inner strengths and needs that are part of the victim's private self and eventually build a personal relationship with the victim.

The persona of the psychopath - the "personality" the victim is bonding with - does not really exist. It is built on lies, carefully woven together to entrap the victim. It is a mask, one of many, custom-made by the psychopath to fit the victim's particular psychological needs and expectations. The victimization is predatory in nature; it often leads to severe financial, physical or emotional harm for the individual. Healthy, real relationships are built on mutual respect and trust; they are based on sharing honest thoughts and feelings. The victim's mistaken belief that the psychopathic bond has any of these characteristics is the reason it is so successful.

3. Abandonment phase
The abandonment phase begins when the psychopath decides that his or her victim is no longer useful. The psychopath abandons his or her victim and moves on to someone else. In the case of romantic relationships, a psychopath will usually seal a relationship with their next target before abandoning his or her current victim. Sometimes, the psychopath has three individuals on whom he or she is running game: the one who has been recently abandoned, who is being toyed with and kept in the picture in case the other two do not work out; the one who is currently being played and is about to be abandoned; and the third, who is being groomed by the psychopath, in anticipation of abandoning the current "mark".


Braiker, Harriet B. (2004). Who's Pulling Your Strings? How to Break the Cycle of Manipulation. ISBN 0071446729.
Simon, George K. (1996). In Sheep's Clothing: Understanding and Dealing with Manipulative People. ISBN 978-0965169608. (Reference for the entire section.)
Kantor, Martin (2006). The Psychopathy of Everyday Life. ISBN 978-0275987985.
Hare, Robert; Babiak, Paul (2006). Snakes in Suits: When Psychopaths Go to Work. ISBN 978-0061147890.

and Wikipedia articles on psychological manipulation.

Sunday, September 4, 2011

Blog features and navigation

  • Blog posts in reverse chronological order, of course.
  • Click title tabs above to view updates of selected posts.
  • Share buttons for twitter, facebook, etc. located at the bottom of each post.
  • Blog search window on the right top.

Friday, September 2, 2011

More "climate science" controversy

Draft version date 09/10/2011 revA

1.0 Spencer and Braswell paper published in journal Remote Sensing. 
Here's a link to a pdf of the paper by Spencer and Braswell as accepted for publication by the peer reviewed journal Remote Sensing.

1.1 Journal Editor Prof. Wolfgang Wagner resigns over peer reviewed publication.

1.2 Blog posts by Roy Spencer, Ph.D., AGW skeptic
Blog posts in which Dr. Spencer responds to Dessler, Science Magazine (2010).

2.0 Editor-in-chief resigns in protest over publication of a paper in his own journal?
A news story out of the Guardian reports that Prof. Wolfgang Wagner of Vienna University of Technology (a minor league school), the editor-in-chief of the academic journal Remote Sensing, is resigning in protest over publication of a paper in his own journal. How did this paper get published? Turns out, just like every other paper. After passing through peer review (a panel of three referees) at Remote Sensing (ed. Wagner), the paper by Spencer and Braswell was approved for publication and was duly published in July 2011. Standard procedure for academic publications. We note that scientific journals levy "page charges" to support themselves. Page charges often run into hundreds of dollars per page and are usually paid by the author or the author's institution. Scientific journals are not charities. Presumably Remote Sensing accepted the cash.

Usually scientific controversies like this are kind of boring, but this one is astonishing. As an author of more than one hundred scientific publications in peer-reviewed journals, I personally find the story, as reported, incredible. Prof. Wagner's reported behavior and statements seem strange.

What is going on? we may ask. Was Prof. Wagner being pressured? What could explain his behavior? First, some background on academic journal publications in the scientific world.

To review the process: the paper in question was published by Remote Sensing (which I consider a minor academic journal) in July 2011 (see link at top of page). It was published only after successfully passing through the peer review process imposed by the editor of Remote Sensing. The editor and editorial staff undoubtedly realized the paper would be controversial, because the authors' conclusions run counter to the beliefs of Anthropogenic Global Warming (AGW) supporters. The editors of Remote Sensing approved Spencer and Braswell for publication with their eyes open.

2.1 Responsibilities of a Journal Editor and the Peer Review Process
More background on peer review and editorial responsibilities and style. What are the duties of the editor-in-chief of an academic journal?

The editor-in-chief (editor) has the duty to assign each paper to qualified reviewers. Further, the editor has considerable power to hold up, question, or even reject papers himself. If a paper has major scientific or scholarly flaws, the editor has the responsibility to reject it, not allow it to be published in his journal.

Often an editor will give a paper's authors opportunity to respond to criticism or even permit flaws to be corrected by the authors. At every step in the process the editor takes responsibility for the quality of the papers ultimately published in his journal.

Academic journals often deal with submissions of controversial papers. Many editors actually err on the side of allowing controversial views to be heard.
Essentially, the editor can reject or approve papers as he sees fit.

Why did Spencer and Braswell choose to submit their paper to Remote Sensing? Academic journals abound. The authors' choice of journal is a trade-off of options and benefits. Authors want their paper to reach the right audience; they also may want to publish in prestigious journals that are highly selective and have large audiences. Importantly, authors want their paper to be reviewed in a fair and impartial manner. In an ideal world the editorial process of scientific journals would be impartial, objective, thorough, unbiased, and brisk. Journals develop a reputation in the community: some are known to be biased, some are known to be objective.

It seems likely that Dr. Spencer chose to submit his paper to the journal Remote Sensing because he believed he would get a fair and unbiased peer review there.

2.2 It seems the editor of the journal Remote Sensing is somehow protesting his own decision to publish.
Flash forward two months to September 2, 2011: we have the highly publicized resignation of Dr. Wolfgang Wagner from his position as editor-in-chief of Remote Sensing. His resignation was reported to be a protest of the publication of the paper by Spencer et al.

Prof. Wagner claims to have belatedly recognized flaws in the paper that were missed by Remote Sensing's earlier peer review process. In effect, the editor of Remote Sensing is resigning in protest of his own decision as editor-in-chief to publish a paper that had passed his own peer review system. 

This story is astonishing. Does Prof. Wagner really believe his own peer review system approves publication of bad papers? How many other bad papers has Remote Sensing published? He doesn't say.  

Of course, the Guardian may have the story wrong, may have misquoted Dr. Wagner, or may be biased itself. We will have to wait and see. The Guardian news story has serious fallout for the journal Remote Sensing and for anyone who has published there.

2.3 Fallout: Is Remote Sensing Radioactive?
Prof. Wagner asserts that the peer review process at Remote Sensing is so deeply flawed as to allow bad science to be published. If this is so, then it seems we must conclude that all papers published in Remote Sensing during the tenure of Prof. Wagner may have been poorly reviewed and may be erroneous.

Inescapably,  Prof. Wagner's actions and accusations raise questions about the scholarly and scientific quality of the journal Remote Sensing and of all papers published therein.

Is there more to this story? As it stands, it is difficult to believe Prof. Wagner's statements. We can also ask: how do the many authors of publications in Remote Sensing respond to charges that the peer review process at RS is flawed, sloppy, and passed bad papers for publication? I doubt those authors would concur with Prof. Wagner. Or are we to believe Remote Sensing's peer review system was flawed and incompetent when it accepted the Spencer paper, but was fine for all other papers? Unlikely.

Questions abound. Comments welcome.

3.0 BTW here is what Dr. Spencer says about AGW and observed climate variations:

"So what do we deny, if anything? Well, what *I* deny is that we can say with any level of certainty how much of our recent warmth is due to humanity’s greenhouse gas emissions versus natural climate variability."   

This quote is from Dr. Spencer's blog post at

BBC posts its own version of the Guardian story:

4.0 Is "Climate Science" good science?

The BBC refers to "mainstream climate scientists" in the above linked article. This BBC article appears to be an example of flawed journalism. The author uses a sneaky rhetorical construction that implicitly asserts the conclusion: Scientist A is good because he's "mainstream" and Scientist B is bad because he's not. Hardly journalistic impartiality. More like a kid-glove hatchet job.

We can ask, what are the credentials of "climate scientists," and the field of "Climate Science" itself? What have they really accomplished scientifically?

As an experimental science, climate science has substantial accomplishments: improved measurement technologies, substantial accumulation of new data, and the application of sophisticated measurement techniques to ancient climatic conditions. Fair enough, good stuff.

What about theory and modeling? The theoretical tools used in climate science were actually developed in other fields, mainly in physics and its branches, including geophysics, physics of fluids, statistical and thermal physics, and many others. The numerical methods for modeling, databases, and data display were likewise developed by other disciplines.

Upon superficial examination, nothing appears particularly novel about the tools and methods used in climate science. Nothing wrong with that, but the work is derivative, not original: applications of pre-existing science and methods to climates. No doubt champions of the field can point to many important innovations in the theory of climates. Perhaps someone might list a few of climate science's most important theoretical innovations in the comments.

We observe that "Climate Science" is itself a minor scientific discipline known mainly for its extravagant claims,  lack of transparency,  astonishing results, immunity to criticism, cover-ups, vitriolic attacks against critics, press releases, and political agendas.  Mainstream science views this pattern of behavior with concern. 

Telling, perhaps, is a Google search for the term "climate science controversy." That search yielded 3,230,000 entries earlier today. That's a lot of controversy.

We further observe that the field of climate science displays many of the warning signs of Bad Science. We will discuss this more fully, but for now we can ask the question: Is "Climate Science" good science?  The answer is not a priori obvious.

5.0 Editor of Nature approves the paper Kirkby et al. for publication.
The highly prestigious British scientific journal Nature stands firm as an editorial defender of open scientific inquiry. In August 2011, Nature published another paper that is generating hysteria among "mainstream climate scientists." It turns out there are major "uncertainties in climate modeling." Who knew? Important physical processes governing the climate are not understood by climate scientists. Really? Does that mean that mainstream climate models are imperfect and may generate erroneous predictions?

 It seems the prestigious editors of Nature are saying that climate models contain major uncertainties in physics and their predictions are uncertain. 

What the editor of Nature is saying is....

Climate science does not have a quantitative physics-based theory of cloud formation. What? It seems that Climate Science can't explain why clouds form.  The editor further observes that this shortcoming of theory is a significant problem for climate models.

Below is the editor's summary (our italics) of the paper Kirkby et al., Nature 476, 429-433 25 August 2011.   

Cloud cover at CERN

A substantial source of cloud condensation nuclei in the atmospheric boundary layer is thought to originate from the nucleation of trace sulphuric acid vapour. 

Despite extensive research, we still lack a quantitative understanding of the nucleation mechanism and the possible role of cosmic rays, creating one of the largest uncertainties in atmospheric models and climate predictions.

Jasper Kirkby and colleagues present the first results from the CLOUD experiment at CERN, which studies nucleation and other ion-aerosol cloud interactions under precisely controlled conditions. They find that atmospherically relevant ammonia mixing ratios of 100 parts per trillion by volume increase the nucleation rate of sulphuric acid particles by more than a factor of 100 to 1,000. They also find that ion-induced binary nucleation of H2SO4–H2O can occur in the mid-troposphere, but is negligible in the boundary layer and so additional species are necessary. Even with the large enhancements in rate caused by ammonia and ions, they conclude that atmospheric concentrations of ammonia and sulphuric acid are insufficient to account for observed boundary layer nucleation.

Below is a link to listing of the CLOUD paper by Kirkby et al.

Monday, August 29, 2011

Wind Speed Discrepancy Hurricane Irene

Hurricane Irene brushes Atlantic Coast 
Wind Speed Controversy.

We observed, along with many others, a substantial discrepancy between wind speed readings reported by ground based weather stations and speeds attributed to NOAA as seen on TV News.

As a regular user of online weather-station data out here in Palm Springs, CA, I can confirm viewing many ground based Atlantic Coast weather stations on Sunday, August 28. What I saw were wind speeds reported by hundreds of automated ground based stations, in the 10-35 mph range. None of those I checked showed 60-80 mph readings.

The wind direction indicators followed the swirling cloud cover. I was very surprised at such low wind speed readings from weather stations in the heart of a reported hurricane. Hurricanes would be expected to generate wind speed readings of at least 60-90 mph.

The next surprise came when I heard TV news reporting 60-80 mph winds (and higher) at the same time and place where I was looking at weather station readings of 10-30 mph. BTW, "rapid fire" automated weather stations update readings in real time, so we were not looking at old data.

This huge discrepancy is a serious problem. 
Was the wind speed actually 10-30 mph, 60-80 mph, or 80-90 mph?

NOAA must respond to this problem; they owe the public an explanation.

Further support for the low wind speed readings comes from video reports on the ground. Anecdotal evidence from live TV video did not look like 60-90 mph wind conditions; it looked more like 10-30 mph winds, and reporters often commented that the winds were not very strong. Live video did show high surf and some flooding, but high wind conditions were not at all evident.

We have no explanation of this apparent discrepancy between ground based stations and the NOAA reports from TV News.  

This is a serious issue for observational meteorology. Why are the NOAA figures 30-40mph higher than the data from ground based weather stations?  So far we have no satisfactory explanation. Please comment if you have any relevant information.  

How do you use Weatherunderground?

To use weatherunderground, go to the site:
Enter any US city and state. When the city page comes up, scroll to the bottom for a list of weather stations. You can see local variations in wind speed, wind direction, temperature, etc. Or click on wundermap for a Google Maps overlay showing stations as icons with wind direction and wind speed in "flag format": 3 stripes = 30 mph wind speed. The wind direction is indicated by the direction the "flagpole" points.
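As a rough sketch, the flag-reading convention described above (each stripe representing 10 mph) can be written as a tiny helper. The function name here is ours, purely illustrative, and not part of any site's API:

```python
# Hypothetical helper for the stripe convention described above:
# each stripe on a station icon represents 10 mph of wind speed.
def wind_speed_mph(stripes):
    """Convert the stripe count on a station icon to wind speed in mph."""
    return 10 * stripes
```

So an icon showing 3 stripes reads as wind_speed_mph(3), i.e. 30 mph, matching the example in the text.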

Here are some screen shots showing radar images of IRENE as she came ashore in North Carolina August 27th. Ground based weather stations are also displayed.  Wind speeds are 30mph, just count the flags on the icon.  We also viewed real time data in numerical format on the same website. Go to the link below and scroll down to get to the images. 
Link to some images from

Check out here:
Link to Weatherunderground page for Manhattan NYC.
Just type in your city and state on the site home page to bring up a local map and station readings in your area. Click "wundermap" for an overlay on Google Maps with a real time display of radar images.

So far we have no explanation for this substantial discrepancy.

Comments welcome -  

Tuesday, July 5, 2011

What is Modeling?


What is modeling, anyhow? Good question. Generally, in the physical sciences and elsewhere, "Modeling" is a term having a specific meaning. Here is a simple definition.

Modeling is a procedure for numerical fitting and interpolation of existing sets of observational data by means of continuous functions having a collection of adjustable parameters. 

1.0 Models cannot predict anything in a causal sense.

The central aim of modeling is to provide a simplified analytic function or set of functions that match discrete data points and interpolate between them.  Models, therefore, do not predict anything in a causal sense.  Models simply generate sets of numbers that may be compared to sets of observations.

 In this discussion, we view data as a collection of discrete points embedded in an abstract continuum parameter space. Independent variables might include time, physical location, incident solar radiation flux, etc. Dependent variables are variables that can be identified with data. Examples of dependent variables are local temperatures, or the non-thermodynamic quantity "global average temperature" we hear about.  

A model is simply a function that maps independent variables to sets of numbers that may be compared to sets of observations, i.e. data sets.  

The model generates output. Model output consists of sets of values of the dependent variables. These numbers are the stuff the model generates. We say the observational data is "modeled" by sets of numbers generated by the model.  

1.1 Models of physical systems need not contain any physics.
Instead they contain hidden variables and adjustable parameters.

Besides independent variables, models contain a set of hidden variables. Hidden variables are usually of two kinds, fixed parameters and adjustable parameters. They are used to  formulate the functions that generate the output variables of the model.  

Fixed parameters  come from underlying laws of physics or other solidly trusted sources. Their values are taken as given. 

Adjustable parameters are hidden variables whose values can be specified  arbitrarily.   For a given set of specified values of the adjustable parameters, a specific model is obtained. Different models are easily obtained by changing the values of the adjustable parameter set.   
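The distinction between fixed and adjustable parameters can be sketched in a few lines. The function and values below are purely illustrative, not drawn from any real climate model:

```python
# Illustrative model of one independent variable x.
# SPEED_OF_SOUND plays the role of a fixed parameter (taken as given
# from trusted physics); a and b are adjustable parameters.
SPEED_OF_SOUND = 343.0  # m/s, fixed parameter

def model(x, a, b):
    # Different choices of (a, b) yield different models
    # built from the same functional form.
    return a * x + b * SPEED_OF_SOUND

model_one = model(2.0, a=1.0, b=0.0)  # one choice of adjustable parameters
model_two = model(2.0, a=0.0, b=1.0)  # another choice: a different model
```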

Notice there is no requirement that models obey the laws of physics. Rather, models are sets of functions that generate numbers that may be compared to observational data sets.

Modelers try to optimize their models by judicious choice of the set of adjustable parameters, by removing unnecessary adjustable parameters, etc. 
How do we know when the model is optimized?  One way is to validate it by comparison to a data set.

1.2 What is a validated model?
To validate a model the modeler first needs a data set of observations to model. This data set is necessarily a pre-existing set of observational data. This data set is sometimes called the base data set.  

Here's how the validation process goes....

To validate the model, the modeler goes through a tweaking process where various values of the adjustable parameters are tested, and model outputs are compared to the base data set. The comparison is usually made quantitative by some "goodness of fit" measure: a number or set of numbers that measures how well the model output emulates the actual observed data. For example, the sum of mean square differences between model variables and the base data set could be a goodness of fit parameter; the smaller, the better. This fitting procedure is usually done numerically, but can be done by eye in simple cases.
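As a concrete sketch, with purely illustrative data, the mean-square goodness-of-fit measure mentioned above might look like:

```python
# Goodness of fit as the mean of squared differences between
# model output and the base data set; smaller is better.
def mean_squared_error(model_values, base_data):
    n = len(base_data)
    return sum((m - d) ** 2 for m, d in zip(model_values, base_data)) / n

base_data = [1.0, 2.0, 3.0, 4.0]  # illustrative base data set
close_fit = [1.1, 1.9, 3.0, 4.1]  # model output near the data
poor_fit = [0.0, 4.0, 1.0, 6.0]   # model output far from the data

mse_close = mean_squared_error(close_fit, base_data)
mse_poor = mean_squared_error(poor_fit, base_data)
# mse_close is much smaller than mse_poor, so the first model fits better
```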

So far, we have model output that is restricted to be "close to" existing data, because that data is what we are trying to fit. Such models are very useful for data analysis. It is nice to have continuous curves that fit discrete data points. If nothing else, it helps us visually examine data sets, spot trends, and gain intuition about the data. All great stuff.

Notice the fit is performed over a range of independent variables comparable to the range spanned by the base data set. The goodness of fit is evaluated in this restricted range; that's where the existing data is.

If you are given values of the population of California for each census year, you will have a time dependent data set. However, you will have no data for the year 2040, so it is not possible to fit the model to the year 2040, and hence not possible to validate the model for that year. To make progress, we would fit the population model, say a straight line, to the existing data, with the range of the time variable restricted to the existing data.
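The census example can be sketched with a least-squares straight line. The population figures below are rough illustrative values in millions, not official census data:

```python
# Fit a straight line (least squares) to decennial population figures.
years = [1970, 1980, 1990, 2000, 2010]
population = [20.0, 23.7, 29.8, 33.9, 37.3]  # illustrative, millions

n = len(years)
mean_x = sum(years) / n
mean_y = sum(population) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(years, population))
         / sum((x - mean_x) ** 2 for x in years))
intercept = mean_y - slope * mean_x

def population_model(year):
    return slope * year + intercept

# Within 1970-2010 the line can be checked against the data points;
# population_model(2040) yields a number, but with no 2040 data there
# is nothing to validate it against.
```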

Once a satisfactory set of values for the adjustable parameters has been found, the model may be considered validated within the range of the data set. Models are not considered valid outside their range of validation. 

When models are used for extrapolation, the extrapolation must be re-validated as new data becomes available. In this way, past extrapolations can be invalidated and identified as such.
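One minimal way to express this re-validation step, with made-up observations and a made-up tolerance:

```python
# Re-validate an extrapolation as new data arrives: compare model
# output against observations outside the original fitted range.
def revalidate(model, new_points, tolerance):
    """new_points: list of (x, observed) pairs outside the fitted range."""
    errors = [abs(model(x) - y) for x, y in new_points]
    return max(errors) <= tolerance

# A straight-line model fitted (elsewhere) to data for x in [0, 10]:
def fitted(x):
    return 2.0 * x + 1.0

# New observations at x = 11, 12 agree within tolerance -> still valid:
still_valid = revalidate(fitted, [(11, 23.1), (12, 24.8)], tolerance=0.5)
# A later observation far off the line invalidates the extrapolation:
invalidated = revalidate(fitted, [(20, 30.0)], tolerance=0.5)
```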

1.3 Model differential equations and pseudo-causality.
Modelers often spice up the mix by invoking sets of model differential equations that may be solved numerically to propagate the model into the future. Thus models may contain time dependent differential equations having derivatives emulating causal behavior.  

Such model equations may have some physics in them, but inevitably they leave out important physical processes.  Hence, they are not truly causal because they do not obey the causality of the underlying laws of physics. Such time dependent  models may be termed pseudo-causal to distinguish them from the fully causal laws of physics.  More on causality later.

Numerical models that solve truncated sets of fluid equations, such as General Circulation Models (GCMs), are examples of pseudo-causal models. Extrapolations of GCMs are not guaranteed to agree with future observations. Rather the opposite: all extrapolations must eventually diverge from future observations. These models are only approximately causal.

GCMs and other models require the same disclaimer as stock brokers:
   "Past performance is not a guarantee of future accuracy."

1.4 Can models provide "too good" a fit to the base data?

If a model has enough adjustable parameters it can fit any data set with great accuracy, e.g. John von Neumann's elephant. Excessively large sets of adjustable parameters produce deceptively pretty looking data plots. In fact, it is considered bad practice to fit data with too many parameters. Over-parameterized models have many problems: their extrapolations tend to be unreliable, their derivatives fluctuate between data points, and they exhibit rapidly growing instabilities.

Paradoxically, models that produce impressive agreement with base data sets tend to fail badly in extrapolation.

If the fit to the base data set is too good, it probably means the modeler has used too many adjustable parameters. A good modeler will find a minimal set of basis functions and a minimal set of adjustable parameters that skillfully fit the base data set to a reasonable accuracy, and so minimize the amount of arbitrariness in the model. This will also tend to slow the rate of divergence upon extrapolation.

1.5 What are the basis functions of models?

Models make use of a set of basis functions. For example, the functions X, X^2, X^3, X^4, ... are divergent functions used in polynomial regression fits. The problem is, such functions tend to +/- infinity in the limit of large values of the independent variable X, and do so more rapidly for higher powers of X. These basis functions are unbounded, so extrapolations always diverge.
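A small numerical illustration of this divergence, with arbitrary made-up coefficients:

```python
# Evaluate a polynomial model sum_k coeffs[k] * x**k.
def poly(x, coeffs):
    return sum(c * x ** k for k, c in enumerate(coeffs))

cubic = [0.0, 1.0, -0.5, 0.1]  # arbitrary illustrative coefficients

inside_range = poly(2.0, cubic)    # modest value inside a typical fitted range
extrapolated = poly(100.0, cubic)  # far outside: the x**3 term dominates
# The extrapolated value is orders of magnitude larger than the
# in-range value; higher-degree polynomials diverge faster still.
```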

One approach is to choose bounded functions for the basis set. The periodic functions {C, sin(X), cos(X), sin(2X), cos(2X), ...}, where C is the constant function, are an example of a bounded basis set. At least extrapolations of bounded functions will not diverge to infinity. Comforting.

1.6 Periodic phenomena make modelers look good.

 Many natural phenomena are periodic or approximately periodic. If a time series data set repeats itself on a regular basis then it can be modeled accurately with a small collection of periodic functions, sines and cosines. We do not have to solve the orbital dynamics equations in real time to predict with great accuracy that the sun will come up tomorrow.  

Complex systems may also display quasi-periodic behavior. So-called non-linear phenomena may repeat with a slowly changing frequency and amplitude.  Simple periodic models tend to do very well in extrapolation over multiple periods into the future. Moreover, periodic models do not diverge upon extrapolation. They simply assert that the future is going to be a repeat of the past. 
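A one-line periodic model makes the point; the constants are illustrative:

```python
import math

# A periodic model stays bounded under extrapolation: it asserts
# that the future is a repeat of the past.
def periodic_model(t, c, a, b, period):
    return (c + a * math.sin(2 * math.pi * t / period)
              + b * math.cos(2 * math.pi * t / period))

# With period 1, the model gives the same value many periods later:
v_now = periodic_model(0.25, c=10.0, a=2.0, b=0.5, period=1.0)
v_future = periodic_model(100.25, c=10.0, a=2.0, b=0.5, period=1.0)
# v_now and v_future agree to floating-point precision, and the model
# can never exceed c + sqrt(a**2 + b**2), no matter how far we extrapolate.
```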

When models extrapolate non-periodically, it's a red flag. Extrapolations of aperiodic (i.e. non-periodic) models are much more likely to be invalid, as discussed here.

1.7 The Climate can be Cooling and Warming at the Same Time. 
Climate, Weather, and Multiple Timescales.

When discussing climate and weather, it is very important to be specific about the timescale of change. Earthly phenomena described as "climate" and "weather" take place over an astonishingly wide range of timescales. In general, we can be talking about minutes, hours, days, months, years, decades, centuries, millennia, tens of thousands of years, hundreds of thousands of years, millions of years, and longer.

For example, the Vostok ice core data discussed in a previous post provides evidence for periodic climate cycles on time scales of thousands of years up to hundreds of thousands of years, but little information on the hundred year and shorter timescales, and little information about millions of years and longer. From the Vostok data it is clear that the earth is undergoing a many thousands of years long warming cycle, and in roughly 5000 years will begin a cooling cycle leading to another ice age. 

Such cyclic phenomena on these long timescales are likely to repeat because they have done so in the past over many cycles for hundreds of thousands and millions of years.  One can reliably predict that the earth will begin a cooling cycle and a repeat of the ice age cycle in a few thousand years.

What about the timescales ranging from one year to one thousand years? On these timescales hourly variations of the weather and seasonal changes are averaged out, and one can look for trends and cycles having periods of a few years to a thousand years. These are the shortest timescales that can be treated as climate change timescales. On these shorter timescales, the distinction between climate and weather becomes less obvious and more arbitrary.

Because of this multiple timescale property of climate and weather, it is possible for the climate and weather to be warming on a shorter timescale and be cooling on a longer timescale. 

Paradoxically it is entirely reasonable for the climate to be warming and cooling at the same time.  More correctly, it is entirely possible for the climate to be cooling on the decade timescale, and simultaneously warming on the thousand year timescale, because decade long cooling trends may average-out over the thousand year timescale. 
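This can be illustrated with a toy signal: a slow warming trend plus a faster oscillation. All numbers here are invented purely for illustration:

```python
import math

# Toy "climate" signal: a slow millennial warming trend plus a
# faster (roughly 60-year) oscillation. Purely illustrative numbers.
def temperature(year):
    slow_warming = 0.001 * year  # +1 degree per 1000 years
    decadal_cycle = 0.5 * math.cos(2 * math.pi * year / 60.0)
    return slow_warming + decadal_cycle

# Over a full millennium the trend is upward...
millennial_trend = temperature(1000) - temperature(0)
# ...yet over a decade on the falling side of the oscillation,
# the very same signal is cooling:
decadal_trend = temperature(10) - temperature(0)
```

The same function is warming on one timescale and cooling on another, which is exactly the multiple-timescale point made above.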

There is much more that may be said about multiple timescale analysis of  weather-climate phenomena.  

For now, remember this: Climate-Weather changes on a hierarchy of timescales. 

It is meaningless to claim the climate is warming without clearly understanding the timescale of the phenomenon and where it fits into the larger hierarchy of climate timescales.

2.0 Extrapolation of models is inherently unreliable.
What about extrapolation? Often, modelers are asked to extrapolate their models beyond the validated range of independent variables, into the unknown future or elsewhere. These extrapolations are notoriously unreliable for several reasons: (1) models do not obey causality; (2) they may not properly conserve invariants of the underlying physical system; (3) they are often mathematically unstable and exhibit divergent behavior in the limit of large independent variables; and (4) non-linear regression fits used in climate modeling are especially prone to instability. Such models will inevitably "predict" catastrophic values of the dependent variables as an artifact of their instability.

Of course, no actual predicting is going on in such models, merely extrapolation of the model beyond its validated domain.

2.1 What's the difference between models and simulations?
Often the distinctions between models and simulations may not be very important. Both might give us cool looking numerical output, including 3D movies. Cool, but is it real? That is, are we seeing just pretty pictures or does the display rigorously reproduce the full physics of a real system? 

Sometimes the distinctions between models and simulations are important.  In the scientific community two broad types of numerical computations are distinguished. They are Models and Simulations. So what's the difference? Both use computers right? Yes, but....

The main difference is that simulations solve the fundamental equations of the physical system in a (more or less) rigorous fashion. Models, by contrast, do not have to meet this standard of rigor; they can be greatly simplified versions of the problem, or might not contain any real physics at all.

For example, one of the most widely used types of models involves fitting experimental data to sets of continuous functions. Curve fitting, linear regression, and non-linear regression are techniques that generate models of the data by simply fitting existing data with adjustable functions. No physics needed at all, just fitting. But often very useful.

So, models are open ended and can be more or less anything that accomplishes the purpose.  

Models can be seductive. "They look so real" but models cannot be as real as real reality(!)  

This brings us to the issue of causality. It can be said that models as a class do not obey the causality implicit in the complete fundamental physics equations of the system. This limitation is important to recognize.

2.2 Simulations obey causality, models do not.
If a model were to include the real physics of the complete system, it would be a simulation, not a model. Simulations obey causality. Simulations usually consist of sets of time dependent coupled partial differential equations (PDEs) subject to realistic boundary/initial conditions. Simulations are numerically solvable, rigorous formulations of underlying physical laws.

Here's an example of a simulation.
Simulations are often used to examine the evolution of temperature in fluid systems. If the temperature is non-uniform, then the system is far from true thermodynamic equilibrium. However, fluids very often satisfy the requirements for local thermodynamic equilibrium. This simply means that a local temperature can be defined in the medium. This temperature is represented by a scalar field that varies continuously with location and time.

Such systems will exhibit thermal transport, a characteristic of atmospheres and oceans. Often problems of thermal transport can be well described by relatively simple sets of fully causal partial differential equations. 

If robust numerical solvers exist then the complete equations can be solved very accurately by a simulation code. The output of the simulation code would then reliably predict the time evolution of a real system. That is, a good simulation will predict the future of the system. 

Of course, care must be taken that the numerical tools give us the right answer. As long as the solver is accurate, the simulation is guaranteed to follow the same causal physics as the real system. The output of a good simulation code is like a numerical experiment: it mirrors reality, including the future (if done right).
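A minimal sketch of such a simulation is an explicit finite-difference solver for the 1-D heat equation dT/dt = k d2T/dx2, a fully causal PDE. The grid size, diffusivity, and initial profile below are illustrative choices, not taken from any real simulation code:

```python
# Explicit finite-difference solver for the 1-D heat equation
# dT/dt = k * d2T/dx2 on a rod with fixed cold ends.
k = 1.0   # thermal diffusivity (illustrative units)
dx = 1.0
dt = 0.2  # satisfies the explicit stability condition k*dt/dx**2 <= 0.5
nx = 21

# Initial condition: a hot spot in the middle of a cold rod
T = [0.0] * nx
T[nx // 2] = 100.0

for step in range(200):
    new_T = T[:]
    for i in range(1, nx - 1):
        new_T[i] = T[i] + k * dt / dx**2 * (T[i+1] - 2*T[i] + T[i-1])
    T = new_T  # boundaries T[0] and T[-1] stay fixed at 0

# Heat diffuses outward causally: the central peak decays
# while neighboring points warm up.
```

Because the solver marches the PDE forward in time from an initial state, its output at each step is determined by the prior state, which is the causal behavior described above.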

2.3 Subtle aspects of causality in physics lie beyond the scope of this discussion. But it's very interesting so... a few highlights.

In practice, most simulation codes solve formulations of the fluid equations and related field equations of classical physics.  In these cases the simple classical definition of causality is obeyed. 

Quantum mechanics experts know that quantum mechanical systems have a probabilistic nature. When quantum effects are important, some aspects of causality are lost.  However, even in quantum systems, the fundamental probability amplitudes, or wave functions of quantum theory, themselves obey differential equations that "propagate" these functions forward in time in a causal manner.  Roughly speaking, the wave functions evolve continuously and causally in time such that the statistical properties of quantum systems, expectation values of observable single and multi-particle operators, revert to classical causality in the limit of "large quantum numbers."  

Even classical systems can exhibit stochastic or chaotic behavior in some situations, for example the so-called butterfly effect. The task of simulating many-particle systems subject to stochastic or chaotic behavior is challenging. However, for the important case of many-particle systems having sufficiently many degrees of freedom, chaotic effects often tend to be "washed out" by other effects. Perhaps this is an oversimplification.

A related and absolutely fascinating phenomenon of continuous fluid systems is the possibility of self-organization.  The microscopic behavior of self-organizing systems can conspire to generate large scale organized flows. The jet stream in the earth's atmosphere is an example of such an organized flow, sometimes called a zonal flow. The jet stream is a vast high speed wind current in the upper atmosphere that can persist and move around as an organized entity. The color bands in Jupiter's atmosphere and the great red spot appear to be such zonal flows. Simulating the formation and evolution of such large scale organized flows is a challenging problem addressed in various atmospheric and oceanic simulation codes.  Amazing stuff.

Now we are getting into specialized stuff that is way beyond the scope of this brief discussion. For more on this, consult the extensive popular literature.  

Now let's summarize our conclusions about models,  modeling, and the inherent unreliability of extrapolation. 

2.4 Summary and Conclusions about Models.

In most fields of physics, models are considered useful tools for data analysis, but their known limitations and range of validity are widely appreciated. There are just too many ways for extrapolations of models to go wrong. 

Models do not obey causality nor can they properly "predict" anything in the causal sense. Models provide sets of numbers that can be compared to sets of observational data. 

Models are not simulations. Models may contain none of the physics, or some of the physics, but never all of the physics of the system.

Extrapolation of a model inevitably takes the model outside its validated domain. When extrapolation is necessary, it must be done conservatively and cautiously. Further, extrapolations must be validated against new data as it becomes available. Conservative extrapolations are more likely to be validated by future observations.

3.0 Is the methodology of climate modeling inherently unreliable?

Now that we are familiar with the inherent limitations of models in general, an important question can be asked about the methodology of climate modeling. Are climate models being extrapolated beyond their domain of validity? It certainly seems to be the case: climate model extrapolations are often found to be in disagreement with new data. There is extensive literature on this subject.

We are concerned with a more fundamental issue. It seems non-causality is a property of the methodology of climate modeling. Climate models don't contain all of the relevant physics. In a fundamental sense, such models cannot reliably predict the future of the real climate.  

We can also observe that it is incorrect to give weight to inherently unreliable extrapolations of climate models. Especially troubling are extrapolations of such models beyond the known range of their mathematical validity.

Of course, most everyone in the hard sciences knows all of this. So my question might be reformulated as: 

Why are extrapolations of climate models given weight, when the methodology is known to be inherently unreliable in extrapolation? 

Models are not infallible, and climate models are not infallible. Models are known to be unreliable when extrapolated beyond their validated range.

Maybe that's enough for the moment. Responses welcome. A little dialog is good, but let's keep it on the top two levels of the Graham hierarchy.