week 10 follow up

I do not think I covered the questions correctly in my first post. I apologize if I offended anyone; no offense was intended.

I was stuck on the idea of fixing the glaring mistakes and errors that more than a few dozen of the articles I have read point out: each describes how hard the authors tried with their own work, while arguing that other articles and other methods are insufficient.

You have to know what you are looking for in either case for both to work. Not knowing what to look for, or refusing to see evidence because it does not fit, is a morbidity and mortality (M&M) type problem. Consider how many licensed doctors went into an M&M conference completely convinced they were the best surgeon in the world, only to come out of the conference without a license, or knowing their arrogance had killed a patient. You have to know, and accept, what you are looking for.

This weekend, while doing the reading about Charlemagne, I saw the code appear, and it took me a moment to say, "wait, hold everything, I saw the code." Tracking the code backwards, I found it in the most unlikely of places. Either way, the scholar has to know what they need to be looking for; discounting variables because they do not fit is standard operating procedure (SOP) in both ways of performing public research.

I have been stuck on that for a while now; I started noticing the trend in articles about a decade ago.

Both positions,

Position A: probability sampling,

and

Position B: nonprobability (or purposive) sampling,

have serious negatives which the other way of performing the task does not "fix." Both sides carry too many statistical negatives to discount.

This weekend I did some research on a thing I am working on. The details would take about 700 words just to give the briefest outline. Rome was not founded in a day; based on the physical evidence left behind, the city more likely held no fewer than about 100,000 people when the twins Romulus and Remus were born, and more likely 1.1–5 million: a huge metropolitan city. It was one of the backup capitals for the Old Kingdom of Egypt and the 18th Dynasty (since they are the same family, just 600-plus years apart: 2600–2100 BCE and 1530–1330 BCE).

Showing evidence that Egypt used to have the name x is not difficult; showing the Kingdom of x on maps today is beyond easy (just Google the x name; unimportant for this discussion); showing the code in three places in America is also beyond easy. Boston itself has an area whose name is only about a 5% change from the x name. The x name still exists in y city (one of the most critically important cities for all of psychology, since Wundt and James were in that city, decades apart, to learn and form the foundation which became this degree and this class). The x name also still exists, with only about a 2% change from Gaelic to English, in another location.

However, I found hard physical etymology evidence that the "main" hill of Rome proper, the location mentioned twice in the founding legend, also has the code in its name, and that the name translates rather easily from Gaelic into Latin into English (no one knows what language was used on day one of Rome; Roman/Latin/Italian would not be invented until decades or centuries later). The code is present, and has been repeated innumerable times over the last 2,700 years by hundreds of cultures and fiefdoms. Being able to show the evidence that the Old Kingdom moved from Egypt to the Levant, then Italy, then France, then Britain, then America had my attention this weekend.
I have known for years about the x name belonging to the Old Kingdom, and of course about the x name being British (not English, mind you; the two cultures could hardly be more different), with pockets in America.

It took years to assemble the above using both formats of sampling the evidence from the documentation left behind. I have used random information; this weekend's idea sequence was pure chance. I have also used selective sampling to go find the people I needed information from. Dead or alive, information is information.

The solution I find works best is to use the equation formatting from theoretical physics. Limiting the tools builds in automatic limitations on what information is available.

We should stop relying on tools which have been proven not to work effectively outside clinical norms, since humans do not live their normal lives inside clinical settings.

Whale example: the usual academic argument on this situation is that you can take 10 million one-quart samples of water from the most random places in the oceans across the globe and not find a single molecule of evidence that whales exist, or, for that matter, that giant squid exist. The only reason science knew as a matter of fact that giant squid existed before a dead one got caught in a deep-sea fishing net and was hauled to the surface was that their beaks had been found in the bellies of whales. You have to look where whales are, when they are there, to obtain evidence of their presence.

However, the knowledgeable-sampling argument means the researcher has to know what they are looking for; an arrogant and egotistical position which does not produce superior results except by accident. Random sampling is just as good, because maybe by chance the correct time and place will be hit and evidence of whales found. Both have extreme limitations.

What would work better is a database with some type of organizational algorithm which can point out mathematically where the holes in the research are: where a sample of group b is needed at time c. The computer can then point out where some evidence has been so oversampled that the data is beyond skewed, and where other evidence is so missed it has few or no variables in the mix at all.
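The gap-pointing idea above can be sketched in a few lines. This is only a minimal illustration, assuming each sample is tagged with a (group, time) pair; the function name, thresholds, and the ocean example are all hypothetical, not a real survey design.

```python
from collections import Counter

def coverage_report(samples, groups, times, low=1, high=10):
    """Count samples per (group, time) cell, then flag gaps and skew.

    samples: iterable of (group, time) tags for each collected sample.
    groups, times: the full design grid the research should cover.
    low, high: illustrative thresholds for "undersampled" / "oversampled".
    """
    counts = Counter(samples)
    # Cells with too few samples: where to send the next expedition.
    gaps = [(g, t) for g in groups for t in times if counts[(g, t)] < low]
    # Cells sampled so heavily the aggregate data is skewed toward them.
    skewed = [cell for cell, n in counts.items() if n > high]
    return gaps, skewed

# Hypothetical ocean survey tagged by region and season: coastal summer
# water has been sampled 25 times, coastal winter only twice, deep water never.
samples = [("coastal", "summer")] * 25 + [("coastal", "winter")] * 2
gaps, skewed = coverage_report(samples,
                               groups=["coastal", "deep"],
                               times=["summer", "winter"],
                               low=3, high=10)
```

Here `gaps` lists the cells that still need samples (coastal winter and both deep-water cells), while `skewed` flags coastal summer as oversampled; the point is that the algorithm, not the researcher's intuition, says where to look next.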

Example: for 30 years the Colorado River was sampled and studied, but for that whole period the river had been in an x-year flood sequence. When the weather patterns changed and less than half the water became available, the communities depending on the flood-era volumes began to panic. Taking the same amount of water from the river, against less than half of the previous flood volume, caused huge cascades of problems for everyone involved. A super-large macro version of the data is needed and is not available. That is how to fix the issue: rebuild the entire system from zero, then take the tools and apply them so their weaknesses can be countered in the math models.
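The arithmetic behind that panic is simple to show. A minimal sketch, with entirely made-up numbers and units (not actual Colorado River data): hold the withdrawal constant while the inflow halves, and the storage drains at a fixed rate every year.

```python
def reservoir_levels(storage, inflow, withdrawal, years):
    """Project reservoir storage year by year under a fixed withdrawal.

    All quantities are in the same arbitrary units; the values used below
    are illustrative, not real hydrology data.
    """
    levels = []
    for _ in range(years):
        storage = max(storage + inflow - withdrawal, 0)  # cannot drop below empty
        levels.append(storage)
    return levels

# Flood-era planning: inflow matches withdrawal, so storage holds steady.
flood_era = reservoir_levels(50, inflow=10, withdrawal=10, years=5)

# Weather pattern shifts: inflow halves but the withdrawal stays the same,
# so storage now falls by 5 units every single year.
drought = reservoir_levels(50, inflow=5, withdrawal=10, years=5)
```

Thirty years of sampling only the flood-era regime produces the first projection; the macro model has to contain both regimes, or the second one arrives as a surprise.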

On this day in history the Space Shuttle Challenger exploded x seconds after launch, which proved an absolute failure in both aspects of research sampling. The random-sampling people refused to believe until it exploded; the know-what-to-look-for people did not think about what they needed to look for to find the evidence. Once the event happened, both sides had the evidence needed to know what the problem was. But it took the event itself to make both of their "total failure" tools find the previously unthought-of problem. A thousand engineers, and not one of them remembered that below 45° rubber is no longer the same type of substance that it is from 46° up to its melting temperature.

After years and years of randomly gathering evidence on the side about Egypt, I began to know what I was looking for. Now, when the next random piece of evidence comes, I know where to put it. But that requires thinking processes, ignoring the negative people saying nasty things, and a database to filter the variables which exist but which you have no idea what to do with yet.

Another point: as the Holocaust survivors die out from old age, anti-Semitism has begun to rear its evil head again. There are politicians alive and in office in Western cultures who are semi-openly anti-Semitic, and some are simply against the same "undesirables" the Third Reich was against. Same equation: the random influence of the survivors is diminishing, and without a strong opposing voice and presence, those targeting these groups are starting to scream loudly again. The pattern of 1880s Austria is repeating itself. "Until I see the whale, I refuse to believe whales exist" is the core of the problem with most of the measurement tools.