text
stringlengths 20
1.01M
| url
stringlengths 14
1.25k
| dump
stringlengths 9
15
⌀ | lang
stringclasses 4
values | source
stringclasses 4
values |
---|---|---|---|---|
Day 1 Keynote - Bjarne Stroustrup: C++11 Style
- Date: February 2, 2012 from 9:30AM to 11:00AM
- Day 1
- Speakers: Bjarne Stroustrup
- 205,916 Views
- 72 Comments
Something went wrong getting user information from Channel 9
Something went wrong getting user information from MSDN
Something went wrong getting the Visual Studio Achievements
Right click “Save as…”Slides.
Follow the Discussion
Good!!
Always eager to learn from the best. I'm definitely looking forward to watching Bjarne's interesting talk and the other GoingNative 2012 sessions!
Looking forward to this exciting session, Rocks!!
Looking forward to all the sessions. I am based in Manchester UK, must have checked the time in Redmond USA at least 20 times today :) cant wait.
We are gonna party like it is C++98 :P
Where are the live feed links?
Awesome talk!
Where can I access the recorded keynote?
You'll be able to access the recorded keynote and indeed all the sessions right here. Charles said it would take about +1 day to do the encoding and then the downloadable video files will be available.
Where can I download yesterday videos?
It was a great lecture!
But I haven't had the time to watch other speakers. I'll download the 1080p version of talks, since 1080p makes reading the code much a nicer experience.
EDIT: Charles, would it be possible to also publish the PowerPoint or PDF slides?
@undefined:Yes, where are the recorded sessions?
Had to work during almost all talks, so I'm looking forward to here all these presentations - saw a bit of the first day live but will enjoy the recordings of all presentations soon
. BTW: great selection of speakers: Bjarne, Sutter, Alexandrescu,…
@STL: great to here that range-base-for-loops will be in VC11.. though I'm a std::for_each-guy so that's not that big of a deal for me.
PS: looking forward to std::thread-support in VC…
The range-based for-loop is significantly less verbose than std::for_each() (my least favorite STL algorithm).
But using more specific STL algorithms is always a good idea.
The first qsort Example seems to be broken. I guess it goes to show how bad the API really is.
void f(char *arr, int m, ...) {
qsort(arr, m, sizeof(char*), cmpstringp);
}
He probably wanted a char *arr[].
Great talk so far.
btw. this website should support Unicode in names!
Thanks. Fixed for future uses.
A great talk!
I believe the 38 slide should read
shared_ptr<Gadget> p( new Gadget{n} );
instead of
shared_ptr<Gadget> p = new Gadget{n};
The same thing with the 39 slide.
I thought the talk on C++11 was great.
helpful
Can someone enlighten me about the syntax on page 62 (and 63) of slides:
double* f(const vector<double>& v); // read from v return result
double* g(const vector<double>& v); // read from v return result
void user(const vector<double>& some_vec) // note: const
{
double res1, res2;
thread t1 {[&]{ res1 = f(some_vec); }};
thread t2 {[&]{ res2 = g(some_vec); }};
// ...
t1.join();
t2.join();
cout << res1 << ' ' << res2 << '\n';
}
Isn't there a type mismatch between f() return and res1?
I took some sentence from the description of Bjarne description, because I am trying to find ressources, materials tutorials that show how to acheive this. For the previous standart or c++11. Anybody know good reference where i can find this now?
thanks
@undefined: Slides will be released with each session video! Like this one
C
Oh heck I give up.
Cool. Now we can see the invisible graph.
Yes, having to explain an invisible graph was a bit of a challenge :-)
Thanks for the comments; corrections will be applied to future versions of the talk.
It is interesting to ask if the software which supposed to show the graphs and the graph itself (i.e. invisible one) was written in C++. I noticed also some problems with fonts on one or two slides.
According to my experience these are typical problems in all kind of scientific presentations.
It is hard to believe that it is so difficult to avoid those problems with current technology. The same presentation on one computer looks very different on a different computer only because some other fonts are installed on that computer. In theory it is possible to embed the fonts with the presentation unfortunately this method do not work well in many cases (my own experience).
The only real solution is to transform the presentation into pdf or use some other software (or use your own computer but in many cases is not possible).
I saw these problems hundreds times in all kind of conferences and it looks like nobody in the MS Office Team cares about that (since the existence of MS Office).
The question about an easier way to declare getters and setters, anyone else think than Bjarne was just short of saying "I don't want that crap in my language"? =)
Nice. C++ may get bashed a lot, but its creator can certainly deliver a coherent presentation.() );
Shouldn't there at least be a performance difference between
sum=0; for(vector<int>::sizetype i=0;i<v.size();++i){ sum+=v[i]};
sum=0; for_each(v.begin(),v.end(),[&sum](int x){sum +=x;});
Since the first is calling size() during each loop, while the second (I believe) wouldn't constantly be rechecking the size, similarly to defining a vector<int>::sizetype end=v.size; and checking i<end?
I am also curious why there aren't at least some run times or something to back up the claim that there is no discernible difference between "several systems and several compilers"?
On my question about getters and setters in the video, I guess that these should be avoided; public data and object members should simply be declared public, despite what I've seen to be a common practice on many projects, and which seems to be promoted in many object oriented languages.
Ideally, there would be some way to overload the setting of an object or data "property" if the logic needs to be changed or limits imposed. I have created a Property template class in the past as Bjarne suggested, however, there is no elegant way for the parent class to overload the setter in the aggregated Property<T> member, and the syntax of accessing members of the property too often become property.get().member(), rather than property.member(), which is what you want to write.
From a language viewpoint, perhaps something like an overloaded "member access operator" would allow library writers to implement a setter or getter later if needed without changing user code. But without this, we suggest that if we need to change logic around setting or getting a member value, make the property private and recompile - we can easily find and update all the usages of the data member to use the new getter or setter.
So awesome to have Bjarne posting on C9!
Thank you, sir.
C
@undefined:Darren, here are my thoughts on your question. If you created a wrapper class for each public data member you could overload the assignment operator to perform bounds checking (as well as assignment) and perhaps throw an exception if necessary. That would solve the problem of assigning to a property without a setter. Of course, you would also have to overload all other meaningful operators for that property such as the boolean operators. You would have to repeat all this for each property, which in the end may be more trouble than it's worth. I can't really think of another way to do it, but I also haven't touched C++ in awhile so I could be wrong. Anyway, good luck.
I would really love to hear a talk, or read a paper, from Bjarne that discusses when to use OOP, and when to choose Functional or Type programming. For me, finding a balance has always been the most difficult part in software development. There just isn't one right way, but I'd love to hear his thoughts.
If anyone has any links to anything related that would be wonderful.
Nice
For those who are also members of the C++ Software Developers group on LinkedIn, I have started a discussion about what I believe are the most important features of C++ 11, and would love to see feedback and examples of good style that people would like to contribute. See
I watched the video few times.
I feel like we need some "fresh-minds" in defining what programming should look like, replacing Bjarne.
They had their era, time to move on.
My biggest problem with C++ (in big project) are the #includes that square the amount of source to compile (headers are compiled separately for each compilation unit).
Look how long it takes to compile firefox or KDE :-(
I think this is were we pay the cost for [over]using templates and/or inline functions.
Maybe there is something that could be fixed here? Maybe if we break backward compatibility (drop the preprocessor)? It's a pity that those problems were not mentioned here.
@pafinde: That's one of the things that modules seek to solve.
You can see the "invisible" graph in my posted slides.
I wrote a paper for IEEE Computer Magazine with very similar examples. See the January 2012 issue of Computer or my publications page:.
From C++ application development point of view, is there any place for compiler generated iterators in C++ (c# IEnumerable)? Seems like they may be implemented with zero overhead, like lambdas do.
I dont see any difference between the example
and the 'better' example
Both are understandable only if you use declarative parameter names as it is done with
which is equally understandable for me if you write
?
@bog: Thanks for this detailed answer to my comment.
I dont want to start nit-picking here. For sure 99.9999% of all programmers (me included) would use both corner points to define a rectangle. But you could also define it by its center point and any other point.
Or using the second constructor with Point top_left and Box_hw. Even if i would be 99% sure that i know what's meant, if i would readI would take a look at the implementation or read the docs to be sure.
So for me, using declarative parameter names highly improves the readability of interfaces.
After a night thinking about this issue I have to correct my first comment.
Within the meaning of this excellent talk, the examples using Point are the better ones. I was just misled by the different notations for good examples written with parameters and bad examples written without.
The Point example is better, because it implicates the possibility to use units like it is done by the Speed example.
The general point (sic) about the Point example is that sequences of arguments of the same type is prone to transposition of argument values. I consider it a well established fact that this is a significant source of errors. The implication is that we need to look for remedies. Using a more expressive and specific set of types is one approach.
A very good source of infomation..
Bjarne sir, I truly enjoyed, appreciated, and was influenced by your presentation. One thing that comes to mind is the ability for C++ to write performant, yet secure code.
I'm confused at one thing.
I can understand where he says, shared_ptr and unique_ptr,
but where he says why use a pointer, and then shows this code:
I'm pretty sure C++ wouldn't except that?
I've just run a test, and you can scope a variable like in Java now ^_^
it would be like this
Its amazing to see how C++ is catching up with .NET.
I've always been a C++ guy.
Thanks again.
Now? That last f() has worked for about two decades! It's pure C++98.
The "Gadget g {n};" in the original example, simply used the C++11 uniform initializer syntax, but is otherwise identical.
Wow, i must be honoured to get a reply from the man himself.
Thanks for the heads up Bjarne, C++ is really moving up.
So I can just pass a object by value by using rvalue references.
That is soo cool.
Tom
So, why can’t I read an unsigned char from an input stream?
When I try to read from "0", I get 060 and not 0 as expected.
And when I push (unsigned char) 0, I get "\0", not "0" in the output.
(1) Huh? unsigned char c; while(cin>>c) cout<<c<<'\n'; gives exactly what I expect
(2) The value of (unsigned char)0 *is* 0; not the value of the character '0'
Great presentation Bjarne. Honestly I have checked it a few times already
on the expense of not having watched the other videocasts yet...
Too bad the Vector vs Linked-List comparison kind of fell short. In spite of the graph-mishap I got inspired and tested it on a few different machines. For small amounts it was virtually the same but as the sets got larger there was a huge difference.It was fun to see - especially since I remember discussing this a couple of years ago (then I failed to see the larger picture).
Thanks again for the presentation!
To the guy asking about getters and setters using different get and set functions per class and while still keeping function inlining, this should work.
template<class OutType, class StoreType, class Controller>
class Property
{
private:
StoreType data;
public:
operator OutType()
{
return Controller::get(data);
}
OutType operator=(OutType a)
{
Controller::set(data, a);
return Controller::get(data);
}
};
class HPController
{
public:
static int get(int &a)
{
return a;
}
static void set(int &a, int &b)
{
a = b;
}
};
class Man
{
public:
Property<int, int, HPController> HP;
};
void PropertyTest()
{
Man man;
man.HP = 7;
cout << man.HP << "\n";
}
Thanks Bjarne!!!
I knew I wasn't stupid for wanting readable interfaces!! Hehe
@Ray: The problem with that approach becomes when your 'controller' needs to do something a bit more complex and needs the target object's state to decide what to do or needs to notify the target object to do something else upon a change.
In my experience I've found those cases the primary cases where I actually needed getters and setters.
So then in that case the Property class template needs to change to be able to contain a controller object which then holds a reference to the target object ( 'Man', in this case ), and the Controller then can not use static methods.
But then here is the bloat added.
So I like Darren's new proposal best - if they are logically publically available properties just leave them as public member variables.
In the future, when you realize that you need something more complex, either make them private and add getters and setters and modify the client code, or make a decorator that allows the assignment operator to work with them which calls the real getters and setter behind the scenes.
The truth is IOStream treats signed/unsigned chars as characters and not as numbers. Whether this is something to be expected I don't know.
Didn't know I could watch this on the internet.
I will definitely watch this as soon as I get off.
My suggestion is that when you write a char (short for "character") to an character I/O stream, you should expect to see that character on the output device. It takes quite some alternative learning to expect otherwise.
PS The "c" in cout, stands for "character"
PPS "If everything else fails read the manual"
I was trying to use his units code that was on the slide around 24:00, but the syntax he uses for the following doesn't seem to work with gcc 4.6.2 and -std=c++0x
using Speed = Value<Unit<1,0,-1>>;
I've never seen this use of the using directive before. Anybody know what is up with this?
@Luke: gcc 4.7 introduces support for template aliases. I have only 4.6.1 installed... I'll need to upgrade I guess
With gcc 4.7, the following works:
Speed sp1 = Value<Unit<1,0,-1>>(100); // 100 meters / second
But this does not (operator/ is not defined):
Speed sp1 = Value<Unit<1,0,0>>(100) / Value<Unit<0,0,1>>(1);
I guess he left out the part which would define all the arithmetic operators.
Yes, about two pages of things like this
template<class U1, class U2>
Value<typename Unit_plus<U1,U2>::type>
operator*(Value<U1> x, Value<U2> y)
{
return Value<typename Unit_plus<U1,U2>::type>(x.val*y.val);
}
and this
template<class U1, class U2>
struct Unit_plus {
typedef Unit<U1::m+U2::m,
U1::kg+U2::kg,
U1::s+U2::s
> type;
};
You can make that prettier in C++11, but I was using an old compiler (then), so I used old-fashioned, but effective, metaprogramming.
I like this one.
!bind comes from boost.
"Using !bind(pred, _1) in the first call to stable_partition() in the definition of the gather() function template (around minute 56 of the video) won't compile, will it? (Unless the wrapper object returned from bind() overloads operator!, which I don't think it does.)"
- from decades, code was been easy to learn first, because you were just need a few terms for programming. And you were done all things with that.
- Now, you need , you must use Interfaces , typedef specifics, classes globally existing in a namespace (like .NET) and you must know how you name it.
Yes, a box it's ok . This is a simple box. But, Box_hw ? how do you spell it ? You need now what you want to do but name it !
Is it more difficult for programmers ? No . Is it more difficult to remember the names ? No
It it always difficult to remember for beginners. But, if you are a beginner, engineer, you just need to remember all classes. For example, even google couldn't help you if you want a bicycle and you don't know how you spell a bicycle.
Now, differences between engineers : a few people know all classes ? well, but it's not very realistic.
Second, i love when i can be a C++ programmer since i know how to program in Java. That is in a good spirit.
Third, i love when he said "who does the delete ?" . Many bugs come from the bad documentation or a left program.
And else about Copy ? Not copy ? well, you can choice. You need to choice and need to say in the documentation that you can copy or not (thread safe ?).
After, it explains you should have use a Vector and not a List to insert incrementally your data because true OO type is a chained list. That is the difference and a time consuming with .NET List insertion. But, it's implementation dependent. You should know the implementation now.
Low level is should be not use : use standard templates instead. That's very C++ !
Remove this comment
Remove this threadclose | http://channel9.msdn.com/Events/GoingNative/GoingNative-2012/Keynote-Bjarne-Stroustrup-Cpp11-Style?format=smooth | CC-MAIN-2013-20 | en | refinedweb |
iPcTest Struct Reference
This is a test property class. More...
#include <propclass/test.h>
Inheritance diagram for iPcTest:
Detailed Description
This is a test property class.
This property class can send out the following messages to the behaviour (add prefix 'cel.parameter.' to get the ID for parameters):
- cel.misc.test.print: a message has been printed (message)
This property class supports the following actions (add prefix 'cel.action.' to get the ID of the action and add prefix 'cel.parameter.' to get the ID of the parameter):
- Print: parameters 'message' (string).
This property class supports the following properties (add prefix 'cel.property.' to get the ID of the property:
- counter (long, read/write): how many times something has been printed.
- max (long, read/write): maximum length of what was printed.
Definition at line 43 of file test.h.
Member Function Documentation
Print a message.
The documentation for this struct was generated from the following file:
Generated for CEL: Crystal Entity Layer 1.4.1 by doxygen 1.7.1 | http://crystalspace3d.org/cel/docs/online/api-1.4.1/structiPcTest.html | CC-MAIN-2013-20 | en | refinedweb |
XSNamespaceItem
The constructor to be used when a grammar pool contains all needed info.
The constructor to be used when the XSModel must represent all components in the union of an existing XSModel and a newly-created Grammar(s) from the GrammarResolver.
[annotations]: a set of annotations.
Convenience method.
Returns a top-level attribute declaration.
null
Returns a top-level attribute group definition.
[schema components]: a list of top-level components, i.e.
element declarations, attribute declarations, etc.
ELEMENT_DECLARATION
TYPE_DEFINITION
objectType
Returns a list of top-level component declarations that are defined within the specified namespace, i.e. element declarations, attribute declarations, etc.
namespace
Returns a top-level element declaration.
Returns a top-level model group definition.
A set of namespace schema information information items ( of type XSNamespaceItem), one for each namespace name which appears as the target namespace of any schema component in the schema used for that assessment, and one for absent if any schema component in the schema had no target namespace.
For more information see schema information.
Returns a list of all namespaces that belong to this schema. The value null is not a valid namespace name, but if there are components that don't have a target namespace, null is included in this list.
Returns a top-level notation declaration.
Returns a top-level simple or complex type definition.
XSTypeDefinition
Get the XSObject (i.e.
XSElementDeclaration) that corresponds to to a schema grammar component (i.e. SchemaElementDecl)
Optional.
Return a component given a component type and a unique Id. May not be supported for all component types.
[friend]
[protected] | http://docs.oracle.com/cd/E18050_01/tuxedo/docs11gr1/xmlparser/html/apiDocs/classXSModel.html | CC-MAIN-2013-20 | en | refinedweb |
as it violates type aliasing. Many compilers implement, as a non-standard language extension, the ability to read inactive members of a union.
#include <iostream> union S { std::int32_t n; // occupies 4 bytes std::uint16_t s[2]; // occupies 4 bytes std::uint8_t c; // occupies 1 byte }; // the whole union occupies 4 bytes int main() { S s = {0x12345678}; // initalizes the first member, s.n is now the active member // at this point, reading from s.s or s.c is UB std::cout << std::hex << "s.n = " << s.n << '\n'; s.s[0] = 0x0011; // s.s is now the active member // at this point, reading from n or c is UB but most compilers define this std::cout << "s.c is now " << +s.c << '\n' // 11 or 00, depending on platform << "s.n is now " << s.n << '\n'; // 12340011 or 00115678 }
Each member is allocated as if it is the only member of the class, which is why
s.c in the example above aliases the first byte of
s.s[0].
If members of a union are classes with user-defined constructors and destructors, to switch the active member, explicit destructor and placement new are generally needed:
#include <iostream> #include <string> #include <vector> union S { std::string str; std::vector<int> vec; ~S() {} // needs to know which member is active, only possible in union-like class }; // the whole union occupies max(sizeof(string), sizeof(vector<int>) int main() { S s = {"Hello, world"}; // at this point, reading from s.vec is UB std::cout << "s.str = " << s.str << '\n'; s.str.~basic_string<char>(); new (&s.vec) std::vector<int>; // now, s.vec is the active member of the union s.vec.push_back(10); std::cout << s.vec.size() << '\n'; s.vec.~vector<int>(); }
If two union members are standard-layout types, it's well-defined to examine their common subsequence on any compiler.
[edit] Anonymous unions
An unnamed union definition that does not define any objects is an anonymous union definition.
Anonymous unions have further restrictions: they cannot have member functions, cannot have static data members, and all their non-static data members must be public.
Members of an anonymous union are injected in the enclosing scope (and must not conflict with other names declared there).
int main() { union { int a; const char* p; }; a = 1; p = "Jennifer"; }
Namespace-scope anonymous unions must be static.
[edit] Union-like classes
A union-like class is any class with at least one anonymous union as a member. The members of that anonymous union are called variant members. Union-like classes can be used to implement tagged unions.
#include <iostream> // S has one non-static data member (tag), three enumerator members, // and three variant members (c, n, d) struct S { enum {CHAR, INT, DOUBLE} tag; union { char c; int n; double d; }; }; void print_s(const S& s) { switch(s.tag) { case S::CHAR: std::cout << s.c << '\n'; break; case S::INT: std::cout << s.n << '\n'; break; case S::DOUBLE: std::cout << s.d << '\n'; break; } } int main() { S s = {S::CHAR, 'a'}; print_s(s); s.tag = S::INT; s.n = 123; print_s(s); } | http://en.cppreference.com/w/cpp/language/union | CC-MAIN-2013-20 | en | refinedweb |
Deploying and running Django apps on Heroku is really a no-brainer, except for one thing — serving static files via
collectstatic.
I run
collectstatic as usual, using
heroku run command just like how I did it for
syncdb. It worked but I’m getting 404 error when serving static files.
It turns out that running
collectstatic via
heroku run spins up a new dyno and
collectstatic is running in an isolated environment. So, all the collected static files are deployed to that location and only for this new dyno and the dyno attached to our Django app can’t access that. — Heroku support staff
Solution
The dead simple solution would be to run
collectstatic as part of the Procfile before starting the app. We need to “chain” together our Procfile commands for the web process, like so:
OK there you go, no more 404 error when serving static files from your Django app on Heroku. Plus, every time you deploy your app, newly added static files will be collected automatically.
Update
There are a lot of questions about the configurations. Cross check with my settings.py here
Important thing here is your STATICFILES_DIRS. Make sure to include your project_name/app_name/static here. In my case, I have project_name/staticfiles for the STATIC_ROOT. Change STATIC_URL = ‘/static/’ if you want to serve from Heroku, same goes to ADMIN_MEDIA_PREFIX = ‘/static/admin/’
Finally add this to your urls.py in order to serve from Heroku.
urlpatterns += patterns(”,
(r’^static/(?P.*)$’, ‘django.views.static.serve’, {‘document_root’: settings.STATIC_ROOT}),
)
Files could be access as such:
/app/project_name/staticfiles/style.css >.
Pingback: Django Static Files on Heroku — elweb
Pingback: Managing Django Static Files on Heroku | Програмиране
I’ve just been looking into Heroku for hosting myself… I had read that the Cedar filesystem was ephemeral, i.e. wiped whenever the dyno is restarted. I thought that would preclude serving Django’s static files.
But do your commands above automatically run collectstatic when the new dyno is spun up, to reinstate them?
So it would just be user-uploaded files I’d need to run from S3 instead?
Sorry for the late, late reply.
It will run every time you do git push heroku master.
FYI, I’m using django storages & django compressor to serve static files via Amazon S3. In order for compressor to work, you’ll need a temp file system cache on Heroku, thus the solution above.
Hi there, am running into the same problem.
I wonder how you’ve setup your STATIC_URL, STATIC_ROOT and STATICFILES_DIR as I’m almost hitting my head on the wall now
Post has been updated. Check the settings.py
I have a question. I tried your snippet to get static but for me it doesn’t work.
What do you have in settings file? What is STATIC_URL and STATIC_ROOT?
With regards
Kamil
I’ve included a sample of my settings.py. Cross check with yours
Thanks
It’s working now. Founded solution.
So I have:
In Debug:
STATIC_ROOT = ‘static/’
STATIC_URL = ‘/static/’
and in urls.py
urlpatterns = patterns(”,
…####
#### urls urls urls…
)
+ static(settings.STATIC_URL, document_root=settings.STATIC_ROOT)
And it’s working great
Bingo!
Even with Kamil’s writeup, I’m not sure what you’re proposing here. Sure, running collectstatic gets all the files in one place.
But have you configured gunicorn to serve that directory separate from the Django staticfiles app?
(If you’re willing to run the staticfiles app with DEBUG on, there’s not even a need to collectstatic; it seems to serve things fine from wherever they are. But that’s “probably insecure”, says the Django docs. So I’m trying to figure out if what you Kamil is describing somehow gets heroku-nginx or gunicorn to serve the collectedstatic directory…)
With more research I’m guessing you’ve each settled on some variant like this in urls.py:
if not settings.DEBUG:
# screw the vague FUD and django doctrine against non-DEBUG static-serving
urlpatterns += patterns(”,
(r’^static/(?P.*)$’, ‘django.views.static.serve’, {‘document_root’ :settings.STATIC_ROOT}),
)
The comments for ‘django.views.static.serve’ include more of the usual admonitions about how one ‘SHOULD NOT’ do this in a ‘production setting’. But these similarly-worded-but-never-explained warnings have echoed across Django comments/docs so much they’re beginning to look like superstitions to me rather than actual best practices.
Yup. Check my updated post.
Thanks for this post! It was very helpful in getting django static files on heroku running.
However, I’ve noticed that every time I deploy a change, even without adding any new static files, collectstatic has to run every time and causes the app restart to be longer. This is sort of annoying because any users on the site would experience a long response if they made a request while this process was happening. Do you have any advice for this problem?
This is because the collectstatic command is part of the Procfile. Quick solution will be creating two versions of Procfile, one with the collectstatic command.
I can’t think of other cleaner solution.
Nice post but I have a problem. I can’t get this done. I’m getting:
unknown specifier: ?P.
My urls.py looks like:
admin.autodiscover()
urlpatterns = patterns(”,
…
…
)
if not settings.DEBUG:
urlpatterns += patterns”,
(r’^static/(?P.*)$’, ‘django.views.static.serve’, {‘document_root’: settings.STATIC_ROOT}),
)
Ok I did it. But still, heroku can’t find my static files
Make sure your STATIC_ROOT & STATICFILES_DIR are pointing to the right place.
Pingback: Heroku – Django app static files – Stack Overflow | Programmer Solution
I replace ’^static/(?P.*)$’ for just ’^static/(.*)$’ and now is working.
Don’t forget to import your settings file in your urls.py.
from yourapp import settings
Pingback: Heroku – Handling static files in Django app
Pingback: Your first Heroku Django app « Fear and Loathing on the Learning Curve » Observations on life, tech and web design from a slightly misanthropic mind.
Hey Mathew,
I followed what you have suggested, but I’m hitting an error: Could not import settings ‘my_django_app/settings.py’
More details here. Would appreciate any help!
Check my answer here
Pingback: Heroku Django: Handling static files. Could not import settings ‘my_django_app/settings.py’
Hi! Thank you for the blog post. However, I encountered this issue when running ‘foreman start’
13:59:40 web.1 | started with pid 2060
13:59:40 web.1 | Usage: manage.py collectstatic [options]
13:59:40 web.1 |
13:59:40 web.1 | Collect static files in a single location.
13:59:40 web.1 |
13:59:40 web.1 | manage.py: error: no such option: –noinput;
13:59:40 web.1 | process terminated
13:59:40 system | sending SIGTERM to all processes
I used this line in my Profile:
web: python manage.py collectstatic –noinput; gunicorn_django –workers=4 –bind=0.0.0.0:$PORT
It seems to be an issue with chaining the commands together on one line…any idea?
Seems like it’s an issue with foreman only. I pushed it to the Heroku repo and the app ran fine..very strange.
Yup, the chaining is meant for Heroku only.
Pingback: Heroku – Handling static files in Django app | PHP Developer Resource
Thanks for the post!! Question– how would I handle serving just a handful of static files on Heroku, and the majority via my main backend on Amazon S3? There are just a few files that I need on Heroku (to be on the same domain name… it’s related to browser compatibility issues), but I still want the rest of my files to be served via S3.
Any suggestions on how to go about this?
Hey Janelle,
I think that is possible. Just don’t use the {{ STATIC_URL }} template tag on the files that you want to serve from Heroku because it points to S3. Instead, use the absolute URL for these files.
For example: /app/lib/python2.7/site-packages/django/contrib/admin/media/css/login.css >
I wrote a bit about how to do this here:
So what’s stopping you from committing the changed static files after running a collectstatic locally? Seems to work for me so far.
In my settings.py I have:
PROJECT_DIR = os.path.dirname(__file__)
Then I use that in settings.py and urls.py to set the static directory… (Rather than a path starting in / as the manual insists on using…)
STATIC_ROOT = os.path.join(PROJECT_DIR, ‘static’)
for example…
Also, the procfile didn’t work. I got it working with:
web: python manage.py collectstatic –noinput; gunicorn -b 0.0.0.0:$PORT -w 4 [project name].wsgi:application
[project name] is the folder containing wsgi.py
The following was enough if I committed the static files:
web: gunicorn -b 0.0.0.0:$PORT -w 4 [project name].wsgi:application
Pingback: Django non-rel on Heroku with less and coffee-script compilation « Be Amity
Pingback: Deployment of static files to Heroku | Web App (B)Log
Thanks,
I wonder how you would handle static files versions, the best I could have come up with is –
Sorry but after updating your urls.py to serve the static files doesn’t that mean that every static file requested will have to be served by both the Dyno and the Django app instead of just the Dyno.
This will push extra load on your app and that is the reason why it is marked as not good for production in the docs. | http://matthewphiong.com/managing-django-static-files-on-heroku | CC-MAIN-2013-20 | en | refinedweb |
updated copyright years
\ paths.fs path file handling 03may97jaw \ Copyright (C) 1995,1996,1997,1998,2000,2003,2004,2005,2006,2007,2008. \ include string.fs [IFUNDEF] +place : +place ( adr len adr ) 2dup >r >r dup c@ char+ + swap move r> r> dup c@ rot + swap c! ; [THEN] [IFUNDEF] place : place ( c-addr1 u c-addr2 ) 2dup c! char+ swap move ; [THEN] Variable fpath ( -- path-addr ) \ gforth Variable ofile Variable tfile : os-cold ( -- ) fpath $init ofile $init tfile $init pathstring 2@ fpath only-path init-included-files ; \ The path Gforth uses for @code{included} and friends. : also-path ( c-addr len path-addr -- ) \ gforth \G add the directory @i{c-addr len} to @i{path-addr}. >r r@ $@len IF \ add separator if necessary s" |" r@ $+! 0 r@ $@ + 1- c! THEN r> $+! ; : clear-path ( path-addr -- ) \ gforth \G Set the path @i{path-addr} to empty. s" " rot $! ; : $@ ; : next-path ( addr u -- addr1 u1 addr2 u2 ) \ addr2 u2 is the first component of the path, addr1 u1 is the rest 0 $split 2swap ; : ; : pathsep? dup [char] / = swap [char] \ = or ; : need/ ofile $@ 1- + c@ pathsep? 0= IF s" /" ofile $+! THEN ; : extractpath ( adr len -- adr len2 ) BEGIN dup WHILE 1- 2dup + c@ pathsep? IF EXIT THEN REPEAT ; : remove~+ ( -- ) ofile $@ s" ~+/" string-prefix? IF ofile 0 3 $del THEN ; : expandtopic ( -- ) \ stack effect correct? - anton \ expands "./" into an absolute name ofile $@ s" ./" string-prefix? IF ofile $@ 1 /string tfile $! includefilename 2@ extractpath ofile $! \ care of / only if there is a directory ofile $@len IF need/ THEN tfile $@ over c@ pathsep? IF 1 /string THEN ofile $+!@ move r> endif endif + nip over - ; \ test cases: \ s" z/../../../a" compact-filename type cr \ s" ../z/../../../a/c" compact-filename type cr \ s" /././//./../..///x/y/../z/.././..//..//a//b/../c" compact-filename type cr : reworkdir ( -- ) remove~+ ofile $@ compact-filename nip ofile $!len ; : open-ofile ( -- fid ior ) \G opens the file whose name is in ofile expandtopic reworkdir ofile $@ r/o open-file ; : check-path ( adr1 len1 adr2 len2 -- fid 0 | 0 ior ) >r >r ofile $! need/ r> r> ofile $+! $! open-ofile dup 0= IF >r ofile $@ r> THEN EXIT ELSE r> -&37 >r path>string BEGIN next-path dup WHILE r> drop 5 pick 5 pick check-path dup 0= IF drop >r 2drop 2drop r> ofile $@ ; | http://www.complang.tuwien.ac.at/viewcvs/cgi-bin/viewcvs.cgi/gforth/kernel/paths.fs?view=auto&rev=1.37&sortby=rev&only_with_tag=MAIN | CC-MAIN-2013-20 | en | refinedweb |
Is there any way of running and compiling with known errors in the code.
Her is my reason. I am using a reference to word 2008 and word 2010, so as the program will work with both versions. Trouble is that if the computer I am using to test the code, only has one installed (naturally) so the program wont compile or run for me to test other parts of the program. There must be a way of ignoring error which wont make any difference to the run time compile program.
Is there any way of running and compiling with known errors in the code.
Compiling? yes, running? No because the program has to be error-free in order to execute the code. There is no point in trying to execute a program that contains compile-time errors. How do you expect the compiler to generate executable code when the source code is crap?
Do you really need references to both versions of word at the same time? If you have the reference to word 2010 just test your program on a computer that has word 2008 installed on it.
Not as easy as that, and it is not CRAP code it is CRAP software that doesn't allow for this to work. On VB6 it would have worked fine.
The reason for the errors is because word 2003 needs to have the declared
Imports Word = Microsoft.Office.Interop.Word to work, but 2007 onwards uses a completely different method and doesn;t recognise this statement, and thus the several hundred of statement that uses the "word" variable. The fact is that the compiled programme would never error because the code would route the programme to the correct version installed.
And I cant test on a computer that has 2010 on it as that will then error on the 2003 part of the code. nAnd in any case it is not so much as testing the programme as adding new code to other parts of the programme. I am at a loos as to what to do.
The only method I see available to me is to have a different database programme for each version of work, which seems ridiculous. But it looks like that is the way it has to be, or go back to VB6!
Couldn't you check the version and then conditionally branch from there? I found an example here: Click Here. Some sample code to look at might be helpful... By the way, what versions are you trying to support? The original post states Word 2008 and Word 2010, but Word 2008 is in Office 2008 for Mac only as far as I know.
The Microsoft.Office.Interop.Word namespace is documented on MSDN for Word 2003 and Word 2010 only, so apparently not distributed with any other versions of Office. That said, the Interop assemblies are available for redistribution. The assemblies for Office 2010 are found here: Click Here, I have no idea what will happen if you install and reference those assemblies on a system that has Word 2007 installed, and whatever code you write would have to be isolated by version and tested on a specific basis.
HKLM\Word.Application.CurVer also has the version number on my system (Office 2010), but I don't know whether that key exists in any/all other versions.
Again, it would be helpful to know what versions you need to support.
Yes, Interesting reading, and it uses VB6, which seems to work fine without the errors. I am trying to use all versions of word, ie 2000,2002,2003,2007 and 2010. But as 2000 and 2002 are too different I have decided to drop them.
I have now managed to convert the errors to warnings by somehow adding the references even though the computer doesn;t have the relevant versions installed, and it seems to work, I will know for sure when I try running the compiled programme on some other machines that only have one version installed, but I think it is going to work.
If you're all set, mark the thread as solved.
Thanks | http://www.daniweb.com/software-development/vbnet/threads/440593/force-compile-with-errors | CC-MAIN-2013-20 | en | refinedweb |
Case
[1] "It's been quite a year," thought Tom Moline." On top of
their normal efforts at hunger advocacy and education on campus,
the twenty students in the Hunger Concerns group were spending the
entire academic year conducting an extensive study of hunger in
sub-Saharan Africa. Tom's girlfriend, Karen Lindstrom, had
proposed the idea after she returned from a semester-abroad program
in Tanzania last spring. With tears of joy and sorrow, she
had described for the group the beauty and suffering of the people
and land. Wracked by AIDS, drought, and political unrest, the
nations in the region are also fighting a losing war against hunger
and malnutrition. While modest gains have been made for the
more than 800 million people in the world that are chronically
malnourished, sub-Saharan Africa is the only region in the world
where the number of hungry people is actually increasing. It
was not hard for Karen to persuade the group to focus attention on
this problem and so they decided to devote one of their two
meetings per month to this study. In the fall, Karen and Tom
led three meetings examining root causes of hunger in various forms
of powerlessness wrought by poverty, war, and drought.
[2] What Tom had not expected was the special attention the
group would give to the potential which biotechnology poses for
improving food security in the region. This came about for
two reasons. One was the participation of Adam Paulsen in the
group. Majoring in economics and management, Adam had spent
last summer as an intern in the Technology Cooperation Division of
Monsanto. Recognized, and often vilified, as a global leader
in the field of agricultural biotechnology, Monsanto has also been
quietly working with agricultural researchers around the world to
genetically modify crops that are important for subsistence
farmers. For example, Monsanto researchers have collaborated
with governmental and non-governmental research organizations to
develop virus-resistant potatoes in Mexico, "golden mustard" rich
in beta-carotene in India, and virus-resistant papaya in Southeast
Asia.
[3] In December, Adam gave a presentation to the group that
focused on the role Monsanto has played in developing
virus-resistant sweet potatoes for Kenya. Sweet potatoes are
grown widely in Kenya and other developing nations because they are
nutritious and can be stored beneath the ground until they need to
be harvested. The problem, however, is that pests and
diseases can reduce yields by up to 80 percent. Following
extensive research and development that began in 1991, the Kenyan
Agricultural Research Institute (KARI) began field tests of
genetically modified sweet potatoes in 2001. Adam concluded
his presentation by emphasizing what an important impact this
genetically modified (GM) crop could have on food security for
subsistence farmers. Even if losses were only cut in half,
that would still represent a huge increase in food for people who
are too poor to buy the food they need.
[4] The second reason the group wound up learning more about the
potential biotechnology poses for increasing food production in
Kenya was because a new member joined the group. Josephine
Omondi, a first-year international student, had read an
announcement about Adam's presentation in the campus newsletter and
knew right away that she had to attend. She was, after all, a
daughter of one of the scientists engaged in biotechnology research
at the KARI laboratories in Nairobi. Struggling with
homesickness, Josephine was eager to be among people that cared
about her country. She was also impressed with the accuracy
of Adam's presentation and struck up an immediate friendship with
him when they discovered they both knew Florence Wambugu, the
Kenyan researcher that had initiated the sweet potato project and
had worked in Monsanto's labs in St. Louis.
[5] Naturally, Josephine had much to offer the group. A
month after Adam's presentation, she provided a summary of other
biotechnology projects in Kenya. In one case, tissue culture
techniques are being employed to develop banana varieties free of
viruses and other diseases that plague small and large-scale banana
plantations. In another case, cloning techniques are being
utilized to produce more hearty and productive chrysanthemum
varieties, a plant that harbors a chemical, pyrethrum, that
functions as a natural insecticide. Kenya grows nearly half
the global supply of pyrethrum, which is converted elsewhere into
environmentally-friendly mosquito repellants and
insecticides.1
[6] Josephine reserved the majority of her remarks, however, for
two projects that involve the development of herbicide- and
insect-resistant varieties of maize (corn). Every year
stem-boring insects and a weed named Striga decimate up to 60
percent of Kenya's maize harvest.2
Nearly 50 percent of the food Kenyans consume is maize, but maize
production is falling. While the population of East Africa
grew by 20 percent from 1989 to 1998, maize harvests actually
declined during this period.3 Josephine
stressed that this is one of the main reasons the number of hungry
people is increasing in her country. As a result, Kenyan
researchers are working in partnership with the International Maize
and Wheat Improvement Center (CIMMYT) to develop corn varieties
that can resist Striga and combat stem-borers. With pride,
Josephine told the group that both projects are showing signs of
success. In January 2002, KARI scientists announced they had
developed maize varieties from a mutant that is naturally resistant
to a herbicide which is highly effective against Striga. In a
cost-effective process, farmers would recoverthe small cost of
seeds coated with the herbicide through yield increases of up to
400 percent.4
[7] On the other front, Josephine announced that significant
progress was also being made between CIMMYT and KARI in efforts to
genetically engineer "Bt" varieties of Kenyan maize that would
incorporate the gene that produces Bacillus thuringiensis, a
natural insecticide that is used widely by organic farmers.
Josephine concluded her remarks by saying how proud she was of her
father and the fact that poor subsistence farmers in Kenya are
starting to benefit from the fruits of biotechnology, long enjoyed
only by farmers in wealthy nations.
[8] A few days after Josephine's presentation, two members of
the Hunger Concerns group asked if they could meet with Tom since
he was serving as the group's coordinator. As an
environmental studies major, Kelly Ernst is an ardent advocate of
organic farming and a strident critic of industrial approaches to
agriculture. As much as she respected Josephine, she
expressed to Tom her deep concerns that Kenya was embarking on a
path that was unwise ecologically and economically. She
wanted to have a chance to tell the group about the ways organic
farming methods can combat the challenges posed by stem-borers and
Striga.
[9] Similarly, Terra Fielding thought it was important that the
Hunger Concerns group be made aware of the biosafety and human
health risks associated with genetically modified (GM) crops.
Like Terra, Tom was also a biology major so he understood her
concerns about the inadvertent creation of herbicide-resistant
"superweeds" and the likelihood that insects would eventually
develop resistance to Bt through prolonged exposure. He also
understood Terra's concern that it would be nearly impossible to
label GM crops produced in Kenya since most food goes directly from
the field to the table. As a result, few Kenyans would be
able to make an informed decision about whether or not to eat
genetically-engineered foods. Convinced that both sets of
concerns were significant, Tom invited Kelly and Terra to give
presentations in February and March.
[10] The wheels came off during the meeting in April,
however. At the end of a discussion Tom was facilitating
about how the group might share with the rest of the college what
they had learned about hunger in sub-Saharan Africa, Kelly Ernst
brought a different matter to the attention of the group: a plea to
join an international campaign by Greenpeace to ban GM crops.
In the murmurs of assent and disapproval that followed, Kelly
pressed ahead. She explained that she had learned about the
campaign through her participation in the Environmental Concerns
group on campus. They had decided to sign on to the campaign
and were now actively encouraging other groups on campus to join
the cause as well. Reiterating her respect for Josephine and
the work of her father in Kenya, Kelly nevertheless stressed that
Kenya could achieve its food security through organic farming
techniques rather than the "magic bullet" of GM crops, which she
argued pose huge risks to the well-being of the planet as well as
the welfare of Kenyans.
[11] Before Tom could open his mouth, Josephine offered a
counter proposal. Angry yet composed, she said she fully
expected the group to vote down Kelly's proposal, but that she
would not be satisfied with that alone. Instead, she
suggested that a fitting conclusion to their study this year would
be for the group to submit an article for the college newspaper
explaining the benefits that responsible use of agricultural
biotechnology poses for achieving food security in sub-Saharan
Africa, particularly in Kenya.
[12] A veritable riot of discussion ensued among the twenty
students. The group appeared to be evenly divided over the
two proposals. Since the meeting had already run well past
its normal ending time, Tom suggested that they think about both
proposals and then come to the next meeting prepared to make a
decision. Everybody seemed grateful for the chance to think
about it for a while, especially Tom and Karen.
II
[13] Three days later, an intense conversation was taking place
at a corner table after dinner in the cafeteria.
[14] "Come on, Adam. You're the one that told us people
are hungry because they are too poor to buy the food they need,"
said Kelly. "I can tell you right now that there is plenty of
food in the world; we just need to distribute it better. If
we quit feeding 60 percent of our grain in this country to animals,
there would be plenty of food for everyone."
[15] "That may be true, Kelly, but we don't live in some ideal
world where we can wave a magic wand and make food land on the
tables of people in Africa. A decent food distribution
infrastructure doesn't exist within most of the countries.
Moreover, most people in sub-Saharan Africa are so poor they
couldn't afford to buy our grain. And even if we just gave it
away, all we would do is impoverish local farmers in Africa because
there is no way they could compete with our free food. Until
these countries get on their feet and can trade in the global
marketplace, the best thing we can do for their economic
development is to promote agricultural production in their
countries. Genetically modified crops are just one part of a
mix of strategies that Kenyans are adopting to increase food
supplies. They have to be able to feed themselves."
[16] "Yes, Africans need to feed themselves," said Kelly, "but I
just don't think that they need to follow our high-tech approach to
agriculture. Look at what industrial agriculture has done to
our own country. We're still losing topsoil faster than we
can replenish it. Pesticides and fertilizers are still
fouling our streams and groundwater. Massive monocultures
only make crops more susceptible to plant diseases and pests.
At the same time, these monocultures are destroying
biodiversity. Our industrial approach to agriculture is
living off of biological capital that we are not replacing.
Our system of agriculture is not sustainable. Why in God's
name would we want to see others appropriate it?"
[17] "But that's not what we're talking about," Adam
replied. "The vast majority of farmers in the region are
farming a one hectare plot of land that amounts to less than 2.5
acres. They're not buying tractors. They're not using
fertilizer. They're not buying herbicides. They can't
afford those things. Instead, women and children spend most
of their days weeding between rows, picking bugs off of plants, or
hauling precious water. The cheapest and most important
technology they can afford is improved seed that can survive in
poor soils and resist weeds and pests. You heard Josephine's
report. Think of the positive impact that all of those
projects are going to have for poor farmers in Kenya."
[18] Kelly shook her head. "Come on, Adam. Farmers
have been fighting with the weather, poor soils, and pests
forever. How do you think we survived without modern farming
methods? It can be done. We know how to protect soil
fertility through crop rotations and letting ground rest for a
fallow period. We also know how to intercrop in ways that cut
down on plant diseases and pests. I can show you a great
article in WorldWatch magazine that demonstrates how organic
farmers in Kenya are defeating stem-borers and combating
Striga. In many cases they have cut crop losses down to 5
percent. All without genetic engineering and all the dangers
that come with it."
[19] Finally Karen broke in. "But if that knowledge is so
wide-spread, why are there so many hungry people in Kenya?
I've been to the region. Most farmers I saw already practice
some form of intercropping, but they can't afford to let their land
rest for a fallow period because there are too many mouths to
feed. They're caught in a vicious downward spiral.
Until their yields improve, the soils will continue to become more
degraded and less fertile."
[20] Adam and Kelly both nodded their heads, but for different
reasons. The conversation seemed to end where it began; with
more disagreement than agreement.
III
[21] Later that night, Tom was in the library talking with Terra
about their Entomology exam the next day. It didn't take long
for Terra to make the connections between the material they were
studying and her concerns about Bt crops in Kenya. "Tom, we
both know what has happened with chemical insecticide
applications. After a period of time, the few insects that
have an ability to resist the insecticide survive and
reproduce. Then you wind up with an insecticide that is no
longer effective against pests that are resistant to it. Bt
crops present an even more likely scenario for eventual resistance
because the insecticide is not sprayed on the crop every now and
then. Instead, Bt is manufactured in every cell of the plant
and is constantly present, which means pests are constantly
exposed. While this will have a devastating effect on those
insects that don't have a natural resistance to Bt, eventually
those that do will reproduce and a new class of Bt-resistant
insects will return to munch away on the crop. This would be
devastating for organic farmers because Bt is one of the few
natural insecticides they can use and still claim to be
organic."
[22] "I hear you, Terra. But I know that Bt farmers in the
U.S. are instructed by the seed distributors to plant refuges
around their Bt crops so that some pests will not be exposed to Bt
and will breed with the others that are exposed, thus compromising
the genetic advantage that others may have."
[23] "That's true, Tom, but it's my understanding that farmers
are not planting big enough refuges. The stuff I've read
suggests that if you're planting 100 acres in soybeans, 30 acres
should be left in non-Bt soybeans. But it doesn't appear that
farmers are doing that. And that's here in the States.
How reasonable is it to expect a poor, uneducated farmer in East
Africa to understand the need for a refuge and also to resist the
temptation to plant all of the land in Bt corn in order to raise
the yield?"
[24] As fate would have it, Josephine happened to walk by just
as Terra was posing her question to Tom. In response, she
fired off several questions of her own. "Are you suggesting
Kenyan farmers are less intelligent than U.S. farmers, Terra?
Do you think we cannot teach our farmers how to use these new gifts
in a wise way? Haven't farmers in this country learned from
mistakes they have made? Is it not possible that we too can
learn from any mistakes we make?"
[25] "Josephine, those are good questions. It's just that
we're talking about two very different agricultural
situations. Here you have less than two million farmers
feeding 280 million people. With a high literacy rate, a huge
agricultural extension system, e-mail, and computers, it is
relatively easy to provide farmers with the information they
need. But you said during your presentation that 70 percent
of Kenya's 30 million people are engaged in farming. Do you
really think you can teach all of those people how to properly
utilize Bt crops?"
[26] "First of all, U.S. farmers do not provide all of the food
in this country. Where do you think our morning coffee and
bananas come from? Rich nations import food every day from
developing nations, which have to raise cash crops in order to
import other things they need in order to develop, or to pay debts
to rich nations. You speak in sweeping generalizations.
Obviously not every farmer in Kenya will start planting Bt corn
tomorrow. Obviously my government will recognize the need to
educate farmers about the misuse of Bt and equip them to do
so. We care about the environment and have good policies in
place to protect it. We are not fools, Terra. We are
concerned about the biosafety of Kenya."
[27] Trying to take some of the heat off of Terra, Tom asked a
question he knew she wanted to ask. "What about the dangers
to human health, Josephine? The Europeans are so concerned
they have established a moratorium on all new patents of
genetically-engineered foods and have introduced GM labeling
requirements. While we haven't done that here in the U.S.,
many are concerned about severe allergic reactions that could be
caused by foods made from GM crops. Plus, we just don't know
what will happen over the long term as these genes interact or
mutate. Isn't it wise to be more cautious and go slowly?"
[28] There was nothing slow about Josephine's reply. "Tom,
we are concerned about the health and well-being of our
people. But there is one thing that you people don't
understand. We view risks related to agricultural
biotechnology differently. It is reasonable to be concerned
about the possible allergenicity of GM crops, and we test for
these, but we are not faced primarily with concerns about allergic
reactions in Kenya. We are faced with declining food supplies and
growing numbers of hungry people. As Terra said, our
situations are different. As a result, we view the possible
risks and benefits differently. The people of Kenya should be
able to decide these matters for themselves. We are tired of
other people deciding what is best for us. The colonial era
is over. You people need to get used to it."
[29] With that, Josephine left as suddenly as she had
arrived. Worn out and reflective, both Tom and Terra decided
to return to studying for their exam the next day.
IV
[30] On Friday night, Karen and Tom got together for their
weekly date. They decided to have dinner at a local
restaurant that had fairly private booths. After Karen's
semester in Tanzania last spring, they had learned to cherish the
time they spent together. Eventually they started talking
about the decision the Hunger Concerns group would have to make
next week. After Karen summarized her conversation with Kelly
and Adam, Tom described the exchange he and Terra had with
Josephine.
[31] Karen said, "You know, I realize that these environmental
and health issues are important, but I'm surprised that no one else
seems willing to step back and ask whether anyone should be doing
genetic engineering in the first place. Who are we to mess
with God's creation? What makes us think we can improve on
what God has made?"
[32] "But Karen," Tom replied, "human beings have been mixing
genes ever since we figured out how to breed animals or graft
branches onto apple trees. We didn't know we were engaged in
genetic manipulation, but now we know more about the science of
genetics, and that has led to these new technologies. One of
the reasons we can support six billion people on this planet is
because scientists during the Green Revolution used their God-given
intelligence to develop hybrid stocks of rice, corn, and other
cereal crops that boosted yields significantly. They achieved
most of their success by cross-breeding plants, but that takes a
long time and it is a fairly inexact process. Various
biotechnologies including genetic engineering make it possible for
us to reduce the time it takes to develop new varieties, and they
also enable us to transfer only the genes we want into the host
species. The first Green Revolution passed by Africa, but
this second biotechnology revolution could pay huge dividends for
countries in Africa."
[33] "I understand all of that, Tom. I guess what worries
me is that all of this high science will perpetuate the myth that
we are masters of the universe with some God-given mandate to
transform nature in our image. We have got to quit viewing
nature as a machine that we can take apart and put back
together. Nature is more than the sum of its parts.
This mechanistic mindset has left us with all sorts of major
ecological problems. The only reason hybrid seeds produced so
much food during the Green Revolution is because we poured tons of
fertilizer on them and kept them alive with irrigation water.
And what was the result? We produced lots of grain but also
huge amounts of water pollution and waterlogged soils. We
have more imagination than foresight. And so we wind up
developing another technological fix to get us out of the problem
our last technological innovation produced. Instead, we need
to figure out how to live in harmony with nature. Rather than
be independent, we need to realize our ecological
interdependence. We are made from the dust of the universe
and to the dust of the earth we will return."
[34] "Huh, I wonder if anyone would recognize you as a religion
major, Karen? I agree that our scientific and technological
abilities have outpaced our wisdom in their use, but does that mean
we can't learn from our mistakes? Ultimately, aren't
technologies just means that we put to the service of the ends we
want to pursue? Why can't we use genetic engineering to end
hunger? Why would God give us the brains to map and
manipulate genomes if God didn't think we could use that knowledge
to better care for creation? Scientists are already
developing the next wave of products that will give us inexpensive
ways to vaccinate people in developing nations from debilitating
diseases with foods like bananas that carry the vaccine. We
will also be able to make food more nutritious for those that get
precious little. Aren't those good things, Karen?"
[35] Karen, a bit defensive and edging toward the other side of
the horseshoe-shaped booth, said, "Look Tom, the way we live is
just not sustainable. It scares me to see people in China,
and Mexico, and Kenya all following us down the same unsustainable
road. There has got to be a better way. Kelly is
right. Human beings lived more sustainably in the past than
we do now. We need to learn from indigenous peoples how to
live in harmony with the earth. But instead, we seem to be
tempting them to adopt our expensive and inappropriate
technologies. It just doesn't seem right to encourage
developing nations like Kenya to make huge investments in
biotechnology when less expensive solutions might better address
their needs. I really do have my doubts about the ability to
teach farmers how to use these new seeds wisely. I've been
there, Tom. Farmers trade seeds freely and will always follow
a strategy that will produce the most food in the short-term
because people are hungry now. Eventually, whatever gains are
achieved by biotechnology will be lost as weeds and insects become
resistant or the soils just give out entirely from overuse.
But I am really struggling with this vote next week because I also
know that we should not be making decisions for other people.
They should be making decisions for themselves. Josephine is
my friend. I don't want to insult her. But I really do
think Kenya is heading down the wrong road."
[36] "So how are you going to vote next week, Karen?"
[37] "I don't know, Tom. Maybe I just won't show up.
How are you going to vote?"
Commentary
[38] This commentary offers background information on global
food security, agricultural biotechnology, and genetically modified
organisms before it turns to general concerns about genetically
modified crops and specific ethical questions raised by the
case.
Food Security
[39] The nations of the world made significant gains in social
development during the latter half of the 20th century. Since 1960,
life expectancy has risen by one third in developing nations, child
mortality has been cut in half, the percentage of people who have
access to clean water has more than doubled, and the total
enrollment in primary schools has increased by nearly two- thirds.
Similar progress has been made in achieving a greater measure of
food security. Even though the world's population has more than
doubled since 1960, food production grew at a slightly faster rate
so that today per capita food availability is up 24 percent. More
importantly, the proportion of people who suffer from food
insecurity has been cut in half from 37 percent in 1969 to 18
percent in 1995.5
[40] According to the International Food Policy Research
Institute, the world currently produces enough food to meet the
basic needs for each of the planet's six billion people.
Nevertheless, more than 800 million people suffer from food
insecurity. For various reasons, one out of every eight human
beings on the planet cannot produce or purchase the food they need
to lead healthy, productive lives. One out of every three
preschool-age children in developing nations is either malnourished
or severely underweight.6 Of these, 14 million children
become blind each year due to Vitamin A deficiency. Every day,
40,000 people die of illnesses related to their poor
diets.7
[41] Food security is particularly dire in sub-Saharan Africa. It
is the only region in the world where hunger has been increasing
rather than decreasing. Since 1970, the number ofmalnourished
people has increased as the amount of food produced per person has
declined.8
According to the United Nations Development Programme, half of the
673 million people living in sub-Saharan Africa at the beginning of
the 21st century are living in absolute poverty on less than $1 a
day.9 Not
surprisingly, one third of the people are undernourished. In the
eastern portion of this region, nearly half of the children suffer
from stunted growth as a result of their inadequate diets, and that
percentage is increasing.10 In Kenya, 23 percent of
children under the age of five suffer from
malnutrition.11
[42] Several factors contribute to food insecurity in sub-Saharan
Africa. Drought, inadequate water supplies, and crop losses to
pests and disease have devastating impacts on the amount of food
that is available. Less obvious factors, however, often have a
greater impact on food supply. Too frequently, governments in the
region spend valuable resources on weapons, which are then used in
civil or regional conflicts that displace people and reduce food
production. In addition, many governments-hamstrung by
international debt obligations-have pursued economic development
strategies that bypass subsistence farmers and focus on the
production of cash crops for export. As a result, a few countries
produce significant amounts of food, but it is shipped to wealthier
nations and is not available for local consumption. Storage and
transportation limitations also result in inefficient distribution
of surpluses when they are produced within nations in the
region.12
[43] Poverty is another significant factor. Globally, the gap
between the rich and the poor is enormous. For example, the $1,010
average annual purchasing power of a Kenyan pales in comparison
with the $31,910 available to a citizen of the United
States.13 Poor
people in developing nations typically spend 50-80 percent of their
incomes for food, in comparison to the 10-15 percent that people
spend in the United States or the European Union.14 Thus, while food may be
available for purchase, fluctuating market conditions often drive
prices up to unaffordable levels. In addition, poverty limits the
amount of resources a farmer can purchase to "improve" his or her
land and increase yields. Instead, soils are worked without rest in
order to produce food for people who already have too little to
eat.
[44] One way to deal with diminished food supplies or high prices
is through the ability to grow your own food. Over 70 percent of
the people living in sub-Saharan Africa are subsistence farmers,
but the amount of land available per person has been declining over
the last thirty years. While the concentration of land in the hands
of a few for export cropping plays an important role in this
problem, the primary problem is population growth in the region. As
population has grown, less arable land and food is available per
person. In 1970, Asia, Latin America, and Africa all had similar
population growth rates. Since then Asia has cut its rate of growth
by 25 percent, and Latin America has cut its rate by 20
percent.15 In
contrast, sub-Saharan Africa still has a very high population
growth rate, a high fertility rate, and an age structure where 44
percent of its population is under the age of fifteen. As a result,
the United Nations projects that the region's population will more
than double by 2050, even after taking into account the devastating
impact that AIDS will continue to have on many
countries.16
[45] Local food production will need to increase substantially in
the next few decades in order to meet the 133 percent projected
growth of the population in sub-Saharan Africa. Currently, food aid
donations from donor countries only represent 1.1 percent of the
food supply. The region produces 83 percent of its own food and
imports the rest.17 Given the limited financial
resources of these nations, increasing imports is not a viable
strategy for the future. Instead, greater efforts must be made to
stimulate agricultural production within the region, particularly
among subsistence farmers. Unlike Asia, however, increased
production will not likely be achieved through the irrigation of
fields and the application of fertilizer. Most farmers in the
region are simply too poor to afford these expensive inputs.
Instead, the main effort has been to improve the least expensive
input: seeds.
[46] A great deal of public and private research is focused on
developing new crop varieties that are resistant to drought, pests,
and disease and are also hearty enough to thrive in poor
soils.18 While
the vast majority of this research utilizes traditional
plant-breeding methods, nations like Kenya and South Africa are
actively researching ways that the appropriate use of biotechnology
can also increase agricultural yields. These nations, and a growing
list of others, agree with a recent statement by the United Nations
Food and Agriculture Organization:
[47]…. It [genetic engineering] could lead to
higher yields on marginal lands in countries that today cannot grow
enough food to feed their people. 19
Agricultural Biotechnology
[48] The United Nations Convention on Biological Diversity (CBD)
defines biotechnology as "any technological application that uses
biological systems, living organisms, or derivatives thereof, to
make or modify products or processes for specific
use."20 The
modification of living organisms is not an entirely new
development, however. Human beings have been grafting branches onto
fruit trees and breeding animals for desired traits since the
advent of agriculture 10,000 years ago. Recent advances in the
fields of molecular biology and genetics, however, considerably
magnify the power of human beings to understand and transform
living organisms.
[49] The cells of every living thing contain genes that determine
the function and appearance of the organism. Each cell contains
thousands of genes. Remarkably, there is very little difference in
the estimated number of genes in plant cells (26,000) and human
cells (30,000). Within each cell, clusters of these genes are
grouped together in long chains called chromosomes. Working in
isolation or in combination, these genes and chromosomes determine
the appearance, composition, and functions of an organism. The
complete list of genes and chromosomes in a particular species is
called the genome.21
[50] Like their predecessors, plant breeders and other
agricultural scientists are making use of this rapidly growing body
of knowledge to manipulate the genetic composition of crops and
livestock, albeit with unprecedented powers. Since the case focuses
only on genetically modified crops, this commentary will examine
briefly the use in Africa of the five most common applications of
biotechnology to plant breeding through the use of tissue culture,
marker-assisted selection, genetic engineering, genomics, and
bioinformatics.22
[51] Tissue culture techniques enable researchers to develop whole
plants from a single cell, or a small cluster of cells. After
scientists isolate the cell of a plant that is disease-free or
particularly hearty, they then use cloning techniques to produce
large numbers of these plants in vitro, in a petri dish. When the
plants reach sufficient maturity in the laboratory, they are
transplanted into agricultural settings where farmers can enjoy the
benefits of crops that are more hearty or disease-free. In the
case, Josephine describes accurately Kenyan successes in this area
with regard to bananas and the plants that produce pyrethrum. This
attempt to micro-propagate crops via tissue cultures constitutes
approximately 52 percent of the activities in the 37 African
countries engaged in various forms of biotechnology
research.23
[52] Marker-assisted selection techniques enable researchers to
identify desirable genes in a plant's genome. The identification
and tracking of these genes speeds up the process of conventional
cross-breeding and reduces the number of unwanted genes that are
transferred. The effort to develop insect-resistant maize in Kenya
uses this technology to identify local varieties of maize that have
greater measures of natural resistance to insects and disease.
South Africa, Zimbabwe, Nigeria, and Côte d'Ivoire are all
building laboratories to conduct this form of research.
24
[53] Genetic engineering involves the direct transfer of genetic
material between organisms. Whereas conventional crossbreeding
transfers genetic material in a more indirect and less efficient
manner through the traditional propagation of plants, genetic
engineering enables researchers to transfer specific genes directly
into the genome of a plant in vitro. Originally, scientists used
"gene guns" to shoot genetic material into cells. Increasingly,
researchers are using a naturally occurring plant pathogen,
Agrobacterium tumefaciens, to transfer genes more successfully and
selectively into cells. Eventually, Josephine's father intends to
make use of this technology to "engineer" local varieties of maize
that will include a gene from Bacillus thuringiensis (Bt), a
naturally occurring bacterium that interferes with the digestive
systems of insects that chew or burrow into plants. Recent reports
from South Africa indicate thatsmallholder farmers who have planted
a Bt variety of cotton have experienced "great
success."25
[54] Genomics is the study of how all the genes in an organism
work individually or together to express various traits. The
interaction of multiple genes is highly complex and studies aimed
at discerning these relationships require significant computing
power. Bioinformatics moves this research a step further by taking
this genomic information and exploring the ways it may be relevant
to understanding the gene content and gene order of similar
organisms. For example, researchers recently announced that they
had successfully mapped the genomes of two different rice
varieties.26
This information will likely produce improvements in rice yields,
but researchers drawing on the new discipline of bioinformatics
will also explore similarities between rice and other cereal crops
that have not yet been mapped. Nations like Kenya, however, have
not yet engaged in these two forms of biotechnology research
because of the high cost associated with the required computing
capacity.
Genetically Modified Organisms in Agriculture
[55] The first genetically modified organisms were
developed for industry and medicine, not agriculture. In 1972, a
researcher working for General Electric engineered a microbe that
fed upon spilled crude oil, transforming the oil into a more benign
substance. When a patent was applied for the organism, the case
made its way ultimately to the U.S. Supreme Court, which in 1980
ruled that a patent could be awarded for the modification of a
living organism. One year earlier, scientists had managed to splice
the gene that produces human growth hormone into a bacterium, thus
creating a new way to produce this vital hormone.27
[56] In 1994, Calgene introduced the Flavr-Savr tomato. It was the
first commercially produced, genetically modified food product.
Engineered to stay on the vine longer, develop more flavor, and
last longer on grocery shelves, consumers rejected the product not
primarily because it was genetically modified, but rather because
it was too expensive and did not taste any better than ordinary
tomatoes.28
[57] By 1996, the first generation of genetically modified (GM)
crops was approved for planting in six countries. These crops
included varieties of corn, soybeans, cotton, and canola that had
been engineered to resist pests or to tolerate some herbicides.
Virus resistance was also incorporated into some tomato, potato,
and tobacco varieties.
[58] Farmers in the United States quickly embraced these
genetically modified varieties because they reduced the cost of
pesticide and herbicide applications, and in some cases also
increased yields substantially. In 1996, 3.6 million acres were
planted in GM crops. By 2000 that number had grown to 75 million
acres and constituted 69 percent of the world's production of GM
crops.29
According to the U.S. Department of Agriculture's 2002 spring
survey, 74 percent of the nation's soybeans, 71 percent of cotton,
and 32 percent of the corn crop were planted in genetically
engineered varieties, an increase of approximately 5 percent over
2001 levels.30
[59] Among other developed nations, Canada produced 7 percent of
the world's GM crops in 2000, though Australia, France, and Spain
also had plantings.31 In developing nations, crop
area planted in GM varieties grew by over 50 percent between 1999
and 2000.32
Argentina produced 23 percent of the global total in 2000, along
with China, South Africa, Mexico, and Uruguay.33
[60] In Kenya, no GM crops have been approved for commercial
planting, though the Kenyan Agricultural Research Institute (KARI)
received government permission in 2001 to field test genetically
modified sweet potatoes that had been developed in cooperation with
Monsanto.34 In
addition, funding from the Novartis Foundation for Sustainable
Development is supporting research KARI is conducting in
partnership with the International Maize and Wheat and Improvement
Center (CIMYYT) to develop disease and insect-resistant varieties
of maize, including Bt maize.35 A similar funding
relationship with the Rockefeller Foundation is supporting research
to develop varieties of maize from a mutant type that is naturally
resistant to a herbicide thatis highly effective against Striga, a
weed that devastates much of Kenya's maize crop each
year.36 Striga
infests approximately 2040 million hectares of farmlandin
sub-Saharan Africa and reduces yields for an estimated 100 million
farmers by 20-80 percent.37
General Concerns about Genetically Modified (GM)
Crops
[61] The relatively sudden and significant growth of GM crops
around the world has raised various social, economic, and
environmental concerns. People in developed and developing
countries are concerned about threats these crops may pose to human
health and the environment. In addition, many fear that large
agribusiness corporations will gain even greater financial control
of agriculture and limit the options of small-scale farmers.
Finally, some are also raising theological questions about the
appropriateness of genetic engineering.
[62] Food Safety and Human Health. Some critics of GM foods in the
United States disagree with the government's stance that
genetically engineered food products are "substantially equivalent"
to foods derived from conventional plant breeding. Whereas
traditional plant breeders attempt to achieve expression of genetic
material within a species, genetic engineering enables researchers
to introduce genetic material from other species, families, or even
kingdoms. Because researchers can move genes from one life form
into any other, critics are concerned about creating novel
organisms that have no evolutionary history. Their concern is that
we do not know whatimpact these new products will have on human
health because they have never existed before.38
[63] Proponents of genetically engineered foods argue that genetic
modification is much more precise and less random than the methods
employed in traditional plant breeding. Whereas most genetically
engineered foods have involved the transfer of one or two genes
into the host, traditional crossbreeding results in the transfer of
thousands of genes. Proponents also note that GM crops have not
been proven to harm human health since they were approved for use
in 1996. Because the United States does not require the labeling of
genetically engineered foods, most consumers are not aware that
more than half of the products on most grocery store shelves are
made, at least in part, from products derived from GM crops. To
date, no serious human health problems have been attributed to GM
crops.39
Critics are not as sanguine about this brief track record and argue
that it is not possible to know the health effects of GM crops
because their related food products are not labeled.
[64] The potential allergenicity of genetically modified foods is
a concern that is shared by both critics and proponents of the
technology. It is possible that new genetic material may carry with
it substances that could trigger serious human allergic reactions.
Proponents, however, are more confident than critics that these
potential allergens can be identified in the testing process. As a
case in point, they note that researchers working for Pioneer Seeds
scuttled a project when they discovered that a genetically
engineered varietyof soybeans carried the gene that produces severe
allergic reactions associated with Brazil nuts.40 Critics, however, point to
the StarLink corn controversy as evidence of how potentially
dangerous products can easily slip into the human food supply.
Federal officials had only allowed StarLink corn to be used as an
animal feed because tests were inconclusive with regard to the
dangers it posed for human consumption. In September 2000, however,
StarLink corn was found first in a popular bran of taco shells and
later in other consumer goods. These findings prompted several
product recalls and cost Aventis, the producer of StarLink, over $1
billion.41
[65] More recently the U.S. Department of Agriculture and the Food
and Drug Administration levied a $250,000 fine against ProdiGene
Inc. for allowing genetically engineered corn to contaminate
approximately 500,000 bushels of soybeans. ProdiGene had
genetically engineered the corn to produce a protein that serves as
a pig vaccine. When the test crop failed, ProdiGene plowed under
the GM corn and planted food grade soybeans. When ProdiGene
harvested the soybeans federal inspectors discovered that some of
the genetically engineered corn had grown amidst the soybeans.
Under federal law, genetically engineered substances that have not
been approved for human consumption must be removed from the food
chain. The $250,000 fine helped to reimburse the federal government
for the cost of destroying the contaminated soybeans that were
fortunately all contained in a storage facility in Nebraska.
ProdiGenealso was required to post a $1 million bond in order to
pay for any similar problems in the future.42
[66] Another food safety issue involves the use of marker genes
that are resistant to certain antibiotics. The concern is that
these marker genes, which are transferred in almost all successful
genetic engineering projects, may stimulate the appearance of
bacteria resistant to common antibiotics.43 Proponents acknowledge that
concerns exist and are working on ways to either remove the marker
genes from the finished product, or to develop new and harmless
markers. Proponents also acknowledge that it may be necessary to
eliminate the first generation of antibiotic markers through
regulation.44
[67] Finally, critics also claim that genetic engineering may
lower the nutritional quality of some foods. For example, one
variety of GM soybeans has lowerlevels of isoflavones, which
researchers think may protect women from some forms of
cancer.45
Proponents of genetically modified foods, meanwhile, are busy
trumpeting the "second wave" of GM crops that actually increase the
nutritional value of various foods. For example, Swiss researchers
working in collaboration with the Rockefeller Foundation, have
produced "Golden Rice," a genetically engineered rice that is rich
in beta carotene and will help to combat Vitamin A deficiency in
the developing world.
[68] Biosafety and Environmental Harm. Moving from human health to
environmental safety, many critics of GM crops believe that this
use of agricultural biotechnology promotes an industrialized
approach to agriculture that has produced significant ecological
harm. Kelly summarizes these concerns well in the case. Crops that
have been genetically engineered to be resistant to certain types
of herbicide make it possible for farmers to continue to spray
these chemicals on their fields. In addition, GM crops allow
farmers to continue monocropping practices (planting huge tracts of
land in one crop variety), which actually exacerbate pest and
disease problems and diminish biodiversity. Just as widespread and
excessive use of herbicides led to resistant insects, critics argue
that insects eventually will become resistant to the second wave of
herbicides in GM crops. They believe that farmers need to be
turning to a more sustainable form of agriculture that utilizes
fewer chemicals and incorporates strip and inter-cropping
methodologies that diminish crop losses due to pests and
disease.46
[69] Proponents of GM crops are sympathetic to the monocropping
critique and agree that farmers need to adopt more sustainable
approaches to agriculture, but they argue that there is no reason
why GM crops cannot be incorporated in other planting schemes. In
addition, they suggest that biodiversity can be supported through
GM crops that are developed from varieties that thrive in
particular ecological niches. In contrast to the Green Revolution
where hybrids were taken from one part of the world and planted in
another, GM crops can be tailored to indigenous varieties that have
other desirable properties. On the herbicide front, proponents
argue that GM crops make it possible to use less toxic herbicides
than before, thus lowering the risks to consumers. They also point
to ecological benefits of the newest generation of herbicides which
degrade quickly when exposed to sunlight and do not build up in
groundwater.47
Critics, however, dispute these claims and point to evidence that
herbicides are toxic tonon-target species, harm soil fertility, and
also may have adverse effects on human health.48
[70] Just as critics are convinced that insects will develop
resistance to herbicides, so also are they certain that insects
will develop resistance to Bt crops. Terra makes this point in the
case. It is one thing to spray insecticides on crops at various
times during the growing season; it is another thing for insects to
be constantly exposed to Bt since it is expressed through every
cell in the plant, every hour of the day. While the GM crop will
have a devastating impact on most target insects, some will
eventually survive with a resistance to Bt. Proponents acknowledge
that this is a serious concern. As is the case with herbicides,
however, there are different variants of Bt that may continue to be
effective against partially resistant insects. In addition,
proponents note that the U.S. Environmental Protection Agency now
requires farmers planting Bt crops to plant refuges of non-Bt crops
so that exposed insects can mate with others that have not been
exposed, thus reducing the growth of Bt-resistant insects. These
refuges should equal 20 percent of the cropped area. Critics argue
that this percentage is too low and that regulations do not
sufficiently stipulate where these refuges should be in relation to
Bt crops.49
[71] Critics are also concerned about the impact Bt could have on
non-target species like helpful insects, birds, and bees. In May
1999, researchers at Cornell University published a study
suggesting that Bt pollen was leading to increased mortality among
monarch butterflies. This research ignited a firestorm of
controversy that prompted further studies by critics and proponents
of GM crops. One of the complicating factors is that an uncommon
variety of Bt corn was used in both the laboratory and field tests.
Produced by Novartis, the pollen from this type was 40-50 times
more potent than other Bt corn varieties, but it represented less
than 2 percent of the Bt corn crop in 2000. When other factors were
taken into account, proponents concluded that monarch butterflies
have a much greater chance of being harmed through the application
of conventional insecticides than they do through exposure to Bt
corn pollen. Critics, however, point to other studies that indicate
Bt can adversely harm beneficial insect predators and compromise
soil fertility.50
[72] Both critics and proponents are concerned about unintended
gene flow between GM crops and related plants in the wild. In many
cases it is possible for genes, including transplanted genes, to be
spread through the normal cross-pollination of plants. Whether
assisted by the wind or pollen-carrying insects,
cross-fertilization could result in the creation of
herbicide-resistant superweeds. Proponents of GM crops acknowledge
that this could happen, but they note that the weed would only be
resistant to one type of herbicide, not the many others that are
available to farmers. As a result, they argue that
herbicide-resistant superweeds could be controlled and eliminated
over a period of time. Critics are also concerned, however, that
undesired gene flow could "contaminate" the genetic integrity of
organic crops or indigenous varieties. This would be devastating to
organic farmers who trade on their guarantee to consumers that
organic produce has not been genetically engineered. Proponents
argue that this legitimate concern could be remedied with
relatively simple regulations or guidelines governing the location
of organic and genetically engineered crops. Similarly, they argue
that care must be taken to avoid the spread of genes into
unmodified varieties of the crop.51
[73] Agribusiness and Economic Justice. Shifting to another arena
of concern, many critics fear that GM crops will further expand the
gap between the rich and the poor in both developed and developing
countries. Clearly the first generation of GM crops has been
profit-driven rather than need-based. Crops that are
herbicide-tolerant and insect-resistant have been developed for and
marketed to relatively wealthy, large-scale, industrial
farmers.52 To
date, the benefits from these crops have largely accrued to these
large producers and not to small subsistence farmers or even
consumers. Proponents, however, argue that agricultural
biotechnologies are scale-neutral. Because the technology is in the
seed, expensive and time-consuming inputs are not required. As a
result, small farmers can experience the same benefits as large
farmers. In addition, proponents point to the emerging role public
sector institutions are playing in bringing the benefits of
agricultural biotechnology to developing countries. Partnerships
like those described above between KARI, CIMMYT, and various
governmental and non-governmental funding sources indicate that the
next generation of GM crops should have more direct benefits for
subsistence farmers and consumers in developing nations.
[74] While these partnerships in the public sector are developing,
there is no doubt that major biotech corporations like Monsanto
have grown more powerful as a result of the consolidation that has
taken place in the seed and chemical industries. For example, in
1998, Monsanto purchased DeKalb Genetics Corporation, the second
largest seed corn company in the United States. One year later,
Monsanto merged with Pharmacia & Upjohn, a major pharmaceutical
conglomerate. A similar merger took place between Dow Chemical
Corporation and Pioneer Seeds.53 The result of this
consolidation is the vertical integration of the seed and chemical
industries. Today, a company like Monsanto not only sells chemical
herbicides; it also sells seed for crops that have been genetically
engineered to be resistant to the herbicide. In addition, Monsanto
requires farmers to sign a contract that prohibits them from
cleaning and storing a portion of their GM crop to use as seed for
the following year. All of these factors lead critics to fear that
the only ones who will benefit from GM crops are rich corporations
and wealthy farmers who can afford to pay these fees. Critics in
developing nations are particularly concerned about the prohibition
against keeping a portion of this year's harvest as seed stock for
the next. They see this as a means of making farmers in developing
nations dependent upon expensive seed they need to purchase from
powerful agribusiness corporations.54
[75] Proponents acknowledge these concerns but claim that there is
nothing about them that is unique to GM crops. Every form of
technology has a price, and that cost will always be easier to bear
if one has a greater measure of wealth. They note, however, that
farmers throughout the United States have seen the financial wisdom
in planting GM crops and they see no reason why farmers in
developing nations would not reach the same conclusion if the
circumstances warrant. Proponents also note that subsistence
farmers in developing nations will increasingly have access to free
or inexpensive GM seed that has been produced through partnerships
in the public sector. They also tend to shrug off the prohibition
regarding seed storage because this practice has been largely
abandoned in developed nations that grow primarily hybrid crop
varieties. Harvested hybrid seed can be stored for later planting,
but it is not as productive as the original seed that was purchased
from a dealer. As farmers invest in mechanized agriculture, GM seed
becomes just another cost variable that has to be considered in the
business called agriculture. Critics, however, bemoan the loss of
family farms that has followed the mechanization of
agriculture.
[76] The seed storage issue reflects broader concerns about the
ownership of genetic material. For example, some developing nations
have accused major biotech corporations of committing genetic
"piracy." They claim that employees of these corporations have
collected genetic material in these countries without permission
and then have ferried them back to laboratories in the United
States and Europe where they have been studied, genetically
modified, and patented. In response to these and other concerns
related to intellectual property rights, an international
Convention on Biological Diversity was negotiated in 1992. The
convention legally guarantees that all nations, including
developing countries, have full legal control of "indigenous
germplasm."55
It also enables developing countries to seek remuneration for
commercial products derived from the nation's genetic resources.
Proponents of GM crops affirm the legal protections that the
convention affords developing nations and note that the development
of GM crops has flourished in the United States because of the
strong legal framework that protects intellectual property rights.
At the same time, proponents acknowledge that the payment of
royalties related to these rights or patents can drive up the cost
of GM crops and thus slow down the speed by which this technology
can come to the assistance of subsistence farmers.56
[77] Theological Concerns. In addition to the economic and legal
issues related to patenting genetic information and owning novel
forms of life, some are also raising theological questions about
genetic engineering. One set of concerns revolves around the
commodification of life. Critics suggest that it is not appropriate
for human beings to assert ownership over living organisms and the
processes of life that God has created. This concern has reached a
fever pitch in recent years during debates surrounding cloning
research and the therapeutic potential of human stem cells derived
from embryonic tissue. For many, the sanctity of human life is at
stake. Fears abound that parents will seek to "design" their
children through genetic modification, or that embryonic tissue
will be used as a "factory" to produce "spare parts."
[78] While this debate has raged primarily in the field of medical
research, some critics of GM crops offer similar arguments. In the
case, Karen gives voice to one of these concerns when she suggests
that we need to stop viewing nature as a machine that can be taken
apart and reassembled in other ways. Ecofeminist philosophers and
theologians argue that such a mechanistic mindset allows human
beings to objectify and, therefore, dominate nature in the same way
that women and slaves have been objectified and oppressed. Some
proponents of genetic engineering acknowledge this danger but argue
that the science and techniques of agricultural biotechnology can
increase respect for nature rather than diminish it. As human
beings learn more about the genetic foundations of life, it becomes
clearer how all forms of life are interconnected. For proponents of
GM crops, agricultural biotechnology is just a neutral means that
can be put to the service of either good or ill ends. Critics,
however, warn that those with power always use technologies to
protect their privilege and increase their control.
[79] Another set of theological concerns revolves around the
argument that genetic engineering is "unnatural" because it
transfers genetic material across species boundaries in ways that
do not occur in nature. Researchers are revealing, however, that
"lower" organisms like bacteria do not have the same genetic
stability as "higher" organisms that have evolved very slowly over
time. In bacteria, change often occurs by the spontaneous transfer
of genes from one bacterium to another of a different
species.57
Thus, specie boundaries may not be as fixed as has been previously
thought. Another example can be found in the Pacific Yew tree that
produces taxol, a chemical that is useful in fighting breast
cancer. Recently, researchers discovered that a fungus that often
grows on Yew trees also produces the chemical. Apparently the
fungus gained this ability through a natural transfer of genes
across species and even genera boundaries from the tree to the
fungus.58
[80] Appeals to "natural" foods also run into problems when closer
scrutiny is brought to bear on the history of modern crops. For
example, the vast majority of the grain that is harvested in the
world is the product of modern hybrids. These hybrid crops consist
of varieties that could not cross-breed without human assistance.
In fact, traditional plant breeders have used a variety of
high-tech means to develop these hybrids, including exposure to
low-level radiation and various chemicals in order to generate
desired mutations. After the desired traits are achieved, cloning
techniques have been utilized to develop the plant material and to
bring the new product to markets. None of this could have occurred
"naturally," if by that one means without human intervention, and
yet the products of this work are growing in virtually every farm
field. Given the long history of human intervention in nature via
agriculture, it is hard to draw a clear line between what
constitutes natural and unnatural food.59
[81] This leads to a third, related area of theological concern:
With what authority, and to what extent, should human beings
intervene in the world that God has made? It is clear from Genesis
2 that Adam, the first human creature, is given the task of tending
and keeping the Garden of Eden which God has created. In addition,
Adam is allowed to name the animals that God has made. Does that
mean that human beings should see their role primarily as passive
stewards or caretakers of God's creation? In Genesis 1, human
beings are created in the image of God (imago dei) and are told to
subdue the earth and have dominion over it. Does this mean that
human beings, like God, are also creators of life and have been
given the intelligence to use this gift wisely in the exercise of
human dominion?
[82] Answers to these two questions hinge on what it means to be
created in the image of God. Some argue that human beings are
substantially like God in the sense that we possess qualities we
ascribe to the divine, like the capacity for rational thought,
moral action, or creative activity. These distinctive features
confer a greater degree of sanctity to human life and set us apart
from other creatures-if not above them. Others argue that creation
in the image of God has less to do with being substantially
different from other forms of life, and more to do with the
relationality of God to creation. In contrast to substantialist
views which often set human beings above other creatures, the
relational conception of being created in the image of God seeks to
set humanity in a proper relationship of service and devotion to
other creatures and to God. Modeled after the patterns of
relationship exemplified in Christ, human relationships to nature
are to be characterized by sacrificial love and earthly
service.60
[83] It is not necessary to choose between one of these two
conceptions of what it means to be created in the image of God, but
it is important to see how they function in current debates
surrounding genetic engineering. Proponents of genetic engineering
draw on the substantialist conception when they describe the
technology as simply an outgrowth of the capacities for
intelligence and creativity with which God has endowed human
beings. At the same time, critics draw upon the same substantialist
tradition to protect the sanctity of human life from genetic
manipulation. More attention, however, needs to be given to the
relevance of the relational tradition to debates surrounding
genetic engineering. Is it possible that human beings could wield
this tool not as a means to garner wealth or wield power over
others, but rather as a means to improve the lives of others? Is it
possible to use genetic engineering to feed the hungry, heal the
sick, and otherwise to redeem a broken world? Certainly many
proponents of genetic engineering in the non-profit sector believe
this very strongly.
[84] Finally, another theological issue related to genetic
engineering has to do with the ignorance of human beings as well as
the power of sin and evil. Many critics of genetic engineering
believe that all sorts of mischief and harm could result from the
misuse of this new and powerful technology. In the medical arena,
some forecast an inevitable slide down a slippery slope into a
moral morass where human dignity is assaulted on all sides. In
agriculture, many fear that human ignorance could produce
catastrophic ecological problems as human beings design and release
into the "wild" novel organisms that have no evolutionary
history.
[85] There is no doubt that human technological inventions have
been used intentionally to perpetrate great evil in the world,
particularly in the last century. It is also abundantly clear that
human foresight has not anticipated enormous problems associated,
for example, with the introduction of exotic species in foreign
lands or the disposal of high-level nuclear waste. The question,
however, is whether human beings can learn from these mistakes and
organize their societies so that these dangers are lessened and
problems are averted. Certainly most democratic societies have been
able to regulate various technologies so that harm has been
minimized and good has been produced. Is there reason to believe
that the same cannot be done with regard to genetic
engineering?
Specific Ethical Questions
[86] Beyond this review of general concerns about GM crops and
genetic engineering are specific ethical questions raised by the
case. These questions are organized around the four ecojustice
norms that have been discussed in this volume.
[87] Sufficiency. At the heart of this case is the growing problem
of hunger in sub-Saharan Africa. It is clear that many people in
this region simply do not have enough to eat. In the case, however,
Kelly suggests that the world produces enough food to provide
everyone with an adequate diet. Is she right?
[88] As noted earlier, studies by the International Food Policy
and Research Institute indicate that the world does produce enough
food to provide everyone in the world with a modest diet. Moreover,
the Institute projects that global food production should keep pace
with population growth between 2000-2020. So, technically, Kelly is
right. Currently, there is enough food for everyone-so long as
people would be satisfied by a simple vegetarian diet with very
little meat consumption. The reality, however, is that meat
consumption is on the rise around the world, particularly among
people in developing nations that have subsisted primarily on
vegetarian diets that often lack protein.61 Thus, while it appears that
a balanced vegetarian diet for all might be possible, and even
desirable from a health standpoint, it is not a very realistic
possibility. In addition, Adam raises a series of persuasive
arguments that further challenge Kelly's claim that food just needs
to be distributed better. At a time when donor nations only supply
1.1 percent of the food in sub-Saharan Africa, it is very
unrealistic to think that existing distribution systems could be
"ramped up" to provide the region with the food it needs.
[89] Does that mean, however, that GM crops represent a "magic
bullet" when it comes to increasing food supplies in the region?
Will GM crops end hunger in sub-Saharan Africa? It is important to
note that neither Adam nor Josephine make this claim in the case;
Kelly does. Instead, Adam argues that GM crops should be part of a
"mix" of agricultural strategies that will be employed to increase
food production and reduce hunger in the region. When stem-borers
and Striga decimate up to 60 percent of the annual maize harvest,
herbicide- and insect-resistant varieties could significantly
increase the food supply. One of the problems not mentioned in the
case, however, is that maize production is also very taxing on
soils. This could be remedied, to some extent, by rotating maize
with nitrogen-fixing, leguminous crops.
[90] In the end, the primary drain on soil fertility is the heavy
pressure which population growth puts on agricultural production.
Until population growth declines to levels similar to those in Asia
or Latin America, food insecurity will persist in sub-Saharan
Africa. One of the keys to achieving this goal is reducing the rate
of infant and child mortality. When so many children die in
childhood due to poor diets, parents continue to have several
children with the hope that some will survive to care for them in
their old age. When more children survive childhood, fertility
rates decline. Thus, one of the keys to reducing population growth
is increasing food security for children. Other keys include
reducing maternal mortality, increasing access to a full range of
reproductive health services including modern means of family
planning, increasing educational and literacy levels, and removing
various cultural and legal barriers that constrain the choices of
women and girl children.
[91] A third question raised by the sufficiency norm has to do with
the dangers GM crops might pose to human health. Does Kenya have
adequate policies and institutions in place to test GM crops and
protect the health of its citizens? The short answer to this
question is no. While the nation does have a rather substantial set
of biosafety regulations, government officials have not developed
similar public health regulations. One of the reasons for this is
because Kenya is still in the research stage and does not yet have
any GM crops growing in its fields. Thus, regulations have not yet
been developed because there are no GM food products available for
consumers. Nevertheless, even when products like GM sweet potatoes
or maize do become available, it is likely that Kenya may still not
develop highly restrictive public health regulations. This is
because the Ministry of Health faces what it perceives to be much
more immediate threats to public health from large-scale outbreaks
of malaria, polio, and HIV-AIDS. The potential allergenicity of GM
crops pales in comparison to the real devastation wrought by these
diseases. In addition, it is likely that officials will continue to
focus on more mundane problems that contaminate food products like
inadequate refrigeration or the unsanitary storage and preparation
of food. 62In
the end, people who are hungry tend to assess food safety risks
differently from those who are well fed. Hassan Adamu, Minister of
Agriculture in Nigeria, summarizes this position well in the
following excerpt from an op-ed piece published in The Washington
We do not want to be denied this technology [agricultural
biotechnology] because of a misguided notion that we do not
understand the dangers and future consequences. We
understand…. that they
have the right to impose their values on us. The harsh reality is
that, without the help of agricultural biotechnology, many will not
live.63
[92] Despite Adamu's passionate plea, other leaders in Africa are
not as supportive of genetically modified crops. During the food
emergency that brought over 30 million people in sub-Saharan Africa
to the brink of starvation in 2002, President Levy Mwanawasa of
Zambia rejected a shipment of genetically modified food aid
furnished by the U.N. World Food Programme. Drawing on a report
produced by a team of Zambian scientists, and appealing to the
precautionaryprinciple, Mwanawasa said, "We will rather starve than
give something toxic [to our citizens.]"64 In addition to concerns
about the impact that GM food may have on human health, Mwanawasa
also expressed concern that the GM maize might contaminate Zambia's
local maize production in the future. Given Josephine's ardent
support for agricultural biotechnology in the case, it is important
to note that not all Africans share her confidence about the
benefits of GM crops.
[93] Sustainability. If, however, Kenyans downplay the dangers
posed to human beings by GM crops, how likely is it that the nation
will develop policies and regulatory bodies to address biosafety
and protect the environment?
[94] In fact, Kenya does have serious biosafety policies on the
books. Prompted by the work that Florence Wambugu did on GM sweet
potatoes in collaboration with Monsanto in the early 1990s, these
policies were developed with substantial financial assistance
furnished by the government of the Netherlands, the World Bank, the
U.S. Agency for International Development, and the United Nations
Environment Programme. The Regulations and Guidelines for Biosafety
in Biotechnology in Kenya establish laboratory standards and other
containment safeguards for the handling of genetically modified
organisms. In addition, the regulatory document applies more
rigorous biosafety standards to GM crops than it does to crops that
have not been genetically modified. In general, Kenya's extensive
regulations reflect a very cautious approach to GM
products.65
[95] The problem, however, is that although Kenya has a strong
biosafety policy on paper, the administrative means to implement
and enforce the policy are weak. The National Biosafety Committee
(NBC) was established in 1996 to govern the importation, testing,
and commercial release of genetically modified organisms, but
limited resources have hampered its effectiveness. In 2001, the NBC
employed only one full-time staff person and had to borrow funds to
do its work from Kenya's National Council for Science and
Technology.66
One of the consequences of this inadequate regulatory capacity has
been a delay in conducting field tests on Wambugu's GM sweet
potatoes. Clearly much progress needs to be achieved on this front
before such tests take place on varieties of maize that have been
genetically modified to be insect- or herbicide-resistant. It is
important to note, however, that KARI and CIMMYT are both well
aware of the biosafety dangers related to the development of these
GM crops and are engaged in studies todetermine, for example, the
appropriate size and placement of refuges for Bt varieties of
maize.67
Because much of KARI's work is supported by grants from foreign
donors, necessary biosafety research will be conducted and made
available to the NBC. The problem is that the NBC currently lacks
the resources to make timely decisions after it receives the
data.
[96] Another concern in the case has to do with the ecological
consequences of industrial agriculture. Karen disagrees with Tom's
glowing account of the Green Revolution. While it produced food to
feed more than two billion people during the latter half of the
20th century, it did so only by exacting a heavy ecological
toll.68 It also
had a major impact on the distribution of wealth and income in
developing nations. As a result, Karen is concerned about Tom's
view that GM crops could have a tremendous impact on increasing
food supply in sub-Saharan Africa. Karen fears that GM crops in
Kenya may open the floodgates to industrial agriculture and create
more problems than it solves.
[97] The question, however, is whether this is likely to happen.
With the significant poverty and the small landholdings of the over
70 percent of Kenyans who are subsistence farmers, it is hard to
see how the ecologically damaging practices of the Green Revolution
could have a significant impact in the near future. The cost of
fertilizers, herbicides, or irrigation put these practices out of
reach for most farmers in Kenya. If anything, most of the
ecological degradation of Kenya's agricultural land is due to
intensive cropping and stressed soils. Yield increases from GM
crops might relieve some of this pressure, although much relief is
not likely since food production needs to increase in order to meet
demand.
[98] This raises a third question related to the sustainability
norm. Can organic farming methods achieve the same results as GM
crops? Certainly Kelly believes that this is the case, and there is
some research to support her view. On the Striga front, some
farmers in East Africa have suppressed the weed by planting
leguminous tree crops during the dry season from February to April.
Since Striga is most voracious in fields that have been
consistently planted in maize and thus have depleted soil, the
nitrogen-fixing trees help to replenish the soil in their brief
three months of life before they are pulled up prior to maize
planting. Farmers report reducing Striga infestations by over 90
percent with this method of weed control. A bonus is that
theuprooted, young trees provide a nutritious feed for those
farmers who also have some livestock.69
[99] A similar organic strategy has been employed in Kenya to
combat stem-borers. In this "push-pull" approach, silver leaf
desmodium and molasses grass are grown amidst the maize. These
plants have properties that repel stem-borers toward the edges of
the field where other plants like Napier grass and Sudan grass
attract the bugs and then trap their larvae in sticky substances
produced by the plants. When this method is employed, farmers have
been able to reduce losses to stemborers from 40 percent to less
than 5 percent. In addition, silver leaf desmodium helps to combat
Striga infestation, thus further raising yields.70
[100] Results like these indicate that agroecological methods
associated with organic farming may offer a less expensive and more
sustainable approach to insect and pest control than those achieved
through the expensive development of GM crops and the purchase of
their seed. Agroecology utilizes ecological principles to design
and manage sustainable and resource-conserving agricultural
systems. It draws upon indigenous knowledge and resources to
develop farming strategies that rely on biodiversity and the
synergy among crops, animals, and soils.71 More research in this area
is definitely justified.
[101] It is not clear, however, that agroecological farming
techniques and GM crops need to be viewed as opposing or exclusive
alternatives. Some researchers argue that these organic techniques
are not as effective in different ecological niches in East Africa.
Nor, in some areas, do farmers feel they have the luxury to fallow
their fields during the dry season.72 In these contexts, GM crops
might be able to raise yields where they are desperately needed. It
is also not likely that the seeds for these crops will be very
expensive since they are being produced through research in the
public and non-profit sectors. Still, it is certainly the case that
more serious ecological problems could result from the use of GM
crops in Kenya, and even though donors are currently footing the
bill for most of the research, agricultural biotechnology requires
a more substantial financial investment than agroecological
approaches.
[102] Participation. The source of funding for GM crop research in
Kenya raises an important question related to the participation
norm. Are biotechnology and GM crops being forced on the people of
Kenya?
[103] Given the history of colonialism in Africa, this question is
not unreasonable, but in this case it would not appear warranted.
Kenya's Agricultural Research Institute (KARI) began experimenting
with tissue culture and micropropagation in the 1980s. A few years
later, one of KARI's researchers, Florence Wambugu, was awarded a
three-year post-doctoral fellowship by the U.S. Agency for
International Development to study how sweet potatoes could be
genetically modified to be resistant to feathery mottle virus. Even
though this research was conducted in Monsanto's laboratory
facilities, and the company provided substantial assistance to the
project long after Wambugu's fellowship ended, it is clearly the
case that this groundbreaking work in GM crop research was
initiated by a Kenyan to benefit the people of her
country.73 In
addition, the funding for GM crop research in Kenya has come almost
entirely through public sector institutions rather than private
corporate sources. Even the Novartis funds that support the
insect-resistant maize project are being provided from a foundation
for sustainable development that is legally and financially
separate from the Novartis Corporation. Thus, it does not appear
that transnational biotechnology corporations are manipulating
Kenya, but it is true that the country's openness to biotechnology
and GM crops may open doors to the sale of privately-developed GM
products in the future.
[104] Josephine, however, might turn the colonialism argument
around and apply it to Greenpeace's campaign to ban GM crops.
Specifically, Greenpeace International urges people around the
world to "write to your local and national politicians demanding
that your government ban the growing of genetically engineered
crops in your country."74 Though Josephine does not
pose the question, is this well-intentioned effort to protect the
environment and the health of human beings a form of paternalism or
neocolonialism? Does the Greenpeace campaign exert undue pressure
on the people of Kenya and perhaps provoke a lack of confidence in
Kenyan authorities, or does it merely urge Kenyans to use the
democratic powers at their disposal to express their concerns? It
is not clear how these questions should be answered, but the
participation norm requires reflection about them.
[105] The concern about paternalism also arises with regard to a
set of questions about appropriate technology. Are GM crops an
"appropriate" agricultural technology for the people of Kenya?
Genetic engineering and other forms of agricultural biotechnology
are very sophisticated and expensive. Is such a "high-tech"
approach to agriculture "appropriate" given the status of a
developing nation like Kenya? Is it realistic to expect that
undereducated and impoverished subsistence farmers will have the
capacities and the resources to properly manage GM crops, for
example through the appropriate use of refuges?
[106] In the case, Josephine responds aggressively to concerns
like these when she overhears Terra's conversation with Tom. She
asserts that Kenya will do what it takes to educate farmers about
the proper use of GM crops, and it is true that KARI is designing
farmer-training strategies as a part of the insect-resistant maize
project.75
Compared to other countries in sub-Saharan Africa, Kenya has very
high rates of adult literacy. In 2000, 89 percent of men and 76
percent of women were literate. At the same time, only 26 percent
of boys and 22 percent of girls are enrolled in secondary
education.76
Thus, while literacy is high, the level of education is low. The
hunger and poverty among many Kenyans, however, may be the most
significant impediment to the responsible use of GM crops. In a
situation where hunger is on the rise, how likely is it that
subsistence farmers will plant 20 percent of their fields in non-Bt
maize if they see that the Bt varieties are producing substantially
higher yields?
[107] This is a fair question. The norm of participation supports
people making decisions that affect their lives, but in this case
the immediate threat of hunger and malnutrition may limit the range
of their choices. At the same time, GM crops have the potential to
significantly reduce the amount of time that women and children
spend weeding, picking bugs off of plants, and scaring birds away.
Organic farming methods would require even larger investments of
time. This is time children could use to attend more school or that
women could use to increase their literacy or to engage in other
activities that might increase family income and confer a slightly
greater degree of security and independence. Aspects of the
participation norm cut both ways.
[108] Solidarity. Among other things, the ecojustice norm of
solidarity is concerned about the equitable distribution of the
burdens and benefits associated with GM crops. If problems emerge
in Kenya, who will bear the costs? If GM crops are finally approved
for planting, who will receive most of the benefits?
[109] Thus far, critics argue that the benefits of GM crops in
developed nations have accrued only to biotech corporations through
higher sales and to large-scale farmers through lower production
costs. Moreover, critics claim that the dangers GM crops pose to
human health and biosafety are dumped on consumers who do not fully
understand the risks associated with GM crops and the food products
that are derived from them. It is not clear that the same could be
said for the production of GM crops in Kenya where these crops are
being developed through partnerships in the non-profit and public
sectors. Researchers expect to make these products available at
little cost to farmers and few corporations will earn much money
off the sale of these seeds. Thus, the benefits from GM crops
should accrue to a larger percentage of people in Kenya because 70
percent of the population is engaged in subsistence agriculture.
Like developed nations, however, food safety problems could affect
all consumers and a case could be made that this would be more
severe in a nation like Kenya where it would be very difficult to
adequately label GM crop products that often move directly from the
field to the dinner table.
[110] Another aspect of solidarity involves supporting others in
their struggles. Josephine does not explicitly appeal to this norm
in the case, but some members of the Hunger Concerns group are
probably wondering whether they should just support Josephine's
proposal as a way to show respect to her and to the
self-determination of the Kenyan people. There is much to commend
this stance and, ultimately, it might be ethically preferable. One
of the dangers, however, is that Josephine's colleagues may squelch
their moral qualms and simply "pass the buck" ethically to the
Kenyans. Karen seems close to making this decision, despite her
serious social, ecological, and theological concerns about GM
crops. Friendship requires support and respect, but it also thrives
on honesty.
Conclusion
[111] Tom and Karen face a difficult choice, as do the other
members of the Hunger Concerns group. Next week they will have to
decide if the group should join the Greenpeace campaign to ban GM
crops or whether it wants to submit an article for the campus
newspaper supporting the responsible use of GM crops to bolster
food security in Kenya. While convenient, skipping the meeting
would just dodge the ethical issues at stake. As students consider
these alternatives and others, the goods associated with solidarity
need to be put into dialogue with the harms to ecological
sustainability and human health that could result from the
development of GM crops in Kenya. Similarly, these potential harms
also need to be weighed against the real harms that are the result
of an insufficient food supply. The problem of hunger in
sub-Saharan Africa is only getting worse, not better.
© Orbis Books
Printed by permission.
© December 2003
Journal of Lutheran Ethics (JLE)
Volume 3, Issue 12
1 Florence Wambugu, Modifying Africa: How biotechnology
can benefit the poor and hungry; a case study from Kenya (Nairobi,
Kenya, 2001), pp. 22-44.
2 J. DeVries and G. Toenniessen, Securing the Harvest:
Biotechnology, Breeding and Seed Systems for African Crops (New
York: CABI Publishing, 2001), p. 103.
3 Ibid., p. 101.
4 Susan Mabonga, "Centre finds new way to curb weed,"
Biosafety News, (Nairobi), No. 28, January 2002, pp. 1, 3.
5 Klaus M. Leisinger, et al., Six Billion and Counting:
Population and Food Security in the 21st Century (Washington, DC:
International Food Policy Research Institute, 2002), pp. 4-6. I am
indebted to Todd Benson, an old friend and staff member at the
International Food Policy Research Institute, for better
understanding issues related to food security in sub-Saharan
Africa.
6 Ibid, p. 57.
7 Per Pinstrup-Andersen and Ebbe Schiøler, Seeds of
Contention: World Hunger and the Global Controversy over GM Crops
(Baltimore: The Johns Hopkins University Press, 2001), p. 61.
8 Klaus M. Leisinger, et al., Six Billion and Counting, p.
8.
9 Ibid, p. x. Globally, the World Bank estimates that 1.3
billion people are trying to survive on $1 a day. Another two
billion people are trying to get by on only $2 a day. Half of the
world's population is trying to live on $2 a day or less.
10 J. DeVries and G. Toenniessen, Securing the Harvest:
Biotechnology, Breeding and Seed Systems for African Crops (New
York: CABI Publishing, 2001), pp. 30-31.
11 The World Bank Group, "Kenya at a Glance," accessed
on-line April 9, 2002:.
12 J. DeVries and G. Toenniessen, Securing the Harvest, p.
29. See also, Per Pinstrup-Andersen and Ebbe Schiøler, Seeds
of Contention, pp. 59-67. I am indebted to Gary Toenniessen at the
Rockefeller Foundation for his wise counsel as I began to research
ethical implications of genetically modified crops in sub-Saharan
Africa.
13 Population Reference Bureau, 2001 World Population Data
Sheet, book edition (Washington, DC: Population Reference Bureau,
2001), pp. 3-4. I am indebted to Dick Hoehn at Bread for the World
Institute for helping me better understand the root causes of
hunger in sub-Saharan Africa.
14 Per Pinstrup-Andersen and Ebbe Schiøler, Seeds
of Contention, pp. 106-107.
15 Ibid.
16 Population Reference Bureau, 2001 World Population Data
Sheet, p. 2.
17 J. DeVries and G. Toenniessen, Securing the Harvest, p.
33.
18 Ibid, p. 7, 21.
19 Food and Agriculture Organization, Statement on
Biotechnology, accessed on-line April 9, 2002:.
20 United Nations Environment Programme, Secretariat of
the Convention on Biological Diversity, accessed on-line April 9,
2002:.
21 Per Pinstrup-Andersen and Ebbe Schiøler, Seeds
of Contention, p. 33.
22 J. DeVries and G. Toenniessen, Securing the Harvest,
pp. 59-66.
23 Ibid, p. 67.
24 International Maize and Wheat and Improvement Center
and The Kenya Agricultural Research Institute, Annual Report 2000:
Insect Resistant Maize for Africa (IRMA) Project,, IRMA Project
Document, No. 4, September 2001, pp. 1-12.
25 J. DeVries and G. Toenniessen, Securing the Harvest, p.
65.
26 Nicholas Wade, "Experts Say They Have Key to Rice
Genes," The New York Times, accessed on-line April 5, 2002: (registration
required).
27 Daniel Charles, Lords of the Harvest: Biotech, Big
Money, and the Future of Food (Cambridge, MA: Perseus Publishing,
2001), p. 10
28 Ibid, p. 139.
29 Per Pinstrup-Andersen and Marc J. Cohen, "Rich and Poor
Country Perspectives on Biotechnology," in The Future of Food:
Biotechnology Markets and Policies in an International Setting, P.
Pardey, ed., (Washington, DC: International Food Policy Research
Institute, 2001), pp. 34-35. See also, Bill Lambrecht, Dinner at
the New Gene Café: How Genetic Engineering is Changing What
We Eat, How We Live, and the Global Politics of Food (New York: St,
Martin's Press, 2001), p. 7.
30 Philip Brasher, "American Farmers Planting More Biotech
Crops This Year Despite International Resistance," accessed on line
March 29, 2002:.
31 Robert L. Paarlberg, The Politics of Precaution:
Genetically Modified Crops in Developing Countries (Baltimore: The
Johns Hopkins University Press, 2001), p. 3.
32 Per Pinstrup-Andersen and Marc J. Cohen, "Rich and Poor
Country Perspectives on Biotechnology," in The Future of Food, p.
34.
33 Robert L. Paarlberg, The Politics of Precaution, p.
3.
34 J. DeVries and G. Toenniessen, Securing the Harvest, p.
68. I am indebted to Jill Montgomery, director of Technology
Cooperation at Monsanto, for better understanding how Monsanto has
assisted biotechnology research and subsistence agriculture in
Kenya.
35 International Maize and Wheat and Improvement Center
and The Kenya Agricultural Research Institute, Annual Report 2000:
Insect Resistant Maize for Africa (IRMA) Project, pp. 1-12.
36 Susan Mabonga, "Centre finds new way to curb weed,"
Biosafety News, (Nairobi), No. 28, January 2002, pgs. 1, 3.
37 Debbie Weiss, "New Witchweed-fighting method, developed
by CIMMYT and Weismann Institute, to become public in July," Today
in AgBioView, July 10, 2002, accessed on line July 12, 2002:.
38 Miguel A. Altieri, Genetic Engineering in Agriculture:
The Myths, Environmental Risks, and Alternatives (Oakland, CA: Food
First/Institute for Food and Development Policy, 2001), pp. 16-17.
Concerns about the dangers GM crops could pose to human and
ecological health lead many critics to invoke the "precautionary
principle" in their arguments. For more information about this
important concept, see sections of the case and commentary for the
preceding case, "Chlorine Sunrise?"
39 Daniel Charles, Lords of the Harvest, pp. 303-304.
40 Bill Lambrecht, Dinner at the New Gene Café, pp.
46-47.
41 Per Pinstrup-Andersen and Ebbe Schiøler, Seeds
of Contention, p. 90.
42 Environmental News Service, "ProdiGene Fined for
Biotechnology Blunders," accessed on-line December 10, 2002:.
43 Miguel A. Altieri, Genetic Engineering in Agriculture,
p. 19.
44 Per Pinstrup-Andersen and Ebbe Schiøler, Seeds
of Contention, p. 140-141.
45 Miguel A. Altieri, Genetic Engineering in Agriculture,
p. 19.
46 Ibid, p. 20.
47 Per Pinstrup-Andersen and Ebbe Schiøler, Seeds
of Contention, p. 44-45.
48 Miguel A. Altieri, i>Genetic Engineering in
Agriculture, pp. 22-23.
49 See Per Pinstrup-Andersen and Ebbe Schiøler,
Seeds of Contention, p. 45-46 and Miguel A. Altieri, Genetic
Engineering in Agriculture, pp. 26-29.
50 See Per Pinstrup-Andersen and Ebbe Schiøler,
Seeds of Contention, p. 47-49 and Miguel A. Altieri, Genetic
Engineering in Agriculture, pp. 29-31. See also Daniel Charles,
Lords of the Harvest, pp. 247-248; Bill Lambrecht, Dinner at the
New Gene Café, pp. 78-82; and Alan McHughen, Pandora's
Picnic Basket: The Potential and Hazards of Genetically Modified
Foods (New York: Oxford University Press, 2000), p. 190.
51 See Per Pinstrup-Andersen and Ebbe Schiøler,
Seeds of Contention, p. 49-50 and Miguel A. Altieri, Genetic
Engineering in Agriculture, pp. 23-25. Controversy erupted in 2002
after the prestigious scientific journal, Nature, published a study
by scientists claiming that gene flow had occurred between GM maize
and indigenous varieties of maize in Mexico. Since Mexico is the
birthplace of maize, this study ignited alarm and produced a
backlash against GM crops. In the spring of 2002, however, Nature
announced that it should not have published the study because the
study's methodology was flawed. See Carol Kaesuk Yoon, "Journal
Raises Doubts on Biotech Study," The New York Times, April 5, 2002,
accessed on-line April 5, 2002: (registration
required).05CORN.html
52 Miguel A. Altieri, Genetic Engineering in Agriculture,
p. 4.
53 Bill Lambrecht, Dinner at the New Gene Café, pp.
113-123.
54 Opposition reached a fevered pitch when the Delta and
Pine Land Company announced that they had developed a "technology
protection system" that would render seeds sterile. The company
pointed out that this would end concerns about the creation of
superweeds through undesired gene flow, but opponents dubbed the
technology as "the terminator" and viewed it as a diabolical means
to make farmers entirely dependent on seed companies for their most
valuable input, seed. When Monsanto considered purchasing Delta and
Pine Land in 1999, Monsanto bowed to public pressure and declared
that it would not market the new seed technology if it acquired the
company. In the end, it did not. See Bill Lambrecht, Dinner at the
New Gene Café, pp. 113-123.
55 Robert L. Paarlberg, The Politics of Precaution, pp.
16-17.
56 Per Pinstrup-Andersen and Ebbe Schiøler, Seeds
of Contention, pp. 123-126.
57 Ibid, pp. 33-34.
58 Richard Manning, Food's Frontier: The Next Green
Revolution (New York: North Point Press, 2000), p. 195.
59 Ibid, 194. See also, Per Pinstrup-Andersen and Ebbe
Schiøler, Seeds of Contention, pp. 80-81.
60 See Douglas John Hall, Imaging God: Dominion as
Stewardship (Grand Rapids: Eerdmans Publishing Company, 1986), pp.
89-116; and The Steward: A Biblical Symbol Come of Age (Grand
Rapids: Eerdmans Publishing Company, 1990).
61 Per Pinstrup-Andersen and Ebbe Schiøler, Seeds
of Contention, pp. 73-75.
62 Robert L. Paarlberg, The Politics of Precaution, pp.
58-59.
63 Hassan Adamu, "We'll feed our people as we see fit,"
The Washington Post, (September 11, 2000), p. A23; cited by Per
Pinstrup-Andersen and Marc J. Cohen, "Rich and Poor Country
Perspectives on Biotechnology," in The Future of Food, p. 20.
64 James Lamont, "U.N. Withdraws Maize Food Aid From
Zambia," Financial Times (Johannesburg), December 10, 2002.
Reprinted in Today in AgBioView, accessed on-line December 11,
2002:.
65 Robert L. Paarlberg, The Politics of Precaution, pp.
50-54.
66 Ibid.
67 International Maize and Wheat and Improvement Center
and The Kenya Agricultural Research Institute, Annual Report 2000:
Insect Resistant Maize for Africa (IRMA) Project, pp. 15-16.
68 For a brief summary, see a section devoted to the rise
of dysfunctional farming in Brian Halweil, "Farming in the Public
Interest," in State of the World 2002 (New York: W.W. Norton &
Co., 2002), pp. 53-57.
69 Brian Halweil, "Biotech, African Corn, and the Vampire
Weed," WorldWatch, (September/October 2001), Vol. 14, No. 5, pp.
28-29.
70 Ibid., p. 29.
71 Miguel A. Altieri, Genetic Engineering in Agriculture,
pp. 35-47.
72 These observations are based on remarks made by
researchers from sub-Saharan Africa, Europe, and the United States
in response to a presentation by Brian Halweil at a conference I
attended in Washington, DC on March 6, 2002. The conference was
sponsored by Bread for the World Institute and was titled,
Agricultural Biotechnology: Can it Help Reduce Hunger in
Africa?
73 Florence Wambugu, Modifying Africa: How biotechnology
can benefit the poor and hungry; a case study from Kenya, (Nairobi,
Kenya, 2001), pp. 16-17; 45-54.
74 Greenpeace International.. Accessed on-line:
April 19, 2002.
75 International Maize and Wheat and Improvement Center
and The Kenya Agricultural Research Institute, Annual Report 2000:
Insect Resistant Maize for Africa (IRMA) Project, pp. 23-33.
76 Population Reference Bureau, "Country Fact Sheet:
Kenya," accessed on-line April 19, 2002: | http://www.elca.org/What-We-Believe/Social-Issues/Journal-of-Lutheran-Ethics/Issues/December-2003/Harvesting-Controversy-Genetic-Engineering-and-Food-Security-in-SubSaharan-Africa.aspx | CC-MAIN-2013-20 | en | refinedweb |
16 July 2009 15:44 [Source: ICIS news]
(Recasts, adding detail in headline and lead)
LONDON (ICIS news)--INEOS’ 320,000 tonne/year polyethylene (PE) plant at ?xml:namespace>
“The plant went down some time over the weekend [11/12 July], as it was well into its high density PE (HDPE) campaign,” said the source. “We expect it to restart on Monday [20 July].”
The plant is a linear low density PE (LLDPE)/HDPE swing unit.
INEOS declared force majeure on one of its HDPE injection grades.
“This is just one particular grade of HDPE injection of many that we produce,” said the source.
PE availability has tightened considerably in
Prices in July increased by €100/tonne ($141/tonne), leaving gross low density PE (LDPE) levels at €1,000/tonne FD (free delivered) NWE (northwest
PE producers in
($1 = €0.71). | http://www.icis.com/Articles/2009/07/16/9233138/ineos-pe-down-at-grangemouth-uk-after-unexpected-outage.html | CC-MAIN-2013-20 | en | refinedweb |
20 October 2009 17:26 [Source: ICIS news]
By Nigel Davis
?xml:namespace>
CEO Ellen Kullman on Tuesday described “a developing recovery that is shaped different by market and geography”.
Not surprisingly, DuPont’s
Central
But the picture is shifting slightly and the outlook is certainly more positive than three months ago.
There are signs of restocking in the automobile-related product chains that are so important to the company. Kullman said in a conference call that the third quarter seemed to represent true demand for DuPont’s industrial polymers and performance elastomers. The sign must be welcome.
All in all, the chemicals giant is seeing its markets stabilise and some early indications of an upturn.
Titanium dioxide, an early cycle indicator, is doing well and DuPont is almost sold out. Other important products for the company, however, are still feeling the squeeze. It could be months before they might be expected to recover.
The company continues to do well in driving costs down and is on target to save $1bn (€670m) in fixed costs this year.
Kullman said $750m of these costs savings can be retained over the longer term. There is a clear commitment to deliver on promises.
What is also important is that DuPont’s product development engine continues to fire on all cylinders.
The company launched 323 new products in the third quarter, bringing the year-to-date total to 1,107. That’s a 50% higher launch rate than last year.
The new products help drive growth in the continued weak market environment and will help lay the foundations for faster growth.
On the demand side, then, there is not a great deal to get excited about. But given the unclear global economic picture, that is hardly surprising.
DuPont is confident enough, nevertheless, to forecast that its earnings per share this year will be at the top of an earlier stated range. A solid foundation has been laid in 2009. The company is holding on to product prices.
It is the degree to which savings have been made and the research engine delivered that have contributed immensely to confidence this year. Yet the market has begun to more forcefully respond.
Erratic as the recovery may be, it appears as if there are more widespread opportunities to grasp.
($1 = €0.67). | http://www.icis.com/Articles/2009/10/20/9256404/insight-stronger-signs-of-market-growth.html | CC-MAIN-2013-20 | en | refinedweb |
LLVM API Documentation
#include <LibCallSemantics.h>
LibCallInfo - Abstract interface to query about library call information. Instances of this class return known information about some set of libcalls.
Definition at line 127 of file LibCallSemantics.h.
Definition at line 133 of file LibCallSemantics.h.
Definition at line 27 of file LibCallSemantics.cpp.
getFunctionInfo - Return the LibCallFunctionInfo object corresponding to the specified function if we have it. If not, return null.
If this is the first time we are querying for this info, lazily construct the StringMap to index it.
Definition at line 44 of file LibCallSemantics.cpp.
References getFunctionInfoArray(), getMap(), llvm::Value::getName(), and llvm::StringMap< ValueTy, AllocatorTy >::lookup().
Referenced by llvm::LibCallAliasAnalysis::getModRefInfo().
getFunctionInfoArray - Return an array of descriptors that describe the set of libcalls represented by this LibCallInfo object. This array is terminated by an entry with a NULL name.
Referenced by getFunctionInfo().
getLocationInfo - Return information about the specified LocationID.
Definition at line 31 of file LibCallSemantics.cpp.
getLocationInfo - Return descriptors for the locations referenced by this set of libcalls.
Definition at line 155 of file LibCallSemantics.h. | http://llvm.org/docs/doxygen/html/classllvm_1_1LibCallInfo.html | CC-MAIN-2013-20 | en | refinedweb |
Spring Integration's JPA (Java Persistence API) module provides components for performing various database operations using JPA. The following components are provided:
Updating Outbound Gateway
Retrieving Outbound Gateway
These components can be used to perform select, create, update and delete operations on the targeted databases by sending/receiving messages to them.
The JPA Inbound Channel Adapter lets you poll and retrieve (select) data from the database using JPA whereas the JPA Outbound Channel Adapter lets you create, update and delete entities.
Outbound Gateways for JPA can be used to persist entities to the database, yet allowing you to continue with the flow and execute further components downstream. Similarly, you can use an Outbound Gateway to retrieve entities from the database.
For example, you may use the Outbound Gateway, which receives a Message with a user Id as payload on its request channel, to query the database and retrieve the User entity and pass it downstream for further processing.
Recognizing these semantic differences, Spring Integration provides 2 separate JPA Outbound Gateways:
Retrieving Outbound Gateway
Updating Outbound Gateway
Functionality
All JPA components perform their respective JPA operations by using either one of the following:
Entity classes
Java Persistence Query Language (JPQL) for update, select and delete (inserts are not supported by JPQL)
Native Query
Named Query
In the following sections we will describe each of these components in more detail.
The Spring Integration JPA support has been tested using the following persistence providers:
Hibernate
OpenJPA
EclipseLink
When using a persistence provider, please ensure that the provider is compatible with JPA 2.0.
Each of the provided components will use the
o.s.i.jpa.core.JpaExecutor
class which in turn will use an implementation of the
o.s.i.jpa.core.JpaOperations
interface.
JpaOperations operates like a
typical Data Access Object (DAO) and provides methods such as
find,
persist,
executeUpdate etc. For most use cases the provided
default implementation
o.s.i.jpa.core.DefaultJpaOperations
should be sufficient. Nevertheless, you have the option to
optionally specify your own implementation in case you require custom
behavior.
For initializing a
JpaExecutor
you have to use one of 3 available constructors that accept one of:
EntityManagerFactory
EntityManager or
JpaOperations
Java Configuration Example
The following example of a JPA Retrieving Outbound Gateway is configured purely through Java. In typical usage scenarios you will most likely prefer the XML Namespace Support described further below. However, the example illustrates how the classes are wired up. Understanding the inner workings can also be very helpful for debugging or customizing the individual JPA components.
First, we instantiate a
JpaExecutor using an
EntityManager as constructor argument.
The
JpaExecutor is then in return used as
constructor argument for the
o.s.i.jpa.outbound.JpaOutboundGateway
and the
JpaOutboundGateway will be passed as constructor
argument into the
EventDrivenConsumer.
<bean id="jpaExecutor" class="o.s.i.jpa.core.JpaExecutor"> <constructor-arg <property name="entityClass" value="o.s.i.jpa.test.entity.StudentDomain"/> <property name="jpaQuery" value="select s from Student s where s.id = :id"/> <property name="expectSingleResult" value="true"/> <property name="jpaParameters" > <util:list> <bean class="org.springframework.integration.jpa.support.JpaParameter"> <property name="name" value="id"/> <property name="expression" value="payload"/> </bean> </util:list> </property> </bean> <bean id="jpaOutboundGateway" class="o.s.i.jpa.outbound.JpaOutboundGateway"> <constructor-arg <property name="gatewayType" value="RETRIEVING"/> <property name="outputChannel" ref="studentReplyChannel"/> </bean> <bean id="getStudentEndpoint" class="org.springframework.integration.endpoint.EventDrivenConsumer"> <constructor-arg <constructor-arg </bean>
When using XML namespace support, the underlying parser classes will instantiate the relevant Java classes for you. Thus, you typically don't have to deal with the inner workings of the JPA adapter. This section will document the XML Namespace Support provided by the Spring Integration and will show you how to use the XML Namespace Support to configure the Jpa components.
Certain configuration parameters are shared amongst all JPA components and are described below:
auto-startup
Lifecycle attribute signaling if this component should
be started during Application Context startup.
Defaults to
true.
Optional.
id
Identifies the underlying Spring bean definition, which
is an instance of either
EventDrivenConsumer
or
PollingConsumer.
Optional.
entity-manager-factory
The reference to the JPA Entity Manager Factory
that will be used by the adapter to create the
EntityManager.
Either this attribute or the entity-manager attribute
or the jpa-operations attribute must be provided.
entity-manager
The reference to the JPA Entity Manager that will be used by the component. Either this attribute or the enity-manager-factory attribute or the jpa-operations attribute must be provided.
<bean id="entityManager" class="org.springframework.orm.jpa.support.SharedEntityManagerBean"> <property name="entityManagerFactory" ref="entityManagerFactoryBean" /> </bean>
jpa-operations
Reference to a bean implementing the
JpaOperations interface. In rare cases
it might be advisable to provide your own implementation
of the
JpaOperations interface, instead
of relying on the default implementation
org.springframework.integration.jpa.core.DefaultJpaOperations.
As
JpaOperations wraps the necessary
datasource; the JPA Entity Manager or JPA Entity Manager Factory
must not be provided, if the jpa-operations
attribute is used.
entity-class
The fully qualified name of the entity class. The exact semantics of this attribute vary, depending on whether we are performing a persist/update operation or whether we are retrieving objects from the database.
When retrieving data, you can specify the entity-class attribute to indicate that you would like to retrieve objects of this type from the database. In that case you must not define any of the query attributes ( jpa-query, native-query or named-query )
When persisting data, the entity-class attribute will indicate the type of object to persist. If not specified (for persist operations) the entity class will be automatically retrieved from the Message's payload.
jpa-query
Defines the JPA query (Java Persistence Query Language) to be used.
native-query
Defines the native SQL query to be used.
named-query
This attribute refers to a named query. A named query can either be defined in Native SQL or JPAQL but the underlying JPA persistence provider handles that distinction internally.
For providing parameters, the parameter XML sub-element can be used. It provides a mechanism to provide parameters for the queries that are either based on the Java Persistence Query Language (JPQL) or native SQL queries. Parameters can also be provided for Named Queries.
Expression based Parameters
<int-jpa:parameter
Value based Parameters
<int-jpa:parameter
Positional Parameters
<int-jpa:parameter <int-jpa:parameter
All JPA operations like Insert, Update and Delete require a transaction to be active whenever they are performed. For Inbound Channel Adapters there is nothing special to be done, it is similar to the way we configure transaction managers with pollers used with other inbound channel adapters.The xml snippet below shows a sample where a transaction manager is configured with the poller used with an Inbound Channel Adapter.
<int-jpa:inbound-channel-adapter <int:poller <int:transactional </int:poller> </int-jpa:inbound-channel-adapter>
However, it may be necessary to specifically start a transaction when using an Outbound Channel Adapter/Gateway. If a DirectChannel is an input channel for the outbound adapter/gateway, and if transaction is active in the current thread of execution, the JPA operation will be performed in the same transaction context. We can also configure to execute this JPA operation in a new transaction as below.
<int-jpa:outbound-gateway <int-jpa:parameter <int-jpa:parameter <int-jpa:transactional </int-jpa:outbound-gateway>
As we can see above, the transactional sub element of the outbound gateway/adapter will be used to specify the transaction attributes. It is optional to define this child element if you have DirectChannel as an input channel to the adapter and you want the adapter to execute the operations in the same transaction context as the caller. If, however, you are using an ExecutorChannel, it is required to have the transactional sub element as the invoking client's transaction context is not propagated.
An Inbound Channel Adapter is used to execute a select query over the
database using JPA QL and return the result. The message payload will be either a single
entity or a
List of entities. Below is a sample xml snippet that shows
a sample usage of inbound-channel-adapter.
<int-jpa:inbound-channel-adapter
<int:poller <int:transactional </int:poller> </int-jpa:inbound-channel-adapter><int:poller <int:transactional </int:poller> </int-jpa:inbound-channel-adapter>
<int-jpa:inbound-channel-adapter <int:poller </int-jpa:inbound-channel-adapter>> <int:poller </int-jpa:inbound-channel-adapter>
The JPA Outbound channel adapter allows you to accept messages over a request channel. The payload can either be used as the entity to be persisted, or used along with the headers in parameter expressions for a defined JPQL query to be executed. In the following sub sections we shall see what those possible ways of performing these operations are.
The XML snippet below shows how we can use the Outbound Channel Adapter to persist an entity to the database.
<int-jpa:outbound-channel-adapter
As we can see above these 4 attributes of the outbound-channel-adapter are all we need to configure it to accept entities over the input channel and process them to PERSIST,MERGE or DELETE it from the underlying data source.
We have seen in the above sub section how to perform a PERSIST action using an entity We will now see how to use the outbound channel adapter which uses JPA QL (Java Persistence API Query Language)
<int-jpa:outbound-channel-adapter
<int-jpa:parameter<int-jpa:parameter
<int-jpa:parameter </int-jpa:outbound-channel-adapter><int-jpa:parameter </int-jpa:outbound-channel-adapter>
The parameter sub element accepts an attribute name which corresponds to the named parameter specified in the provided JPA QL (point 2 in the above mentioned sample). The value of the parameter can either be static or can be derived using an expression. The static value and the expression to derive the value is specified using the value and the expression attributes respectively. These attributes are mutually exclusive.
If the value attribute is specified we can provide an optional
type attribute. The value of this attribute is the fully qualified name of the class
whose value is represented by the value attribute. By default
the type is assumed to be a
java.lang.String.
<int-jpa:outbound-channel-adapter ... > <int-jpa:parameter <int-jpa:parameter </int-jpa:outbound-channel-adapter>
As seen in the above snippet, it is perfectly valid to use multiple parameter sub elements within an outbound channel adapter
tag and derive some parameters using expressions and some with static value. However, care should
be taken not to specify the same parameter name multiple times, and, provide one parameter sub element for
each named parameter specified in the JPA query. For example, we are specifying two parameters
level and name where level attribute is a static value of type
java.lang.Integer, where as the name attribute is derived from the payload of the message
In this section we will see how to use native queries to perform the operations using JPA outbound channel adapter. Using native queries is similar to using JPA QL, except that the query specified here is a native database query. By choosing native queries we lose the database vendor independence which we get using JPA QL.
One of the things we can achieve using native queries is to perform database inserts, which is not possible using JPA QL (To perform inserts we send JPA entities to the channel adapter as we have seen earlier). Below is a small xml fragment that demonstrates the use of native query to insert values in a table. Please note that we have only mentioned the important attributes below. All other attributes like channel, entity-manager and the parameter sub element has the same semantics as when we use JPA QL.
<int-jpa:outbound-channel-adapter
We will now see how to use named queries after seeing using entity, JPA QL and native query in previous sub sections. Using named query is also very similar to using JPA QL or a native query, except that we specify a named query instead of a query. Before we go further and see the xml fragment for the declaration of the outbound-channel-adapter, we will see how named JPA named queries are defined.
In our case, if we have an entity called
Student, then we have the following in the class to define
two named queries selectStudent and updateStudent. Below is a way to define
named queries using annotations
@Entity @Table(name="Student") @NamedQueries({ @NamedQuery(name="selectStudent", query="select s from Student s where s.lastName = 'Last One'"), @NamedQuery(name="updateStudent", query="update Student s set s.lastName = :lastName, lastUpdated = :lastUpdated where s.id in (select max(a.id) from Student a)") }) public class Student { ...
You can alternatively use the orm.xml to define named queries as seen below
<entity-mappings ...> ... <named-query <query>select s from Student s where s.lastName = 'Last One'</query> </named-query> </entity-mappings>
Now that we have seen how we can define named queries using annotations or using orm.xml, we will now see a small xml fragment for defining an outbound-channel-adapter using named query
<int-jpa:outbound-channel-adapter
<int-jpa:outbound-channel-adapter <int:poller/> <int-jpa:transactional/>> <int:poller/> <int-jpa:transactional/>
> <int-jpa:parameter/>> <int-jpa:parameter/>
> </int-jpa:outbound-channel-adapter>> </int-jpa:outbound-channel-adapter>
The JPA Inbound Channel Adapter allows you to poll a database in order to retrieve one or more JPA entities and the retrieved data is consequently used to start a Spring Integration flow using the retrieved data as message payload.
Additionally, you may use JPA Outbound Channel Adapters at the end of your flow in order to persist data, essentially terminating the flow at the end of the persistence operation.
However, how can you execute JPA persistence operation in the middle of a flow? For example, you may have business data that you are processing in your Spring Integration message flow, that you would like to persist, yet you still need to execute other components further downstream. Or instead of polling the database using a poller, you rather have the need to execute JPQL queries and retrieve data actively which then is used to being processed in subsequent components within your flow.
This is where JPA Outbound Gateways come into play. They give you the ability to persist data as well as retrieving data. To facilitate these uses, Spring Integration provides two types of JPA Outbound Gateways:
Whenever the Outbound Gateway is used to perform an action that saves, updates or soley deletes some records in the database, you need to use an Updating Outbound Gateway gateway. If for example an entity is used to persist it, then a merged/persisted entity is returned as a result. In other cases the number of records affected (updated or deleted) is returned instead.
When retrieving (selecting) data from the database, we use a Retrieving Outbound Gateway. With a Retrieving Outbound Gateway gateway, we can use either JPQL, Named Queries (native or JPQL-based) or Native Queries (SQL) for selecting the data and retrieving the results.
An Updating Outbound Gateway is functionally very similar to an Outbound Channel Adapter, except that an Updating Outbound Gateway is used to send a result to the Gateway's reply channel after performing the given JPA operation.
A Retrieving Outbound Gateway is quite similar to an Inbound Channel Adapter.
This similarity was the main factor to use the central
JpaExecutor class to unify common functionality
as much as possible.
Common for all JPA Outbound Gateways and simlar to the outbound-channel-adapter, we can use
Entity classes
JPA Query Language (JPQL)
Native query
Named query
for performing various JPA operations. For configuration examples please see Section 18.6.4, “JPA Outbound Gateway Samples”.
JPA Outbound Gateways always have access to the Spring Integration Message as input. As such the following parameters are available:
parameter-source-factory
An instance of
o.s.i.jpa.support.parametersource.ParameterSourceFactory
that will be used to get an instance of
o.s.i.jpa.support.parametersource.ParameterSource.
The ParameterSource is used to resolve the
values of the parameters provided in the query. The
parameter-source-factory attribute is ignored,
if operations are performed using a JPA entity. If a
parameter sub-element is used, the factory
must be of type
ExpressionEvaluatingParameterSourceFactory,
located in package o.s.i.jpa.support.parametersource.
Optional.
use-payload-as-parameter-source
If set to true, the payload of the Message
will be used as a source for providing parameters. If set to
false, the entire Message will be available
as a source for parameters. If no JPA Parameters are passed in,
this property will default to true.
This means that using a default
BeanPropertyParameterSourceFactory, the
bean properties of the payload will be used as a source for
parameter values for the to-be-executed JPA query. However, if
JPA Parameters are passed in, then this property will by default
evaluate to false. The reason is that JPA
Parameters allow for SpEL Expressions to be provided and therefore
it is highly beneficial to have access to the entire Message,
including the Headers.
<int-jpa:updating-outbound-gateway <int:poller/> <int-jpa:transactional/> <int-jpa:parameter <int-jpa:parameter </int-jpa:updating-outbound-gateway> <int:poller/> <int-jpa:transactional/> <int-jpa:parameter <int-jpa:parameter </int-jpa:updating-outbound-gateway>
<int-jpa:retrieving-outbound-gateway <int:poller></int:poller> <int-jpa:transactional/> <int-jpa:parameter <int-jpa:parameter </int-jpa:retrieving-outbound-gateway> <int:poller></int:poller> <int-jpa:transactional/> <int-jpa:parameter <int-jpa:parameter </int-jpa:retrieving-outbound-gateway>
This section contains various examples of the Updating Outbound Gateway and Retrieving Outbound Gateway
Update using an Entity Class
In this example an Updating Outbound Gateway
is persisted using solely the entity class
org.springframework.integration.jpa.test.entity.Student
as JPA defining parameter.
<int-jpa:updating-outbound-gateway
Update using JPQL
In this example, we will see how we can update an entity using the Java Persistence Query Language (JPQL). For this we use an Updating Outbound Gateway.
<int-jpa:updating-outbound-gateway <int-jpa:parameter <int-jpa:parameter </int-jpa:updating-outbound-gateway> <int-jpa:parameter <int-jpa:parameter </int-jpa:updating-outbound-gateway>
When sending a message with a String payload and containing a header rollNumber with a long value, the last name of the student with the provided roll number is updated to the value provided in the message payload. When using an UPDATING gateway, the return value is always an integer value which denotes the number of records affected by execution of the JPA QL.
Retrieving an Entity using JPQL
The following examples uses a Retrieving Outbound Gateway together with JPQL to retrieve (select) one or more entities from the database.
<int-jpa:retrieving-outbound-gateway <int-jpa:parameter <int-jpa:parameter </int-jpa:outbound-gateway>
Update using a Named Query
Using a Named Query is basically the same as using a JPQL query directly. The difference is that the named-query attribute is used instead, as seen in the xml snippet below.
<int-jpa:updating-outbound-gateway <int-jpa:parameter <int-jpa:parameter </int-jpa:outbound-gateway> | http://static.springsource.org/spring-integration/docs/2.2.0.RELEASE/reference/html/jpa.html | CC-MAIN-2013-20 | en | refinedweb |
Write Once, Communicate… Nowhere?
Back in August I made the off-hand comment "thank goodness for RXTX" when talking about communicating with serial ports using Java. I've been meaning to revisit that topic in a bit more detail, and now is as good a time as any.
More Insights
White Papers
- Transforming Change into a Competitive Advantage
- Accelerating Economic Growth and Vitality through Smarter Public Safety Management
Reports
- Strategy: How Cybercriminals Choose Their Targets and Tactics
- Strategy: Apple iOS 6: 6 Features You Need to Know
WebcastsMore >>
I can't decide if I love Java or hate it. I've written books on Java, produced a special DDJ issue on Java, and was a Java columnist for one of our sister magazines, back when we had those. I even edited the Dobb's Java newsletter for a while. There must be something I like. However, I always find I like it more in the server and network environment. For user interfaces, I tend towards Qt these days. For embedded systems, I haven't warmed to Java, even though there are a few options out there, some of which I will talk about sometime later this year.
The lines, however, are blurring between all the different kinds of systems. You almost can't avoid Java somewhere. The Arduino uses Java internally. So does Eclipse. Many data acquisition systems need to connect over a network and Java works well for that, either on the device or running on a host PC (or even a Raspberry Pi).
Another love-hate relationship I have is with the serial port. My first real job was with a company that made RS-232 devices, so like many people I have a long history with serial communications. We keep hearing that the office is going paperless and serial ports are dead. Neither of those seems to have much chance of being true anytime soon. Even a lot of USB devices still look like serial ports, so it is pretty hard to completely give up on the old standard, at least for now.
It isn't that Java doesn't support the serial port. Sun released the Java Communications API in 1997 and Oracle still nominally supports it. It covers using serial ports and parallel ports (which are mostly, but not completely, dead). The problem has always been in the implementation.
When reading the official Oracle page, I noticed that there are reference implementations for Solaris (both SPARC and x86) and Linux x86. I guess they quit trying to support Windows, which is probably a good thing. The last time I tried to use the Windows port, it suffered from many strange errors. For example, if the library wasn't on the same drive as the Java byte code, you couldn't enumerate ports. Things like that.
Pretty much everyone I know has switched to using an open project's implementation, RXTX. The Arduino, for example, uses this set of libraries to talk to the serial port (even if the serial port is really a USB port). The project implements the "official" API and requires a native library, but works on most flavors of Windows, Linux, Solaris, and MacOS. The project is pretty much a drop-in replacement unless you use the latest version.
The 2.1 series of versions still implement the standard API (more or less), but they change the namespace to gnu.io.*. If you use the older 2.0 series, you don't have to even change the namespace from the official examples.
Speaking of examples, instead of rewriting code here, I'll simply point you to the RXTX examples if you want to experiment.
One thing I did find interesting, however, is that RXTX uses JNI to interface with the native library (which, of course, varies by platform). I have always found JNI to be a pain to use (although, you don't use JNI yourself if you are just using RXTX in your program). I much prefer JNA. JNA is another way to call native code in Java programs, and it is much easier to manage. Granted, you can get slightly better performance in some cases using JNI, but in general, modern versions of JNA perform well and typically slash development costs.
I did a quick search to see if there was something equivalent to RXTX but using JNA. There is PureJavaComm. Even if you don't want to switch off RXTX, the analysis of the design of PureJavaComm at that link is interesting reading. Using JNA, the library directly calls operating system calls and avoids having a platform-specific shared library, which is a good thing for many reasons.
Have you managed to dump all your serial ports? Leave a comment and share your experiences. | http://www.drdobbs.com/embedded-systems/write-once-communicate-nowhere/240149018?cid=SBX_ddj_related_mostpopular_default_testing&itc=SBX_ddj_related_mostpopular_default_testing | CC-MAIN-2013-20 | en | refinedweb |
03 May 2007 06:17 [Source: ICIS news]
By Jeanne Lim
(updates with latest developments, analyst and market comments)
SINGAPORE (ICIS news)--A fire broke out at ExxonMobil’s Singapore refinery at 01:15 local time (18:15 GMT) on Thursday, killing two people and injuring two others, a company spokeswoman said.
The fire at the refinery at Pulau Ayer Chawan, ?xml:namespace>
The 115,000 bbl/day CDU, which is one of two at the refinery, was shut down immediately following the fire while the other, which has a capacity of 185,000 bbl/day, continues to operate, Ho added.
It wasn’t clear whether the company’s aromatics unit was affected by the fire at the
“We are still checking the refinery situation and cannot comment further on this,” he added.
The 300,000 bbl/day refinery processes feedstock for its petrochemical plant producing 400,000 tonnes/year of paraxylene (PX) and 150,000 tonnes/year of benzene.
So far, it appears that downstream production has not been affected by the fire at the refinery.
“No impact has been heard on the market as far as polymers as concerned,” said Aaron Yap, a trader at Singapore-based petrochemical firm, Integra.
The fire didn’t have a substantial impact on the naphtha market during the morning, a Singapore-based broker said.
The company said that its other refinery at its Jurong site was not affected. The CDUs at Jurong have a combined capacity of 225,000 bbl/day.
Meanwhile, the cause of the fire has yet to be determined, ExxonMobil’s Ho told ICIS news.
All parties involved in the fire were contractors, and the two injured have been sent to the hospital, the firm said in a statement issued a few hours after the incident.
“We are sorry that this has happened. We are greatly saddened by this tragic event and express our deepest sympathy to the families of those affected,” said the refinery’s manager Steve Blume in the statement.
The company said that it was cooperating with the Singapore Civil Defence Force and other relevant agencies to investigate the incident.
James Dennis and Mahua Mitra. | http://www.icis.com/Articles/2007/05/03/9025850/one+cdu+down+in+exxonmobil+singapore+fire.html | CC-MAIN-2013-20 | en | refinedweb |
MIDP's main user -interface classes are based on abstractions that can be adapted to devices that have different display and input capabilities. Several varieties of prepackaged screen classes make it easy to create a user interface. Screens have a title and an optional ticker. Most importantly, screens can contain Commands , which the implementation makes available to the user. Your application can respond to commands by acting as a listener object. This chapter described TextBox , a screen for accepting user input, and Alert , a simple screen for displaying information. In the next chapter, we'll get into the more complex List and Form classes.
In the last chapter, you learned about MIDP's simpler screen classes. Now we're getting into deeper waters, with screens that show lists and screens with mixed types of controls.
After TextBox and Alert , the next simplest Screen is List , which allows the user to select items (called elements ) from a list of choices. A text string or an image is used to represent each element in the list. List supports the selection of a single element or of multiple elements.
There are two main types of List , denoted by constants in the Choice interface:
MULTIPLE designates a list where multiple elements may be selected simultaneously .
EXCLUSIVE specifies a list where only one element may be selected. It is akin to a group of radio buttons .
For both MULTIPLE and EXCLUSIVE lists, selection and confirmation are separate steps. In fact, List does not handle confirmation for these types of lists-your MIDlet will need to provide some other mechanism (probably a Command ) that allows users to confirm their choices. MULTIPLE lists allow users to select and deselect various elements before confirming the selection. EXCLUSIVE lists permit users to change their minds several times before confirming the selection.
Figure 6-1a shows an EXCLUSIVE list. The user navigates through the list using the arrow up and down keys. An element is selected by pressing the select button on the device. Figure 6-1b shows a MULTIPLE list. It works basically the same way as an EXCLUSIVE list, but multiple elements can be selected simultaneously. As before, the user moves through the list with the up and down arrow keys. The select key toggles the selection of a particular element.
Figure 6-1: List types: (a) EXCLUSIVE and (b) MULTIPLE lists
A further refinement of EXCLUSIVE also exists: IMPLICIT lists combine the steps of selection and confirmation. The IMPLICIT list acts just like a menu. Figure 6-2 shows an IMPLICIT list with images and text for each element. When the user hits the select key, the list immediately fires off an event, just like a Command . An IMPLICIT list is just like an EXCLUSIVE list in that the user can only select one of the list elements. But with IMPLICIT lists, there's no opportunity for the user to change his or her mind before confirming the selection.
Figure 6-2: IMPLICIT lists combine selection and confirmation.
When the user makes a selection in an IMPLICIT List , the commandAction() method of the List 's CommandListener is invoked. A special value is passed to commandAction() as the Command parameter:
public static final Command SELECT_COMMAND
For example, you can test the source of command events like this:
public void commandAction(Command c, Displayable s) { if (c == nextCommand) // ... else if (c == List.SELECT_COMMAND) // ... }
There's an example at the end of this chapter that demonstrates an IMPLICIT List .
To create a List , specify a title and a list type. If you have the element names and images available ahead of time, you can pass them in the constructor:
public List(String title, int type) public List(String title, int type, String[] stringElements, Image[] imageElements)
The stringElements parameter cannot be null ; however, stringElements or imageElements may contain null array elements. If both the string and image for a given list element are null , the element is displayed blank. If both the string and the image are defined, the element will display using the image and the string.
Some List s will have more elements than can be displayed on the screen. Indeed, the actual number of elements that will fit varies from device to device. But don't worry: List implementations automatically handle scrolling up and down to show the full contents of the List .
Our romp through the List class yields a first look at images. Instances of the javax.microedition.lcdui.Image class represent images in the MIDP. The specification dictates that implementations be able to load images files in PNG format. [1] This format supports both a transparent color and animated images.
Image has no constructors, but the Image class offers four createImage() factory methods for obtaining Image instances. The first two are for loading images from PNG data.
public static Image createImage(String name ) public static Image createImage(byte[] imagedata , int imageoffset, int imagelength)
The first method attempts to create an Image from the named file, which should be packaged inside the JAR that contains your MIDlet. You should use an absolute pathname or the image file may not be found. The second method creates an Image using data in the supplied array. The data starts at the given array offset, imageoffset , and is imagelength bytes long.
Images may be mutable or immutable . Mutable Images can be modified by calling getGraphics() and using the returned Graphics object to draw on the image. (For full details on Graphics , see Chapter 9.) If you try to call getGraphics() on an immutable Image , an IllegalStateException will be thrown.
The two createImage() methods described above return immutable Images . To create a mutable Image , use the following method:
public static Image createImage(int width, int height)
Typically you would create a mutable Image for off-screen drawing, perhaps for an animation or to reduce flicker if the device's display is not double buffered.
Any Image you pass to Alert , ChoiceGroup , ImageItem , or List should be immutable. To create an immutable Image from a mutable one, use the following method: public static Image createImage(Image image) .
List provides methods for adding items, removing elements, and examining elements. Each element in the List has an index. The first element is at index 0, then next at index 1, and so forth. You can replace an element with setElement() or add an element to the end of the list with appendElement() . The insertElement() method adds a new element to the list at the given index; this bumps all elements at that position and higher up by one.
public void setElement(int index, String stringElement, Image imageElement) public void insertElement(int index, String stringElement, Image imageElement) public int appendElement(String stringElement, Image imageElement)
You can examine the string or image for a given element by supplying its index. Similarly, you can use deleteElement() to remove an element from the List .
public String getString(int index) public Image getImage(int index) public void deleteElement(int index)
Finally, the size () method returns the number of elements in the List .
public int size()
You can find out whether a particular element in a List is selected by supplying the element's index to the following method:
public boolean isSelected(int index)
For EXCLUSIVE and IMPLICIT lists, the index of the single selected element is returned from the following method:
public int getSelectedIndex()
If you call getSelectedIndex() on a MULTIPLE list, it will return−1.
To change the current selection programmatically, use setSelectedIndex() .
public void setSelectedIndex(int index, boolean selected)
Finally, List allows you to set or get the selection state en masse with the following methods. The supplied arrays must have as many array elements as there are list elements.
public int getSelectedFlags(boolean[] selectedArray_return) public void setSelectedFlags(boolean[] selectedArray)
The example in Listing 6-1 shows a simple MIDlet that could be part of a travel reservation application. The user chooses what type of reservation to make. This example uses an IMPLICIT list, which is essentially a menu.
Listing 6-1: The TravelList source code.
import java.io.*; import javax.microedition.midlet.*; import javax.microedition.lcdui.*; public class TravelList extends MIDlet { public void startApp() { final String[] stringElements = { "Airplane", "Car", "Hotel" }; Image[] imageElements = { loadImage("/airplane.png"), loadImage("/car.png"), loadImage("/hotel.png") }; final List list = new List("Reservation type", List.IMPLICIT, stringElements, imageElements); final Command nextCommand = new Command("Next", Command.SCREEN, 0); Command quitCommand = new Command("Quit", Command.SCREEN, 0); list.addCommand(nextCommand); list.addCommand(quitCommand); list.setCommandListener(new CommandListener() { public void commandAction(Command c, Displayable s) { if (c == nextCommand c == List.SELECT_COMMAND) { int index = list.getSelectedIndex(); System.out.println("Your selection: " + stringElements[index]); // Move on to the next screen. Here, we just exit. notifyDestroyed(); } else notifyDestroyed(); } }); Display.getDisplay(this).setCurrent(list); } public void pauseApp() public void destroyApp(boolean unconditional) private Image loadImage(String name) { Image image = null; try { image = Image.createImage(name); } catch (IOException ioe) { System.out.println(ioe); } return image; } }
To see images in this example, you'll need to either download the examples from the book's Web site or supply your own images. With the J2MEWTK, image files should go in the res directory of your J2MEWTK project directory. TravelList expects to find three images named airplane.png, car.png , and hotel.png .
Construction of the List itself is very straightforward. Our application also includes a Next command and a Quit command, which are both added to the List . An inner class is registered as the CommandListener for the List . If the Next command or the List 's IMPLICIT command are fired off, we simply retrieve the selected item from the List and print it to the console.
The Next command, in fact, is not strictly necessary in this example since you can achieve the same result by clicking the select button on one of the elements in the List . Nevertheless, it might be a good idea to leave it there. Maybe all of the other screens in your application have a Next command, so you could keep it for user interface consistency. It never hurts to provide the user with more than one way of doing things, either.
The difference between EXCLUSIVE and IMPLICIT lists can be subtle. Try changing the List in this example to EXCLUSIVE to see how the user experience is different.
[1] MIDP implementations are not required to recognize all varieties of PNG files. The documentation for the Image class has the specifics. | http://flylib.com/books/en/1.520.1.42/1/ | CC-MAIN-2013-20 | en | refinedweb |
NAME
munmap -- remove a mapping
LIBRARY
Standard C Library (libc, -lc)
SYNOPSIS
#include <sys/mman.h> int munmap(void *addr, size_t len);
DESCRIPTION
The munmap() system call deletes the mappings for the specified address range, and causes further references to addresses within the range to generate invalid memory references.
RETURN VALUES
The munmap() function returns the value 0 if successful; otherwise the value -1 is returned and the global variable errno is set to indicate the error.
ERRORS
The munmap() system call will fail if: [EINVAL] The addr argument was not page aligned, the len argument was zero or negative, or some part of the region being unmapped is outside the valid address range for a process.
SEE ALSO
madvise(2), mincore(2), mmap(2), mprotect(2), msync(2), getpagesize(3)
HISTORY
The munmap() system call first appeared in 4.4BSD. | http://manpages.ubuntu.com/manpages/oneiric/man2/munmap.2freebsd.html | CC-MAIN-2013-20 | en | refinedweb |
ConcurrentDictionary<TKey, TValue> Class
Represents a thread-safe collection of key-value pairs that can be accessed by multiple threads concurrently.
System.Collections.Concurrent.ConcurrentDictionary<TKey, TValue>
Assembly: mscorlib (in mscorlib.dll)
[SerializableAttribute] [ComVisibleAttribute(false)] [HostProtectionAttribute(SecurityAction.LinkDemand, Synchronization = true, ExternalThreading = true)] public class ConcurrentDictionary<TKey, TValue> : IDictionary<TKey, TValue>, ICollection<KeyValuePair<TKey, TValue>>, IEnumerable<KeyValuePair<TKey, TValue>>, IDictionary, ICollection, IEnumerable
Type Parameters
- TKey
The type of the keys in the dictionary.
- TValue
The type of the values in the dictionary.
The ConcurrentDictionary<TKey, TValue> type exposes the following members. *. | http://msdn.microsoft.com/en-us/library/dd287191(v=VS.100).aspx | CC-MAIN-2013-20 | en | refinedweb |
RE: ViewcvsOxley, David wrote:
> Yes, I read the install file. It said to copy the files
> under mod_python to a directory hosted by Apache.
>
> I've just run "cvs up -D 06/14/2003" on viewcvs and got it
> working with the main links, but the links at the top ([svn]
> /Project/trunk) are giving the original error. HEAD of
> viewcvs is completely hosed and unusable unless you type in
> the urls manually. i.e. None of the links work..
>
> Dave
This appears to be caused by bug in mod_python. I posted a bug report to the
mod_python mailing list about this (for some reason it hasn't shown up yet).
Anyway, here's a temporary workaround for viewcvs:
--- sapi.py.orig Fri Aug 8 19:29:24 2003
+++ sapi.py Sat Aug 9 08:06:31 2003
@@ -282,6 +282,18 @@
def getenv(self, name, value = None):
try:
+ if name == 'SCRIPT_NAME':
+ path_info = self.request.subprocess_env.get('PATH_INFO', None)
+ if path_info and path_info[-1] == '/':
+ script_name = self.request.subprocess_env['SCRIPT_NAME']
+ path_len = len(path_info) - 1
+ if path_len:
+ assert script_name[-path_len:] == path_info[:-1]
+ return script_name[:-path_len]
+ else:
+ assert script_name[-1] == '/'
+ return script_name[:-1]
+
return self.request.subprocess_env[name]
except KeyError:
return value
- Russ
---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@subversion.tigris.org
For additional commands, e-mail: dev-help@subversion.tigris.org
Received on Sat Aug 9 17:38:00 2003
This is an archived mail posted to the Subversion Dev
mailing list. | http://svn.haxx.se/dev/archive-2003-08/0457.shtml | CC-MAIN-2013-20 | en | refinedweb |
Generally, I've seen that warning when the WordPerfect
importer mis-identifies the file type, and decides to
import the document. If you have that plugin
installed, try removing it.
--- Scott Prelewicz <scottp@noein.com> wrote:
>
>
> This could be for any file. Specifically, we want to
> be able to convert
> uploaded resumes to pdf for the client. Actually,
> the files could be
> Word files or .txt files, though 95% of them will be
> Word files.
>
>
> -----Original Message-----
> From: Dom Lachowicz [mailto:domlachowicz@yahoo.com]
> Sent: Tuesday, June 06, 2006 2:30 PM
> To: Scott Prelewicz; abiword-user@abisource.com
> Subject: Re: AbiWord - .doc to pdf command line
> conversion
>
> Is this for just 1-2 files in particular, or any
> Word
> file?
>
> --- Scott Prelewicz <scottp@noein.com> wrote:
>
> >
> > Hello All,
> >
> > I am trying to convert .doc files to .pdf
> > programmatically. With help
> > from those on IRC, I've managed to make some
> > progress on this however I
> > am getting stuck at the error below while trying
> to
> > test on the command
> > line interface.
> >
> > I am not familiar with AbiWord much, a newbie, but
> > I'm thinking it's
> > possibly an installation issue? Correct? And if
> so,
> > what should I do? :)
> >
> > Running AbiWord 2.4 on OpenBSD 3.6.
> >
> >
> >
> > >> abiword --to=/tmp/foo.pdf /tmp/bar.doc
> >
> > (abiword:14375): libgsf:msole-CRITICAL **:
> > ole_get_block: assertion
> > `block < ole->info->max_block' failed
> >
> > ** (abiword:14375): WARNING **: failed request
> with
> > status 1030
> > Aborted
> >
> >
> >
> >
> > /tmp/foo.pdf DOES get created, however it is a
> zero
> > length file unable
> > to be read by Acrobat reader. The test doc is a
> > simple one sentence Word
> > document.
> >
> > Thank you for any help,
> >
> > Sc Jun 6 20:40:26 2006
This archive was generated by hypermail 2.1.8 : Tue Jun 06 2006 - 20:40:26 CEST | http://www.abisource.com/mailinglists/abiword-user/2006/Jun/0009.html | CC-MAIN-2013-20 | en | refinedweb |
25 February 2011 18:43 [Source: ICIS news]
LONDON (ICIS)--Lubricants demand in India, which is expected to overtake China as the world’s fastest growing major economy, could expand by more than 19% from 2009 to 2.23m tonnes in five years, said an industry expert on Friday.
Industrial lubricants are likely to be the fastest growing segment, followed by the consumer and commercial sectors, said Kline & Co’s Geeta Agashe.
Agashe spoke at the 15th ICIS World Base Oils & Lubricants Conference in ?xml:namespace>
Industrial oils and fluids, which accounted for slightly over half of India’s total lubricants demand in 2009, was projected to grow at 4.5% per year to 1.18m tonnes in 2014, according to Agashe. The country’s total lubricants consumption was at an estimated 1.86m tonnes in 2009.
A sizeable portion of the growth in industrial lubricants demand would come from the power generation plants, as well as the automotive and auto-component industries, she added.
Consumer lubricants consumption, driven mainly by rising car and motorcycle population, could expand to 214,000 tonnes by 2014, based on a projected annual growth rate of 3.3% per year, Agashe said.
The market is also expected to shift towards better quality and lighter viscosity engine oils, partly because of higher emission standards and stricter requirements by the original equipment manufacturers (OEM).
By 2014, 5W and 0W lower viscosity engine oils are expected to make up around 10% of the market in
Growth in the commercial automotive lubricants sector, which includes heavy duty engine oil for buses and trucks, as well as lubricants for agricultural equipment such as tractors, is expected to increase at a yearly rate of 2.6% to 834,000 tonnes in 2014, she said.. | http://www.icis.com/Articles/2011/02/25/9439042/indias-lubricants-demand-predicted-to-see-strong-growth.html | CC-MAIN-2013-20 | en | refinedweb |
Join devRant
Search - "unofficial"
- /*
*:
-
- To the Windows 10 users of the unofficial devRant client.
I will release soon the v1.4, but I'm already thinking about the future of this project.
What do you think about this concept?
Let me know if you like/hate it.
Any suggestion will be considered.
Thanks!61
- //
//
- Unofficial devRant Racing Team in Forza Motorsport 7 basing on Ford Falcon in Supercars series - probably most geeky racing team ever!12
-.6
- I'm so excited about this small little feature I implemented in devRant unofficial UWP.
Can't stop using it… 😅
I hope you'll enjoy it (coming in v2.0.0-beta15 very soon). 😁14
- OK i saw extension for chrome allow you to explore devRant from litlle window in chrome but not for firefox so today i like to introduce the same but for firefox !!!...
credits for idea goes to @Raspik
Enjoy !!!46
- This rant just fucked up devRant unofficial for Windows 10.
It causes a JSON syntax error in the API response. 🤣
Thanks @kwilliams! 😁
-
-
- As most of you already know, the mentioned users in a rant don't receive a notification so you have to mention them again in a comment.
After a suggestion of @Cozyplanes I decided to implement a feature that make this automatic.
Just check the box and forget about it.
It will be available soon in the next update of devRant unofficial UWP!7
- //
//53
-
-! :)
- Best: My first app for Windows 10, "devRant unofficial".
Worst: A website for a client using Facebook APIs which don't want to work properly.4
- //
-
-
- Guys addon devRant in Firefox is available to Download from HERE:...
ENJOY!!!!!!!!!!!!
- - Release the stable v2 of devRant unofficial UWP
- Work on a new app
- Improve everyday and never give up1
-
- The rant filters are now available also on Windows 10 (on Desktop, Mobile, HoloLens, Xbox One and Surface Hub | Anniversary Update and later)!
Download:
- #include <stdio.h>
#include <day.h>
#include <birthday. h>
int main() {
if (Yesterday == 25.02) {
beAYearOlder;
printf("Yeah I'm 17 now");
}
return 0;
}
//I'm 17 years old everything goes very fast and I didn't even notice how fast. When I was 16 I created a unofficial devRant extension for Firefox and dome other apps. But this year I gave myself a goal for this year create an app which is complex and very useful which is my project of voice changer. Yeah but not like other ones that have defined robot voice and like so. No I will create voice changer that can take someone voice and when you say something it will say it like different person. And of course some other apps but I will see if I can do the voice thing. Wish me a luck.18
- Here we go again… a new update for devRant unofficial UWP blocked by Microsoft because contains "profanity"...
Interesting fact:
The screenshots which contain """profanity""" (probably bad words are enough to violate the rules) are still the same you can already find in the store, so even without this update they are visible...15
-
- Guys notifications for new Rant in Firefox unofficial devRant extension are coming in few days !!!!!1
- ?29
- When I created my Firefox devRant unofficial extension it was great to then add feature to it with 2 guys from here it was fun but I have fun now with my classmate creating a new game in C in SDL2 its hard as hell but fun14
-
-
- // devRant unofficial UWP update (v1.5.0.0)
I decided to release another "big" update before v2, with some interesting and useful features already present in the official clients and a completly new feature suggested by users present ONLY here ("hide notifs already seen").
I hope you enjoy it! 😉
v1.5.0.0:
- Added weekly rant banner to rants with 'wk' in tags;
- Added avatars in notif list;
- Added ability to hide notifs already seen;
- Added 'draft Rant', you can now write a Rant without posting it, it will be automatically saved and available to be posted later;
- Updated Swag Store, now always up to date;
- Updated 'Mute notifs from this rant', now except @mentions;
- Improved date format of rants;
- UI improvements;
- Minor internal changes;
-
- Wow 22 downloads on my first programming thing what I released I feel super excited for that...
- Yesterday I submitted my chrome extension to the chrome web store.
Today it got accepted!
- When you send a new update for devRant unofficial UWP to certification and it get blocked because contains profanity and inappropriate content... WTF? 😂12
- Guys if somebody here use Firefox(I know there are lot of you guys !!!) And you wanna to check devRant often but you hate to always open it close it and so on. Or you may wanna to get notified about new rant posts well why not give:... A try ? its awesome and i have posted about it long time ago so new comers may not know about this awesome addon so check it out . Also its developed by me !4
-
- Hard at work moving the unofficial devRant api documentation to GitBook.
Note: The previous link provided will stop working in due course.4
- Downloaded Lineage 13.0 and 12.1.1 and going to build both I know 13.0 will be hard because devs already fucked that device and 12.1.1 building just to have latest security patches hopefully.
So I'm probably last person who is still supporting my device :)))) Support unofficial ended before cm 13.0 was prepared for this device so no cm13.0 on my device at least i have 12.12
-
- Taken way too long to make but 1.0.3 of devRant Desktop is finally out!! :P
Download:
-
- Unknown notification error.
devRant unofficial
@JS96 @dfox
too bad... I'm sure @dfox didn't tested in production (joking... don't be too serious guys...)4
- Guys unofficial devRant extension for firefox version 2.11 is out it now finally supports Firefox Quantum (Yayyyyy).
But i had to remove the notification feature.16
- I've released my unofficial C# wrapper for the devRant public API. Feel free to check it out and contribute if you would like! Feedback is appreciated.
- I fucking hate the Internet
day before Yesterday, I was searching for a software on internet(which is not free) I found a site (unofficial) giving me both free full & trial version. so I thought, why not get the full version. I downloaded it, installed it. awesome.
everything was going great until I found out that all of my files in a folder were encrypted by some WankDecrypt. I was lucky the files in that folder were useless. but next day some mysterious links started to pop up into my browser. and today some fucking wank decentralized shit started eating up my ram. FML
Somebody fucking stuck his shit with cracked version of software. so beware devs.13
- 18 commits later, the unofficial documentation has been ported over to GitBook.
The documentation now lives in a private repo on GitHub which is hooked up to a CI tool to build the book when a commit is pushed.
This will make maintaining the documentation much easier and also allow for collaboration which was previously not possible.
Because this documentation contains some endpoints some of you might not even know about, access is provided on a invite-only basis which is controlled by @dfox.
For new requests, contact @dfox with your name and what you are planning to build.
If you have already created something with the API email me at support@nblackburn.uk with your name and a link and I will send you a invite.
-
- Just got the (unofficial) devRant app for my Mac.... feeling satisfied. 😬4
-
- The 19th day each month is the German unofficial "tax payer day". Because we start working for our own salary, not for taxes...9
- How Youtube deters unofficial downloaders.... It's really fast for 90% of the download. Then the final 10% is slow as hell...
Maybe will never actually finish... perhaps they're using the halving algo where in theory it never reaches the end4
- when will the next update roll out? (devrant unofficial uwp)
I think the changelog will be an essay.....
@JS964
- I LOVE CyanogenMod but in order to get the latest updates I have to make my own unofficial rom -_- UGHHHHH19
- Hey ranters, I made an RSS feed reader for devRant feeds. Open up your favorite RSS reader app on your device and add for devRant feeds to be shown up. The project is open sourced at.... Any suggestions welcome :)3
-
- Contest is over. I accidentally submitted broken code at the last minute so this is my unofficial placement]
-
-
- Today is the last day to sign up for the unofficial devRant hackathon.
Link:...
We only are 5, so it probably won't take place7
-
- The beta version of the new DEVRANT TOOLBOX is available now.
Its an unofficial web extension for Chrome and Firefox.
Chrome Web Store:...
Firefox:
The certifaction process takes a long time, therefore I provided a direct download for the xpi file (for side loading)....
Additional features: DUAL FRAME MODE (feeds left, rants right), themes (black, mono, darkgray, darkblue, comic, solarized), scrollbar plugin (perfect scrollbar, FF only), extended controls, fixed header, sorted userprofiles (by votes), autoreload (recent feed, 180 sec), highlighting new rants (recent feed), personal filter, image preview (mouseover), keyboard shortcuts, timestamps for rants, compact mode, colored notifs with clickable usernames, weekly rant.
I tested the extension with Windows Browsers only.
It would be great to get a feedback how it works with other systems!
Have fun with the toolbox.7
- Sneak peek at something, maybe, coming soon to the MacOS version of the unofficial devRantApp desktop client.
- Does the devRant API provide for authenticating and posting new rants?
I've found several unofficial docs that mention grabbing rants and profiles but none that I've found have a method to post rants.10
- !rant
Just dropping by to tell you guys about my unofficial app for DevRant: (alpha).
Anybody willing to contribute is very welcome :)
-
- Hey dfox, judging by their FAQ I bet you've heard of the unofficial API docs. How often would be appropriate to call the dR API? Would once every for minutes be too much?9
- My dev shitposting chat co-members are having fun coding each of our chat bots right now.
And I'm gonna release my own chat bot to my non-dev friends too. Lets see if she passes the Turing Test as me. 😂2
- i am indeed working for some fun litte projects with the Dev Rant Unofficial Wrapper API and it looks good to use it! Now. Lets make some Action!2
-
- came across them on product hunt. All are web wrappers created in electron. Fast, Smooth, Dark Mode
Unofficial Instagram -
Unofficial twitter -...
Unofficial messenger -...
What you all think?
- How do you post a message with:
devrant-unofficial-uwp
Nice split screen reader, but its not obvious how you comment to a message !12
-
-
- I wanted to explore the devRant API. But couldn't find any resource. I know it's unofficial but there has to be a minimal wiki with the endpoints available right? Could someone point me to it?4
- Whenever I update OxygenOS on my OP6 I try installing ther latest unofficial Lineage...
And if it fails... I install the full OTA ROM from fastboot.
Lineage failed... :(2
-
-
- The feeling when you find an attempt for an unofficial devRant app in your favorite framework...
*jumps* ^^
- In an unofficial iCloud help app: You'll receive 5gb free with your account. That's quite a lot - more than 4gb for example.
Well at least I don't have to setup iCloud for my family anymore.
- Is anyone running LineageOS on OnePlus 6? Apparently it's been out for awhile but seems it's still unofficial... And basically have to write the whole phone again... And there's something about needing to flash A/B partitions too with Stock OxygenOS?
But I'm already running OOS?
-
- I really hoped for an official UWL version of DevRant... At least there is a really good unofficial one!
- Finally my laptop is back and it's keyboard problem, thanks for official repair stop. Other two unofficial repair stops told me it's main board problem, FUXK THEM.
-.
- @dfox I know it's the unofficial app but I also know you guys supported it a lot. I think you should add a windows store link to the website next to ios and android ones.2
Top Tags | https://devrant.com/search?term=unofficial | CC-MAIN-2019-51 | en | refinedweb |
grahampaul1,865 Points
On the first task, I get an error with code that is not present (on line 67). Not enough chars to paste the error.
JavaTester.java:67: error: constructor Course in class Course cannot be applied to given types; Course course = new Course("Java Data Structures"); ^ required: String,Set found: String
This code doesn't exist!
package com.example.model; import java.util.List; import java.util.Set; public class Course { private String mTitle; private Set<String> mTags; public Course(String title, Set<String> tags) { mTitle = title; mTags = tags; //; } }
2 Answers
Steve HunterTreehouse Moderator 57,555 Points
Hi Graham,
The code it is referring to is the code that runs behind the scenes to test that you've addeed the correct code to the challenge. In this instance, it is calling the constructor for the
Course class and is finding an error. That's because you have amended the constructor to take a
String and a
Set of
String. It is expecting it just to take a
String` object, in this case, "Java Data Structures".
The first task asks you to initialize the
Set in the constructor. You don't need to pass it in to do that; you just want to assign something to it inside there. Take that parameter out of the constructor so that it just takes the single string parameter.
The member variable
mTags is a
Set but you'll want to initialize it as a
HashSet. First, add the
HashSet to the imports:
import java.util.HashSet;
Then, inside the constructor, assign a new
HashSet to
mTags:
mTags = new HashSet<String>();
That should do it for the first task.
Steve.
Steve HunterTreehouse Moderator 57,555 Points
Steve HunterTreehouse Moderator 57,555 Points
No problem. As long as you got through.
| https://teamtreehouse.com/community/on-the-first-task-i-get-an-error-with-code-that-is-not-present-on-line-67-not-enough-chars-to-paste-the-error | CC-MAIN-2019-51 | en | refinedweb |
slow
SBT warning: Getting the hostname was slow
Death.
A Scala method to run any block of code slowly
The book, Advanced Scala with Cats, has a nice little function you can use to run a block of code “slowly”:
def slowly[A](body: => A) = try body finally Thread.sleep(100)
I’d never seen a try/finally block written like that (without a
catch clause), so it was something new for the brain.
In the book they run a
factorial method slowly, like this:
slowly(factorial(n - 1).map(_ * n))
FWIW, you can modify
slowly to pass in the length of time to sleep, like this:
def slowly[A](body: => A, sleepTime: Long) = try body finally Thread.sleep(sleepTime)
iPhone/iOS: How to quit using cellular data when using WiFi
I live in Colorado, where cellular reception can be very hit or miss because of the mountains and rolling hills.).
My GoDaddy 4GH hosting review
Here's my GoDaddy 4GH hosting review: It sucks. A friend on Twitter warned me about it, but sadly, I didn't listen.
As a backup to that "review", here's the downtime on just one of my websites for the last several days:
Does Yahoo Mail have a memory leak?
I.
My first problem with Windows
How to slowly minimize a Mac OS X window to the Dock using the Genie effect
If<<
Apple TimeCapsule backups: Initial backup with Apple Time Capsule runs very slow over a Wireless-G network in my home network to a 500 GB Time Capsule that I just purchased, and it has been crawling along. | https://alvinalexander.com/taxonomy/term/2526 | CC-MAIN-2019-51 | en | refinedweb |
Let’s learn from a precise demo on Fitting Random Forest on Titanic Data Set for Machine Learning
Description: On April 15, 1912, the Titanic sank after colliding with an iceberg, killing 1502 out of 2224 passengers and crew. This tragedy has led to better safety regulations for ships.
Machine learning Problem : To predict which passengers survived in this tragedy based on the data given
What we will do :
1. basic cleaning for missing values in train and test data set
2. 5 fold crossvalidation
3. Model used is Random Forest Classifier
4. Predict for test data set
Let's import the library
import numpy as np import pandas as pd from sklearn.ensemble import RandomForestClassifier from sklearn import cross_validation import matplotlib.pyplot as plt %matplotlib inline
Let's import the data set
train=pd.read_csv('C:\\Users\\Arpan\\Desktop\\titanic data set\\train.csv') test=pd.read_csv('C:\\Users\\Arpan\\Desktop\\titanic data set\\test.csv')
Please have a look on post for missing values before you proceed on this.Let's create a function for cleaning
def data_cleaning(train): train["Age"] = train["Age"].fillna(train["Age"].median()) train["Fare"] = train["Age"].fillna(train["Fare"].median()) train["Embarked"] = train["Embarked"].fillna("S") train.loc[train["Sex"] == "male", "Sex"] = 0 train.loc[train["Sex"] == "female", "Sex"] = 1 train.loc[train["Embarked"] == "S", "Embarked"] = 0 train.loc[train["Embarked"] == "C", "Embarked"] = 1 train.loc[train["Embarked"] == "Q", "Embarked"] = 2 return train
Let's clean the data
train=data_cleaning(train) test=data_cleaning(test)
Let's choose the predictor variables.We will not choose the cabin and Passenger id variable
predictor_Vars = ["Pclass", "Sex", "Age", "SibSp", "Parch", "Fare", "Embarked"]
Let's choose model parameters
modelRandom = RandomForestClassifier(n_estimators=1000,max_depth=4,max_features=3,random_state=123)
Below is a nice course followed by many with good ratings on Udemy.Price is Low!! Hurry !! & buy this course and begin to learn R & Python
Let's do the 5 fold crossvalidation
modelRandomCV= cross_validation.cross_val_score(modelRandom,train[predictor_Vars],train["Survived"],cv=5)
Let's check the accuracy metric of each of the five folds
modelRandomCV
array([ 0.83240223, 0.82681564, 0.8258427 , 0.79213483, 0.85875706])
Let's see the same information on the plot
plt.plot(modelRandomCV,"p")
[<matplotlib.lines.Line2D at 0xa981470>]
Let's check the mean model accuracy of all five folds
print(modelRandomCV.mean())
0.827190493466
Let's now fit the model with the same parameters on the whole data set instead of 4/5th part of data set as we did in crossvalidation
modelRandom = RandomForestClassifier(n_estimators=1000,max_depth=4,max_features=3,random_state=123) modelRandom.fit(train[predictor_Vars], train.Survived)
RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini', max_depth=4, max_features=3, max_leaf_nodes=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=1000, n_jobs=1, oob_score=False, random_state=123, verbose=0, warm_start=False)
predictions=modelRandom.predict(test[predictor_Vars]) | https://analyticsdataexploration.com/random-forest-for-data-analytics-in-python/ | CC-MAIN-2019-51 | en | refinedweb |
Closed Bug 426732 Opened 12 years ago Closed 12 years ago
Implement -moz-nativehyperlinktext
Categories
(Core :: Widget, defect)
Tracking
()
mozilla1.9.1a1
People
(Reporter: faaborg, Assigned: fittysix)
References
Details
(Keywords: access, Whiteboard: [not needed for 1.9])
Attachments
(7 files, 8 obsolete files)
Currently hyperlinks in Firefox, both in chrome and content, are hard coded to (0,0,255). We use native platform colors for hyperlinks instead of full blue to give the product a softer and more integrated feel.
Flags: blocking-firefox3?
Example of (0,0,255) hyperlinks in chrome
Example of default visited and unvisited link colors for hyperlinks in the content area.
Example of the softer blue used in vista for hyperlinks, this case in the content area.
Example of the softer color for hyperlinks in the content area, from windows media player.
In case added code is in the scope of this bug, in GTK 2.10, Widgets have style attributes "visited-link-color" and "link-color" which would be perfect for this
Here is an example of hyperlinks in mail.app
There is a HotTrack system color for windows.
After a little digging I found COLOR_HOTLIGHT: I'm not sure what it has to do with hotlights :P, but it does seem to return the right color on vista, and from what I've found this was implemented first in windows 98. This patch just sticks it in with the CSS user defined system colors, which is probably not quite the right way to do this. I'm not sure what winCE will do with this color either, so it might have to be #ifndefed out of that one. The next step I suppose is to just make -moz-hyperlinktext default to COLOR_HOTLIGHT, but change when the user changes it.
Comment on attachment 314007 [details] [diff] [review] Implement COLOR_HOTLIGHT v1 Neil, is this the right way to expose the property? This isn't a blocker, but I don't think it's any worse for us to hard code the right colours if possible. Would take a minimally invasive patch, which might not be easy since I'm not sure that this is a theme thing as opposed to a widget thing :(
Attachment #314007 - Flags: review?(enndeakin)
Flags: wanted-firefox3+
Flags: blocking-firefox3?
Flags: blocking-firefox3-
Comment on attachment 314007 [details] [diff] [review] Implement COLOR_HOTLIGHT v1 Use '-moz-linktext' rather than 'hotlight' to be clearer and as it isn't a css defined value. Also, add the color to nsXPLookAndFeel.cpp for preference support. And of course, fix up the alignment in nsILookAndFeel.h
Attachment #314007 - Flags: review?(enndeakin) → review-
we already have a -moz-hyperlinktext, wouldn't -moz-linktext be a little redundant and confusing? I was thinking maybe -moz-nativelinktext or something (which could be filled by link-color on linux for example) or just filling -moz-hyperlinktext with %nativecolor by default, but somehow changing it to user pref when the user changes it. I was also wondering, since nsCSSProps is global, what would happen on linux/mac? I found color_hotlight is also not implemented on NT4, which (as of firefox2 anyways) is still on the supported OS list.
I think we have a couple options: 1) Default browser.anchor_color (and therefore -moz-hyperlinktext) to the appropriate value per-os. This is certainly the easiest way afaik, just a few #ifs. Personally I think this would be a decent solution, since it's user configable anyways. 2) Implement native colors. We know windows provides us with one, which is editable using the Advanced Appearance dialog, apparently GTK gives us one (what does that do on kde?), and I can't find anything on OSX having such a property or not. It seems accessibility is brought up in every bug about theme/color, but windows at least does use different Hyperlink colors on high contrast themes, which is something to consider. We might be able to do a bit of both though, since the hard coded default of -moz-hyperlinktext is defined right here: we just need to set that to %nativecolor when applicable using #ifdef I think? (or does this get overridden by a default pref?) We wouldn't even need to add a CSS keyword for that one.
Addresses everything in Comment #10, but with a different CSS keyword due to my concerns in Comment #11 To set this as the default color of -moz-hyperlinktext you simply set browser.anchor_color to -moz-nativelinktext since colors in user prefs are able to take CSS colors. This is the easiest way I've found to set default native colors until the user specifies otherwise, albeit a strange and somewhat round-about way. I was going to make a patch for this too, but I didn't know if this should be done in firefox.js or all.js. Either way it's a simple #ifdef XP_WIN to set the pref for now anyways, could change if we do other OS native link colors. Interesting tidbit since I'm sure everyone is now wondering: setting browser.anchor_color to -moz-hyperlinktext produces some strange seemingly random color, possibly some pointer being turned into a color?
Attachment #314007 - Attachment is obsolete: true
Attachment #314512 - Flags: review?(enndeakin)
Wait, setting browser.anchor_color to -moz-nativelinktext doesn't work. It works in the preferences > content > colors window properly, but it doesn't work right in the actual content. That patch will give us a CSS color for native color hyperlinks on windows, but there's still no way to default to that native color.
My code editors hate me :/ I think I've finally found the pref that's causing tabs to look like spaces. Fixes the alignment, see comment #13 for the rest
Attachment #314512 - Attachment is obsolete: true
Attachment #314528 - Flags: review?(enndeakin)
Attachment #314512 - Flags: review?(enndeakin)
Might as well get the ability to pull the colour, even if we're not going to use it immediately. Hurrah for third party themers.
Comment on attachment 314528 [details] [diff] [review] Implement COLOR_HOTLIGHT v1.1.1 need sr for this...
Looks OK to me but a style system person should sign off on it. I would want a comment somewhere explaining how this differs from -moz-hyperlinktext ... it seems the answer is "this one is not affected by user preferences".
In addition to what roc said, you should document what background color it goes against.
Should that be documented in code or MDC? probably both? I'm guessing a quick blurb in code like //extracted native hyperlink color from the system, not affected by user preferences MDC should probably mention that it's good on Window, ThreeDFace and -moz-dialog There is no official MSDN documentation on what background this color should be used on, I've just seen it personally on those in native chrome on windows. I'm also thinking that we should take this alongside the gnome color if we can get it (I don't really know how to do that one), and if we can't find anything for OSX then we should probably just return the user pref or a hard coded color that matches the screenshot in attachment 313314 [details]
Assignee: nobody → fittysix
So, does this make sense/work? I don't fully understand the workings of GTK or mozilla widget GTK code, but from what I can tell this appears to be correct. I would build & check this myself, but I don't have a linux machine that can build mozilla. file: case eColor__moz_nativelinktext: //"link-color" is implemented since GTK 2.10 #if GTK_CHECK_VERSION(2,10,0) GdkColor colorvalue = NULL; gtk_widget_style_get(MOZ_GTK_WINDOW, "link-color", &colorvalue, NULL); if (colorvalue) { aColor = GDK_COLOR_TO_NS_RGB(colorvalue); break; } // fall back to hard coded #endif //GTK_CHECK_VERSION(2,10,0) aColor = NS_RGB(0x00,0x00,0xEE); break; Specifically I'm not sure I'm calling gtk_widget_style_get correctly. I used MOZ_GTK_WINDOW because I think this is the GtkWidget for a window, and the link-color of a window is probably exactly what we want. It defaults to #0000EE because that's the current value of most hyperlinks. roc? I believe you're the guy to ask for Widget - GTK
(In reply to comment #21) bah, I forgot the * for the pointer in the definition and aColor assignment
I meant document in code comments, and this latest patch doesn't do that yet. We do need Mac and GTK2 code for this. We should also see the code to actually use this in Firefox. Michael Ventnor can probably test/finish the GTK2 path.
Adds comments, and hard codes the mac color (assuming the underline is the non-aliased color) I searched and can not find any reference to a link, hyperlink or anchor text color anywhere in cocoa or as a kThemeTextColor*. There might be something if we hooked in to webkit, they have a -webkit-link, but I'm guessing we don't want to do that.
Attachment #314528 - Attachment is obsolete: true
Attachment #315439 - Flags: superreview?(roc)
Attachment #315439 - Flags: review?(dbaron)
Looks good, but still needs GTK love.
>I searched and can not find any reference to a link, hyperlink or anchor text >color anywhere in cocoa or as a kThemeTextColor*. There might be something if >we hooked in to webkit, they have a -webkit-link I couldn't find this information either. As far as I can tell Safari doesn't use platform native hyperlink colors, and instead defaults to (0,0,255).
I've gotten this far with the GTK version: This still doesn't work (link goes red, colorValuePtr remains null) and I can't figure out why. I'm attempting this on Kubuntu in KDE, but GTK should be all the same.
Michael V, can you help here?
Ryan, looking at your pastebin, that code doesn't belong there. It must go in InitLookAndFeel() and the resulting color saved to a static variable. I think the problem is he hasn't realized the window; this will be fixed (and a LOT of code removed) if he moves the code to InitLookAndFeel() and reuses the widgets and local variables there. I could be wrong though, but he should still move the code regardless. Also, I think colorValuePtr needs to be freed like what was done with the odd-row-color code. We also don't need the 2.10 check, we require 2.10 now.
It works! And As far as I can tell, it has been working for some time now :/ Even the version on that pastebin works (which I meant to mention, was written that way for testing purposes, I was going to do it this way once I actually got something working.) Finding a gtk2 theme that actually specifies a link-color has proven more difficult than pulling it out. (I ended up just editing the gtkrc file of Raleigh) Interesting note: as far as I can tell any GtkWidget will work, we could use an already defined widget rather than creating the gtklinkbutton, but it is possible AFAIK to specify link-color per widget, so this is probably the best way to do it.
Attachment #315439 - Attachment is obsolete: true
Attachment #315439 - Flags: superreview?(roc)
Attachment #315439 - Flags: review?(dbaron)
Attachment #316102 - Flags: superreview?(roc)
Attachment #316102 - Flags: review?(dbaron)
Why don't you move the gdk_color_free call to within the first check for colorValuePtr?
Good point, I thought of that as I was going over the diff, but left it like that because that's how the treeview was done. Now that I look over the treeview again though I see there's 2 instances where it could be used, which is obviously why it was done that way there.
Attachment #316102 - Attachment is obsolete: true
Attachment #316135 - Flags: superreview?(roc)
Attachment #316135 - Flags: review?(dbaron)
Attachment #316102 - Flags: superreview?(roc)
Attachment #316102 - Flags: review?(dbaron)
This patch is separate, but obviously depends on the other one. I had to add a special case in the part where it reads the pref, but I don't know if there's a better way to do this. It might be worth changing MakeColorPref to recognize named colors, but this is so far the first time we've wanted to do this.
Whiteboard: [has patch][need review dbaron][needed for blocking bug 423718]
I suppose I never did document which background colors this goes against, but since that isn't done on any of the other native colors I think it's something best left for MDC. On there we can note that there is no official documentation specifying a safe background color, but that it's used on dialogs and windows in native OS apps.
Attachment #316135 - Attachment is obsolete: true
Attachment #317890 - Flags: review?(dbaron)
Attachment #316135 - Flags: review?(dbaron)
the old one still works with fuzz, but might as well attach this
Attachment #317890 - Attachment is obsolete: true
Attachment #319518 - Flags: review?(dbaron)
Attachment #317890 - Flags: review?(dbaron)
Whiteboard: [has patch][need review dbaron][needed for blocking bug 423718] → [not needed for 1.9][has patch][need review dbaron]
What about comment 19? (I didn't go through the other comments in detail to check that they were addressed; I just found one that wasn't addressed.)
(In reply to comment #36) > What about comment 19? I kind of addressed it in comment 34. If it should be documented in code; is nsILookAndFeel.h the best place for that with the other comments?
Comment on attachment 319518 [details] [diff] [review] -moz-nativelinktext v1.6 unbitrot 2 >+ eColor__moz_nativelinktext, //hyperlink color extracted from the system, not affected by user pref Could you call this value (and the corresponding CSS value) nativehyperlinktext rather than nativelinktext? I think that's more consistent with existing names. (Or was there a reason you chose what you did?) > // Colors which will hopefully become CSS3 Could you add your new color to the end of the section below this comment, rather than above it? And could you add an item to layout/style/test/property_database.js testing that the new value is parsed? r=dbaron with those changes (I don't seem to have permission to grant the review flag you requested, though I can grant the other review flag) I'm presuming that roc's superreview+ above means he reviewed the platform widget implementations; if not, somebody with some platform knowledge should review that.
Attachment #319518 - Flags: review?(dbaron) → review+
Yeah, the platform widget implementations are reviewed.
(In reply to comment #38) > And could you add an item to layout/style/test/property_database.js testing > that the new value is parsed? I've looked at this file, and tbh I'm not entirely certain on how to add this property to it. If I understand it correctly the Initial value would be entirely dependent on your system settings, so I could only guess on the initial values. For other_values the only thing that it wouldn't really be is transparent/semi-transparent. And the only invalid thing is non-longhand colors, since it should always return six character colors. With that, based on other items in the file I have this, but I don't know if it's correct: "-moz-nativehyperlinktext": { domProp: "MozNativeHyperLinkText", inherited: false, type: CSS_TYPE_LONGHAND, initial_values: [ "#0000ee", "#144fae", "#0066cc", "#000080", "#0000ff" ], other_values: [ "transparent", "rgba(255,128,0,0.5)" ], invalid_values: [ "#0", "#00", "#0000", "#00000", "#0000000", "#00000000", "#000000000" ] Other than that I've made the other changes mentioned, though I will wait for feedback on this before posting a patch. It was named that way mostly due to the suggestion in comment #10, but nativehyperlinktext makes more sense.
David, comment 40 is for you.
-moz-nativehyperlinktext isn't a property; it's a value for existing properties. You should add two items to the "other_values" line for the "color" property and then check that the mochitests in layout/style/test still show no failures.
(In reply to comment #37) > I kind of addressed it in comment 34. If it should be documented in code; is > nsILookAndFeel.h the best place for that with the other comments? Yes.
All comments should be addressed, I rearranged stuff to make a little more sense (these changes are at the bottom of the appropriate list unless they should be elsewhere), added a comment on the mac color in-line with other comments in the file, and clarified exactly which user pref it is not affected by.
Attachment #319518 - Attachment is obsolete: true
Keywords: checkin-needed
Whiteboard: [not needed for 1.9][has patch][need review dbaron] → [not needed for 1.9]
Status: NEW → RESOLVED
Closed: 12 years ago
Keywords: checkin-needed
Resolution: --- → FIXED
Target Milestone: --- → Firefox 3.1a1
Added to: Hope that's right.
Bug 371870 and bug 437358 are about using the native color in chrome. Please file a new bug for using it in content.
Component: Theme → Widget
Flags: wanted-firefox3+
Flags: blocking-firefox3-
Product: Firefox → Core
Summary: Use native platform colors for hyperlinks both in chrome and content → Implement -moz-nativehyperlinktext
Target Milestone: Firefox 3.1a1 → mozilla1.9.1a1
Attachment #329015 - Attachment description: -moz-nativelinktext v1.7 → -moz-nativehyperlinktext v1.7 | https://bugzilla.mozilla.org/show_bug.cgi?id=426732 | CC-MAIN-2019-51 | en | refinedweb |
vmod_rewrite aims to reduce the amount of VCL code dedicated to url and
headers manipulation. The usual way of handling things in VCL is a long list of
if-else clauses:
sub vcl_recv { # simple regex if (req.url ~ "pattern1") { set req.url = regsub(req.url, "pattern1", "subtitute1"); # case insensitivity else if (req.url ~ "(?i)PaTtERn2") { set req.url = regsub(req.url, "(?i)PaTtERn2", "subtitute2"); # capturing group else if (req.url ~ "pattern([0-9]*)") { set req.url = regsub(req.url, "pattern([0-9]*)", "subtitute\1"); ... }
Using
vmod_rewrite, the VCL boils down to:
import rewrite; sub vcl_init { new rs = rewrite.ruleset("/path/to/file.rules"); } sub vcl_recv { set req.url = rs.replace(req.url); }
with
file.rules containing:
"pattern1" "subtitute1" "(?i)PaTtERn2" "subtitute2" "pattern([0-9]*)" "subtitute\1" ...
This is specially useful to clean URL normalization code as well as redirection generation. Thanks to the object-oriented approach, you can create multiple rulesets, one for each task, and keep your VCL code clean and isolated.
OBJECT ruleset(STRING path=NULL, STRING string=NULL, INT min_fields = 2,
ENUM {any, regex, prefix, suffix, exact, glob, glob_path, glob_dot} type =
"regex", ENUM {quoted, blank, auto} field_separator = "quoted")
Parse the file indicated by
path or contained in
string and
create a new rules object. This stores all the rewrite rules described in
the file.
The file list all the rules, one per line, with following format (except if
type=any)::
PAT [SUB...]
If
type=any, a first field is inserted to give the type for that line:
TYPE PAT [SUB...]
PAT and SUBs are both quoted strings, TYPE is not, and all are separated by whitespaces. PAT is the regex to match and SUB the string to rewrite the match with. Empty lines and those starting with “#” are ignored.
TYPE (in the rule file) and
type (as function argument) can be:
regex: PAT is matched as a regular expression.
prefix: PAT is a string that tries to match the beginning of the target.
suffix: PAT is a string that tries to match the end of the target.
exact: PAT is a string and tries to match the full string.
glob: PAT is matched as a wilcard (
*matches any group of characters)
glob_path: Same as
glob, but
*doesn’t match slashes (useful to match paths).
glob_dots: Same as
glob, but
*doesn’t match dots (useful to match IP adresses).
any: use the first field in the rule file to decide (can’t be used in the rule file).
min_fields dictate how many strings each line should contain (not including
TYPE), if that minimum isn’t reached, the call will fail and the VCL won’t load.
field_separator specifies how the strings are quoted:
quoted: double-quotes delimit a string and are not included in said string.
blank: string starts with its first non-whitespace character, and end with its last.
auto: each word in the ruleset can use either
quoted(starts with double-quotes) or blank (starts with anything else).
This method is called in
vcl_init and you can create as many objects as you
need:
sub vcl_init { new redirect = rewrite.ruleset("/path/to/redirect.rules"); new normalize = rewrite.ruleset(string = {" # this is a comment pattern1 subtitute1 (?i)PaTtERn2 subtitute2 pattern([0-9]*) subtitute\1 "}, fied_separator = auto); }
VOID .add_rules(STRING path=0, STRING string=0, ENUM {any, regex, prefix,
suffix, exact, glob, glob_path, glob_dot} type = "regex", ENUM {quoted, blank,
auto} field_separator = "quoted")
Add rules to an existing ruleset. This is a convenience for split-VCL setups where rules need to be centralized in a single ruleset, but initialized in multiple places to be co-located with related logic.
This function can only be called from
vcl_init. Just like ruleset
constructors,
path and
string arguments are mutual-exclusive. When
rules are added in multiple places, they are then treated in the same order
they were added. It is possible to both specify rules in a ruleset constructor
and then add more rules.
BOOL .match(STRING)
Returns
true if a ruled in the rule set matched the argument,
false
otherwise.
Example:
import rewrite; sub vcl_init { new rs = rewrite.ruleset(string = {" "^/admin/" "^/purge/" "^/private" "}, min_fields = 1); } sub vcl_recv { if (rs.match(req.url)) { return (synth(405, "Restricted"); } }
STRING .rewrite(INT field = 2, ENUM {regsub, regsuball, only_matching} mode = "regsuball")
.rewrite() is called after
.match(), and applies the previously matched
rule, skipping the lookup operation.
By default, the first substitute string (index 2 of the rule definition) is
used, but you can specify a different
field if needed. If the field doesn’t
exist, the string is not rewritten.
mode dictates how the string should be rewritten:
--only-matchingoption of GNU grep.
For example, considering this rule::
"bar" "qux"
and the string “/foo/bar/bar”:
You can use this function to retrieve multiple values associated to one rule:
import std; import rewrite; sub vcl_init { new rs = rewrite.ruleset(string = {" # pattern ttl grace keep "\.(js|css)" "1m" "10m" "1d" "\.(jpg|png)" "1w" "1w" "10w" "}); } sub vcl_backend_response { # if there's a match, convert text to duration if (rs.match(bereq.url)) { set beresp.ttl = std.duration(rs.rewrite(0, mode = only_matching), 0s); set beresp.grace = std.duration(rs.rewrite(1, mode = only_matching), 0s); set beresp.keep = std.duration(rs.rewrite(2, mode = only_matching), 0s); } }
STRING .match_rewrite(STRING, INT field = 2, ENUM {regsub, regsuball, only_matching} mode = "regsuball")
This is a convenience function combining the
.match() and
.rewrite()
methods:
redirect.match_rewrite(req.url, field = 3, mode = regsuball);
is functionally equivalent to:
redirect.match(req.url); redirect.rewrite(field = 3, mode = regsuball);
You can use it to apply the first matching rewrite rule to a string:
import rewrite; sub vcl_init { new rs = rewrite.ruleset(string = {" "^(api|www).example.com$" "example.com" "^img(|1|2|3).example.com$" "img.example.com" "temp.example.com" "test.example.com" "}); } sub vcl_recv { # normalize the host set req.url = rs.match_rewrite(req.url); }
STRING .replace(STRING, INT field = 2, ENUM {regsub, regsuball, only_matching} mode = "regsuball")
.replace() is deprecated. Please use
.match_rewrite() instead.
vmod_rewrite is available starting from version 4.1.7r1.
This vmod is packaged directly in the
varnish-plus package.
The package contains further installation and usage instructions, accessible
via
man vmod_rewrite.
Contact support@varnish-software.com if you need assistance. | https://docs.varnish-software.com/varnish-cache-plus/vmods/rewrite/ | CC-MAIN-2019-51 | en | refinedweb |
US20120159163A1 - Local trusted services manager for a contactless smart card - Google PatentsLocal trusted services manager for a contactless smart card Download PDF
Info
- Publication number
- US20120159163A1US20120159163A1 US13/235,375 US201113235375A US2012159163A1 US 20120159163 A1 US20120159163 A1 US 20120159163A1 US 201113235375 A US201113235375 A US 201113235375A US 2012159163 A1 US2012159163 A1 US 2012159163A1
- Authority
- US
- United States
- Prior art keywords
- secure element
- software application
- smart card
- tsm
- contactless smart
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 230000015654 memory Effects 0 claims description 145
- 230000004044 response Effects 0 claims description 22
- 238000004891 communication Methods 0 claims description 14
- 230000000875 corresponding Effects 0 claims description 10
- 238000004422 calculation algorithm Methods 0 abstract description 8
- 238000000034 methods Methods 0 description 16
- 238000000638 solvent extraction Methods 0 description 13
- 238000003860 storage Methods 0 description 12
- 238000009434 installation Methods 0 description 7
- 239000002609 media Substances 0 description 6
- 239000000203 mixtures Substances 0 description 6
- 238000005365 production Methods 0 description 5
- 238000004590 computer program Methods 0 description 4
- 239000004065 semiconductor Substances 0 description 4
- 230000001276 controlling effects Effects 0 description 3
- 230000000977 initiatory Effects 0 description 3
- 230000003993 interaction Effects 0 description 3
- 238000004519 manufacturing process Methods 0 description 3
- 238000005192 partition Methods 0 description 3
- 230000001960 triggered Effects 0 description9000003795 chemical substance by application Substances 0 description 2
- 238000009826 distribution Methods 0 description 2
- 230000004048 modification Effects 0 description 2
- 238000006011 modification Methods 0 description 2
- 230000002633 protecting Effects 0 description 2
- 230000004224 protection Effects 0 description 2
- 230000003213 activating Effects 0 description 1
- 239000003570 air Substances 0 description 1
- 230000002950 deficient Effects 0 description 1
- 230000018109 developmental process Effects 0 description 1
- 239000003999 initiator Substances 0 description 1
- 239000010410 layers Substances 0 description 1
- 239000010912 leaf Substances 0 description 1
- 238000010295 mobile communication Methods 0 description 1
- 230000001264 neutralization Effects 0 description 1
- 230000003287 optical Effects 0 description 1
- 229920001690 polydopamine Polymers 0 description 1
- 238000004513 sizing Methods 0 description 1
- 238000005476 soldering
Systems, methods, computer programs, and devices are disclosed herein for deploying a local trusted service manager within a secure element of a contactless smart card device. The secure element is a component of a contactless smart card incorporated into a contactless smart card device. An asymmetric cryptography algorithm is used to generate public-private key pairs. The private keys are stored in the secure element and are accessible by a trusted service manager (TSM) software application or a control software application in the secure element. A non-TSM computer with access to the public key encrypts and then transmits encrypted application data or software applications to the secure element, where the TSM software application decrypts and installs the software application to the secure element for transaction purposes.
Description
- This application claims priority to U.S. Provisional Patent Application No. 61/424,604, filed Dec. 17, 2010 and entitled “Systems And Methods For Deploying A Trusted Service Manager Locally In A Contactless Payment Device.” The entire contents of the above-identified priority application are hereby fully incorporated herein by reference.
- The present disclosure relates generally to computer-implemented systems, methods, and devices for partitioning the namespace of a secure element in contactless smart card devices and for writing application data in the secure element using requests from a software application outside the secure element.
- Contactless transaction systems use secure contactless smart cards for transaction purposes. Certain card type devices are enabled using electronic components, such as an antenna and secure memory, and supporting semiconductor components, such as a memory management unit, a processor, and a cryptographic generator.
- The different types of software application or application data memory areas include random access memory (RAM), read only memory (ROM), and non-volatile flash memory. These memory areas are typically secure memory areas and store all the secure information required to operate software applications for access, membership, or payment purposes. Certain low end contactless smart cards may not offer significant processing capabilities; these smart cards are often passive and transmit a radio frequency with information from a passive memory. Further, each secure memory area is assigned specific application functions, which are included in the secure element area within the contactless smart card.
- only by certain card readers. In many widely used contactless transaction cards, such as the MIFARE Classic®, a limited amount of resources are available within the smart card to enable further development. For example, on a 4 KB card, a requirement exists that all of the 4 KB should be active within the card at any given time.
- In some secure element namespaces, also referred to as “memory areas” within contactless cards, the available memory is statically partitioned, and the partitions are further encoded in the card reader. Eventually, the card reader reads only from the pre-determined partitions. This division of an already over-subscribed namespace results in frequent collisions, and therefore, anti-collision protocols that further reduce available memory space. Further, limited security protocols are enforced for cards that do not have any processor capabilities. This enforcement may reduce the security options within the card and the card readers compared to, for example, EMV type cards that are commonly used for credit card applications.
- Some software applications may limit information stored within the card, as well as the control of the information to owners of the secure keys. On a contactless smart card that includes multiple applications, conflicts and errors arise as a result of shared memory. Further, if the second company needs to protect a part of the data on the card, this protection will not be possible as one key does not offer security over-riding another key. The limited application space, data space, and security with multi-party interests are deficient in the current application. Further, the access keys on the card cannot be updated without permission of key “B” owners.
- In certain exemplary embodiments, a computer-implemented method for implementing a trusted service manager (TSM) locally within the secure element of a contactless smart card device comprises installing, in the secure element of the contactless smart card device, a TSM software application, wherein the TSM software application comprises computer code for executing a transmitting function to request application data and a decrypting function for decrypting an encrypted form of received application data, the received application data received at the contactless smart card device in response to the request from the transmitting function; storing, in the secure element, a private key assigned to the TSM software application, the private key generated along with a corresponding public key using an asymmetric cryptography algorithm; transmitting, by the transmitting function of the TSM software application to one a plurality of registered remote non-TSM computers, a request for application data, wherein the remote non-TSM computer is configured to access the public key, and wherein the remote non-TSM computer encrypts the requested application data with the public key; receiving, in the contactless smart card device, the encrypted application data responsive to the transmitted request; and decrypting, by the decrypting function of the TSM software application, using the private key, the encrypted application data.
FIG. 1illustrates a computer-implemented system and device for partitioning the namespace of a secure element in contactless smart card devices and for writing application data in the secure element using requests from a software application outside the secure element according to certain exemplary embodiments. FIG. 2illustrates a computer-implemented system and device for partitioning the namespace of a secure element in contactless smart card devices and for writing application data in the secure element using requests from a software application outside the secure element according to certain exemplary embodiments. FIG. 3illustrates a data structure for the namespace of a secure element in contactless smart card devices and the application data associated with the control software application which controls the partitioning and storage of application data in the secure element namespace according to certain exemplary embodiments. FIG. 4illustrates a computer-implemented method for partitioning the namespace of a secure element into at least two storage types by a control software application within the secure element according to certain exemplary embodiments. FIG. 5illustrates a computer-implemented method for writing application data in a secure element namespace using requests from a user-interface software application resident outside the secure element according to certain exemplary embodiments. FIG. 6illustrates a computer-implemented method for implementing a trusted service manager (TSM) locally within the secure element of a contactless smart card device according to certain exemplary embodiments. FIG. 7illustrates a computer-implemented method of controlling access to the secure element namespace for partitioning and provisioning purposes according to certain exemplary embodiments.
-.
- The application identifier (AID) within the contactless smart card is a 16 bit code divided into a function cluster and an application code, each of 8 bit length. Key “A” or the “A Key” of sector 0 is a public sector, certain contactless smart cards or implementations of a contactless smart card, the secure element namespace can be divided into different partitions for different card types, including different card protocols or platforms, for example, EMVCo on JavaCard platform, near field communication (NFC) for proximity sensing, or MIFARE. In one embodiment, the secure element namespace is virtually divided into sectors, where each sector includes 4 memory blocks that are each 16 bytes in length, with the options for sector sizing different from the default of the 16 bytes. are used to access the memory blocks. The remainder of the memory blocks in a sector is data memory blocks that contain application data or the software application. Software applications also can” also one or more additional software applications issuer that include AIDs of unique applications on the card and the access memory block which provides access to the application directory table. The manufacturer's data contactless smart card device incorporates the contactless smart card and provides a user-interface software application access to the contactless smart card. Exemplary contactless smart card devices include smart phones; mobile phones; application's provider, for example, a transit office, a different set of access keys, key A and key B, which allow the station information entry and exit to be registered. A value of charge is calculated between the entry and exit stations and is then APIs may be supported by the runtime environment of the secure element or the contactless smart card device that hosts the secure element. is partitioned into two storage types by a control software application installed within the secure element namespace. The control software application may be installed within a physically or virtually different secure element namespace, where the physically different secure element namespace includes a secure communication channel with the secure element namespace of the memory block and sector structure disclosed herein. By way of an example,, the secure element namespace is partitioned into a sold memory block or a sold slot (SSLOT) and a rented memory block or a rented slot (RSLOT). Further, the SSLOT or the RSLOT may be a group of memory blocks that form the sector, or a group of memory blocks across multiple sectors. The SSLOT is a sold slot and may be contractually sold by a contactless smart card manufacturer to a card issuer. The card issuer then deploys software applications that are owned by a software application provider into the card for an end-user to use. By way of an example, a phone service provider may perform the role of the contactless smart card manufacturer while issuing a SIM or a UICC card, where the SIM or UICC card includes the secure element. The RSLOT is a slot that may be rented to the second party card user. The software application provider is an organization that utilizes custom applications within the card for such operations as financial transactions, secure authentication, ticketing, or coupons. The card issuer sets the applications and values within the allocated rented or sold SLOTs and assigns a card reader to make changes to the values within the applications in the card.
- In certain exemplary embodiments, the allocation of slots is determined by the allocation of sectors, access bits, and access keys. For example, the RSLOT can comprise rented sectors and memory blocks in the secure element namespace, rented to a to a software application provider, along with key A authentication and associated access bits for the rented sectors and memory blocks. Alternatively, multiple software application providers may partner together or may individually prefer to maintain full control of their data and life-cycle management mechanisms on their software applications and application data, wherein the complete control of the life-cycle, from download and installation to use and update is controlled using key B provided by the card issuer. An exemplary application in such a scenario is a disconnected refill station for adding card value for a transit card; this process may need key B to access sensitive data memory blocks in certain sectors of the contactless smart card. To satisfy the demands of these software application providers, the card issuer also can share SSLOT access keys with the software application provider.
- In certain exemplary embodiments, an SSLOT (sold slot) portion of the namespace may be fully surrendered to a second party, where the SSLOT portion includes key B for select sectors in the secure element namespace. Further, SSLOT for the entire secure element namespace may be provided to the software application provider by providing the same access keys for all the sectors in the contactless smart card. While yielding control of an SSLOT, the card issuer allows a service provider to access both key B and key A for the SLOT. As part of the SSLOT contract, the second party may not rotate key B without the explicit consent of the control software application (or a JavaCard based control applet) located within the secure element. The control software application is owned and deployed in the secure element by the card issuer. The intent of this arrangement is to allow the control applet to dynamically swap SSLOTS in and out of the sector per the requirements of the software applications provider. Further, when an end-user installs multiple software applications in the contactless smart card device incorporating the contactless smart card, the end-user will be provided with an option to activate certain software applications for transaction purposes, even when the secure element namespace is crowded. In an exemplary embodiment, an external secure memory may be used as a temporary memory for loading software applications that are inactive. The external secure memory may also be incorporated within the existing secure element with a different directory structure, inaccessible to external card reader devices.
- In certain exemplary embodiments, the contract between the card issuer and the software application provider who may be a second party partner of the card issuer is a contract based on a service level agreement (SLA) and business rules between the card issuer and the software application provider. The SLA defines the limitations, the transactions, and the processes that arise from the sharing of access keys to enable the SLOT transfer automatically or manually. The external rotation of keys B in the block (in other words, contactless interaction) may depend on the stipulations in the SLA. Any violation of the SLA would imply an absence of technical means to reclaim the sold portion of the namespace. This function distinguishes SSLOTS and RSLOTS. Because the SSLOT surrenders critical control to the second party partner, the sharing may be enforced via the SLA to highly valued and trusted partners.
- To make the most of the limited secure element namespace, the card issuer can “sell” as few slots as possible, while reserving as much of the namespace as possible for RSLOTs. To best utilize the reserved RSLOTs, in certain exemplary embodiments, the card issuer uses a system that allows dynamic mapping (instead of statically partitioning) of the RSLOTs. The intent of the card issuer is to make the RSLOT namespace directly manageable by a wallet software application or a user-interface software application on the contactless smart card device that interacts with the control software application in the secure element of the contactless smart card incorporated in the device. By way of an example, an end-user, on a contactless NFC enabled mobile phone, uses the wallet software application to interact with the control applet within the smart card, thereby enabling the end-user to control certain aspects in a multi-application environment in the crowded secure element namespace.
- When managing the namespace, the control applet maintains copies of the entire set of A and B keys that have access to the namespace. The possession of the A and B keys for all the sectors provides the card issuer with the flexibility to dynamically manage the namespace. The dynamic management may be applicable via a remote software application resident on a remote computer, such as a trusted service manager (TSM) in a remote server, the TSM owned or controlled by the card issuer. The wallet software application may also be used to dynamically swap software application and application data in and out of the namespace areas based on various parameters, such as user action and/or location, using the access key provided by the control applet. For example, in a transit system, if the contactless smart card device end-user travels from one location to another, the smart card credentials applicable to the first location transit system is swapped for the second location transit credentials, and the card is available for use in the second location.
- To leverage the application directory (AD), the card issuer, in one exemplary embodiment, uses a modified version of an application directory that occupies a specific part of the card block and uses a card issuer defined namespace in the secure element. Further, a card issuer application directory implementation can support the ability to provision directly from the wallet software application (user-interface software application), thereby swapping the contents of a secure element namespace in a dynamic manner without additional external permissions.
- In certain exemplary embodiments, the wallet software application may be deployed by the card issuer or the software application provided, where a software application and the control applet within the secure element can collaborate or interact with each other for accessing the namespace without using an intermediary trusted secure manager (TSM) for external authentications. To securely support the provisioning of sensitive data, the control applet supports asymmetric public/private key encryption. The control applet includes both keys in a secure memory within, or external to, the secure element, and will only make the public key available outside of the secure element.
- The control applet acts as an extension of the wallet software application on the secure element and supports such features as an EMVCo compliance in the secure element. The control applet can accept commands directly from the wallet software application, thereby supporting two types of storage (SSLOTs and RSLOTs); managing the access keys for RSLOTs and SSLOTs; exporting/rotating of the access keys to second party partners or software application providers for SSLOTs; supporting a public/private key pair so that encrypted instructions may be received from second party devices; provisioning data to the slots where the provisioning is without the TSM or a trusted service agent (TSA) software for provisioning application data, software application, or access keys to the secure element; dynamically swapping of the access keys and data for both SSLOTS and RSLOTs to support over-subscription of the namespace; and implementing a proprietary version of the AD that locates the root AD block in a card issuer's specified location. In certain exemplary embodiments, this can be defined to be at the position directly after the first kilobyte of memory (sector 16) of the smart card.
- The card issuer application directory in the namespace can be managed entirely by the card issuer. The control software application also can initialize the keys for the block, while always retaining a copy of A and B keys for all the slots (or sectors) in the namespace. In the case of SSLOTs, the possession of a valid B key may be contractually, as opposed to, technically enforced. In one example, automatic implementation via SLA and business policies from a non-TSM server or agent may be implemented. The wallet software application initiates the provisioning of all instruments to the contactless smart card device, which is an example of “pull provisioning.” Further, the wallet software application initiates all non-contactless transactions and can allow push notifications for already provisioned software applications, in which case the wallet software application may subsequently initiate the requested transaction.
- Upon installing the control software application on the secure element, the control software application may typically rotate a set of derived keys into the block and saves the keys, thereby defining the secure memory within the secure element namespace. The access keys can be derived using a combination of a master key, a unique identifier (UID), and the secure element CPLC (card production life-cycle data), each provided by the card issuer or manufacturer. The sectors are then partitioned according to the settings of the access keys and the access bits assigned to each sector. The first 1 kilobyte of the block can be reserved for transit and these sectors may be distributed as SSLOTs or rented as RSLOTs. Either way, the reserved portion of the block will be reserved for transit. The next 3 kilobyte of the block on a 4 KB card can be reserved for the card issuer application directory table. The AD root block will reside in the first sector of the block reserved for the application directory.
- In certain exemplary embodiments, key rotation may be implemented by the control software application. The control software application may initially be installed in the secure element via the TSM or at the card issuer's manufacturing facilities prior to incorporating within contactless smart card devices. However, the key rotation can occur at instantiation time of the control software application, at the first use of the device. The access key rotation may initiate as part of the instantiation code of the control software applet that is triggered when the contactless smart card device is turned on. In some exemplary embodiments, the control applet can be pre-installed via the card manufacturer's production process whereby it is installed in the ROM or the EEPROM memory section when the semiconductor die leaves the manufacturer's wafer processing (after testing). As part of that process, the instantiation code for the control applet will not be executed thereafter. For this reason, a check for “not yet rotated” can be included once the control applet has been selected for instantiation to ensure that the control applet cannot be used (or selected) without having the access key rotated. A special command that needs disabling is not needed as the check at any time will only execute the access key rotation once. The control applet, in this case, needs to ensure that it secures a possible abort of access key rotation to ensure that all keys have been rotated at least once before the access key rotation feature is disabled for the device.
- In certain exemplary embodiments, the access key rotation can be executed as early as the production process of the contactless card and as late as the incorporation and initiating of the various driver software for each component, including the contactless smart card in the contactless smart card device, for example, a mobile phone. Incorporating and initiating of the smart card will ensure that the process of securing (or disabling) the embedded secure element (eSE) is not needed thereafter. Further, the process where the key rotation is performed at the original equipment manufacturer (OEM) at time of production or testing of the card is useful in verifying if the NFC module, containing the NFC controller, PN544, and JCOP, is working correctly. This process ensures that any soldering and die work has not cracked or damaged the chip. The OEM can execute this checking process as a functional test of the semiconductor die. As a result, the OEM can implement a quality check to improve device quality prior to delivery, and the card issuer will have the advantage of the key rotation being done prior to implementing the card issuer's embedded software.
-, a TSM software application (or a TSM applet in the JavaCard VM environment) may independently, or as a part of the control software application, provide TSM implementation in the secure element via the public-private key asymmetric cryptography algorithm that allows for a non-TSM computer to provide public key encrypted software applications and application data, as well as life-cycle controls to the TSM software application in the secure element. The TSM software application provides the control software application or the installed software applications, the required permissions to perform certain software application life-cycle functions, including installation, instantiation, starting, stopping, destroying, updating, activating, de-activating, swapping of sectors and memory blocks, changing of access keys, and changing of access bits for the access conditions, each function performed with intervention from an external TSM computer device. Each of a number of a non-TSM computers, including software applications or application data, registers with the card issuer via the TSM software application or the control software application, where the registration process provides the non-TSM computer with a public key for providing application data to the secure element of the contactless smart card. Further, the TSM software application may then control the lifecycle of the application data within the secure element using permissions granted to the TSM software application as a part of the registration process.
-.
- In certain exemplary embodiments, the non-TSM computer, in turn, uses the public key to encode application data or software application that the non-TSM computer wants to provision into the namespace. A partner device may also be a common non-TSM computer including software applications from multiple software application providers. While the notion of a control applet transaction can be narrow and controlled by design of the commence provision message, it can also be one of many potentially shared APIs between the wallet software application and other software applications within the secure element. In an exemplary embodiment, a specify check balance interface and other functions of a transit system software application can included as part of the software application function. The control software application's message transaction can be one mechanism by which a software application provider may commit data to the secure element from a non-TSM computer. The software application or application data from the non-TSM server may be stored in a part of the secure element assigned for temporary storage, or in an external secure memory area with a secure channel communication to the control software application in the secure element.
- In certain exemplary embodiments, an external card reader device with the capability to access SSLOTs within the contactless smart card device may include provisions to function as a TSM computer or a non-TSM computer with the public-private key pair encryption in place. The TSM or non-TSM type card reader can control software applications resident that are identified as being within the card reader's control or were issued by a software application provider with control over both, the card reader and certain software applications within the secure element. When a software application provider wants to initiate a software application related transaction via a card reader device, the relevant non-TSM computer may send a push notification to the wallet software application. In turn, the wallet software application initiates a control software application transaction using the secure communication channel, to verify if the request is deemed valid.
- In certain exemplary embodiments, the wallet software application may not offer strong guarantees for receipt or response to notifications within any specific timeframe. The commence provision message structure between the control applet (or TSM applet) and the partner device may comprise the TSM software application or the control software application public key for encryption purposes, the unique ID for the secure element, a protocol version number, a transaction identifier, and an AD A-key to enable access to the AD for partner readers, an event notification that the partner can reference if it was the partner who requested the transaction via a push notification, and a wallet software application callback identifier, so that the partner can push notifications to the wallet at a future date. The provisioning response message can comprise a response status (in other words, SUCCESS, ERROR, RETRY, etc.); a response detail (in other words, if response status is RETRY, then the detail string may say server is down, try again later); an RSLOT/SSLOT Boolean where an AID is required if this is an RSLOT and the AID must be the card issuer application directory ID assigned to software application service provider.
- Further, in certain exemplary embodiments, when the SLOT assigned to a software application in the response of a commence provision message is an SSLOT, the AD ID is the valid SSLOT application ID assigned to the partner software application. The response from the software application provider via the card reader or the TSM computer can further comprise the A-key that should be used to protect access to the application data of the selected SLOT or lifecycle functions assigned to the software application, where each is encrypted using the control applet public key. The response area for the public key may be blank if the correct key is already in place at the TSM computer, the TSM software application, or card reader device from where the data to be provisioned. Similarly, the response area for a rotation code may be blank if the transaction is a key rotation type transaction, and existing data is valid; then the B-Key is used in the response when the SLOT is an SSLOT. The SSLOT is encrypted using the control applet public key for rotating the key, while the RSLOT partners cannot rotate the B key. After a transaction with a card reader or TSM computer is completed, a transaction message that the partner would like to share with the user is applied to the contactless smart card and shows up on the user-interface software application (wallet software application), indicating, for example, if the transaction was a top-up of a gift card, where the message could be: “Thanks for topping up your gift card!”
- In certain exemplary embodiments, provisioning of the SSLOT initiates when a software application provider and a card issuer enter into a contract ensuring the protection of the B-key and consent to a user-interface software application to be implemented via the wallet software application for interacting with a control software application, thereby controlling certain aspects of the software application from the software application provider. When all agreements are in place, the control software application (combined with the TSM software application) in installed within the secure element of the contactless smart card device. The provisioning of an SSLOT proceeds by securing a user interaction or push notification to trigger a control applet transaction; the wallet software application forms a secure connection channel to the software application partner (software application non-TSM or TSM computer); a commence provision request is sent to the partner encoded as JSON over REST; the partner uses the data in the request to potentially encode data and A+B keys in the response; the wallet software application checks the validity of the response message for correctness and legality (for example, ‘StoreA’ cannot overwrite the ‘StoreB’ application); when the response is legal and correct, the wallet software application packages the application data payload into a control applet command; the command then sent to the control applet over a locally secured channel (secured using session ID, SSL, and binary app signing+card OS security); the control applet using the private key to decodes the data payload and keys in the incoming control applet command; the control applet performs key management diversifying and saving the A+B keys as necessary; and the control applet writes the data payload into the correct location specified by the SSLOT application ID (AID).
- The provisioning of an RSLOT proceeds in the same fashion as the SSLOT provisioning described previously with the following exceptions: An RSLOT partner can only specify an A-key to be diversified and the RSLOT partner must use an RSLOT or card issuer directory application ID for their application. Since the control applet maintains knowledge of all the keys for the namespace at all times, it is capable of oversubscribing access to the block. The control applet, for example, could allow two transit authorities who use the entire 1 KB of SSLOT space to co-exist in the block. This action is implemented by using user provisioned areas for both a first city transit card and a second city transit card into their wallet. The control applet copies the different applets into and out of the secure element dynamically at the request of the user or based on GPS location of the device hosting the contactless card. When the data for one transit agency is rotated out of the block, the control applet stores the “standby” data in the secure element. When it is necessary to re-enable the standby card, the control applet swaps the standby data into the live block and stores the replaced data. This process can be applied to the RSLOTS as well.
- In certain exemplary embodiments, a local trusted service manager (TSM) can be implemented using the asymmetric cryptography method, where the TSM applet exists within the secure element of the smart card and the TSM applet stores a private key to decrypt a public key encrypted application data set from a non-TSM server. The corresponding public key is signed and authorized by the card issuer or a software application provider with the same signed certificates. This process allows the contactless smart card to interact with external card reader devices and to secure a script for software applications and application data without the TSM or a TSA requirement. By way of example, the implementation uses a wallet software application, where the wallet software application sends a certificate to the owner of the application data (software application provider). The wallet software application and application data may include a bank seeking to provision account information in a secure element, a transit agency seeking to provision or change balance information, or a merchant wishing to provision or change gift card, loyalty card, coupons, or other information. The application data issuer examines certificates, validates the signature from the wallet software application, and encrypts the application data with a public key specific to the end-user's contactless smart card device that requested the application data. The application data provider (software application provider) then sends the encrypted data to the local TSM applet (or the control applet, when combined), within the secure element of the end-user's contactless smart card device which incorporates the contactless smart card.
- In certain exemplary embodiments, the data path for this encrypted message including the application data can be through the wallet software application (similar to the control applet) using secure communication channels or directly to the control applet. The local TSM applet receives the requested data, verifies the format, verifies the permissions, and performs any other checks to authenticate the application data. Thereafter, the local TSM applet decrypts the application data and installs it to the secure element. In the case of the control applet implementing the local TSM, the received data is decrypted, verified, and installed directly using the contactless card's APIs. In certain exemplary embodiments, the local TSM applet creates a secure script that uses the contactless smart card device's access keys to install the application data. The downloaded application data in encrypted format may be stored in a temporary memory in the secure element or outside the secure element with secure channel connection to the secure element. Further, the secure script is exported from the secure element and executed within the contactless smart card device by a native software application running in the host operating system. In certain exemplary embodiments, the application data from the software application provider is never exposed outside the TSM software application and the contactless smart card device, and similar to the TSM computer, is secure without interacting with a external TSM computer.
- The combination of a local TSM applet and the RSLOT implementation using a control applet allows the contactless smart card device to receive and install card information securely from a non-TSM computer. This process can prevent the software application provider from actively managing the lifecycle of this data. The data can be swapped, enabled, and displayed within the secure element of the contactless smart card by using the secure channel and user preferences from a wallet software application cam be deployed with permission from the TSM applet without contacting an external TSM computer.
FIG. 1illustrates a computer-implemented system 100 and device 144 for partitioning the namespace of a secure element in contactless smart card devices and for writing application data in the secure element using requests from a software application outside the secure element according to certain exemplary embodiments. Contactless smart card device 144 includes secure element 156, 176 and the antenna 180. The secure element 156 may be a part of a SIM card, a UICC card, an integrated circuit chip of a CDMA contactless payment device, or an SD card. The external secure element and secure memory 184 is illustrated to provide an example of a temporary, but secure memory connected to the secure element, for software applications to be temporarily placed prior to installation, or during de-activation, to free space in the secure element sectors.
- The secure element 156 includes the control software application 160, the secure element namespace 164, which holds the application data and the software applications for transaction purposes. A temporary memory 168 may be incorporated into a section of the existing sectors of the secure element namespace, or in a different partition of the secure element namespace. The temporary memory 168 also may be used in lieu of the external secure element 184. The downloaded application data or software application 172, as well as de-activated software applications, may reside within the temporary memory 168. The NFC controller 176 is triggered via changes made at the control software application or within the sectors of the secure element namespace. Alternatively, if the contactless smart card device is set to passively transmit a radio signal for a reader terminal 188, the NFC controller may remain active when the phone is switched off to enable this passive application of the contactless smart card device 144.
- In certain exemplary embodiments, the user-interface software application 152 is the wallet software application that executes within the operating system or the virtual machine environment 148 of the contactless smart card device 144. The user-interface software application 152 provides information to the end-user and accepts information from the end-user via a keypad, voice, or a touch sensitive method. Each of the contactless smart card components may communicate with the secure element or external secure memory. The contactless smart card device 144 communicates with the card issuer 104 and the software application providers 112, using one of the wireless radio communication methods 140 or wireless internet network (Wi-Fi) 196. In certain exemplary embodiments, the card issuer 104 may be the wireless service provider 136. The two components 104 and 136 illustrated in
FIG. 1may then be combined to host the trusted service manager 108, which is illustrated as being resident on the card issuer's 104 side. Software application providers 112 may include credit card companies 116, ticketing companies (transit systems) 120, coupon companies 124, authentication companies (loyalty, membership, and security authentication) 128, and a protected information provider 121, such as a bank, merchant, or other financial service provider, for providing confidential or otherwise protected information (for example, account information), which may be used to instantiate a particular card. Each component 116-128 may include independent secure computers hosting application data and software applications which may be provided to the contactless smart card device 144 directly using connection 196 or indirectly through 136 and 140.
- In certain exemplary embodiments, the software application providers 112 provide software applications for transaction purposes to the card issuer 104 for hosting in the TSM computer 108. The software applications may provide secure download capabilities via a secure Wi-Fi connection 196, but to make use of wireless mobile communication's security features, the TSM 108 is used to deploy software applications. In certain secure element applications, the process of installation application data or software applications uses signed certificates that are tracked from the TSM 108 to the secure element 156; accordingly, installation to the secure element may not apply to the Wi-Fi channel 196 and in such circumstances, it may be preferred to use the GSM/CDMA wireless channel 140.
FIG. 2illustrates a computer-implemented system 200 and device 244 for partitioning the namespace of a secure element in contactless smart card devices and for writing application data in the secure element using requests from a software application outside the secure element according to certain exemplary embodiments. Contactless smart card device 244 includes secure element 256, 276 and the antenna 280. The secure element 256 may be a part of a SIM card, a UICC card, an integrated circuit chip of a CDMA contactless payment device, or an SD card. The external secure element and secure memory 284 is illustrated to provide an example of a temporary, but secure memory connected to the secure element, for software applications to be temporarily placed prior to installation, or during de-activation, to free space in the secure element sectors.
- The secure element 256 includes the control software application or a TSM software application 260, as well as the secure element namespace 264, which holds the application data and the software applications for transaction purposes. A temporary memory 268 may be incorporated into a section of the existing sectors of the secure element namespace, or in a different partition of the secure element namespace. The temporary memory 268 also may be used in lieu of the external secure element 284. The downloaded application data or software application 272, as well as de-activated software applications may reside within the temporary memory 268. The NFC controller 276 is triggered via changes made at the control software application or within the sectors of the secure element namespace. Alternatively, if the contactless smart card device is set to passively transmit a radio signal for a reader terminal 292, the NFC controller may remain active when the phone is switched off to enable this passive application of the contactless smart card device 244.
- In certain exemplary embodiments, the user-interface software application 252 is the wallet software application that executes within the operating system or the virtual machine environment 248 of the contactless smart card device 244. The user-interface software application 252 provides information to the end-user and accepts information from the end-user via a keypad, voice, or a touch sensitive method. Each of the contactless smart card components may communicate with the secure element or external secure memory. The contactless smart card device 244 communicates with the card issuer 204 and the software application providers 212, using one of the wireless radio communication methods 240 or wireless internet network (Wi-Fi) 296. In certain exemplary embodiments, the card issuer 204 may be the wireless service provider 236. The two components 204 and 236 illustrated in
FIG. 2may then be combined to host the a computer capable of deploying software applications via a public key, where the computer is a non-TSM computer 208, which is illustrated as being resident on the card issuer's 204 side. Software application providers 212 may include credit card companies 216, ticketing companies (transit systems) 220, coupon companies 224, authentication companies (loyalty, membership, and security authentication) 228, and a protected information provider 221, such as a bank, merchant, or other financial service provider, for providing confidential or otherwise protected information (for example, account information), which may be used to instantiate a particular card. Each component 216-228 may include independent secure computers hosting application data and software applications which may be provided to the contactless smart card device 244 directly using connection 296 or indirectly through 236 and 240.
- In certain exemplary embodiments, the control software application or the TSM software application access a private key stored in the temporary memory 268. In an exemplary embodiment, the private key is generated by the card issuer using an asymmetric cryptography algorithm. The private key may be changed and pushed from the card issuer 204 to the secure element 256 at pre-determined intervals to keep the private key rotated and secure. Further, the TSM software application may be integrated into the control software application, thereby enabling the two software applications to control the transaction software applications from the software application providers. The public key generated by the cryptography algorithm is then distributed to a variety of legal software application providers, including providers 216-228 and the software applications hosted by the non-TSM computer 208. The use of the asymmetric cryptography algorithm provides a benefit to the system 200, where a remote TSM is not required for minor permissions for software applications, including instantiation, stopping, starting, and destroying of the software application.
- The permissions may be granted via the TSM software application 260, which includes the private key to decrypt and authenticate software applications from non-TSM computers 208 and 216-228. Further, the TSM software application may authenticate requests for changes to be performed on installed software applications within the secure element, thereby eliminating the secure element runtime environment from calling APIs for seeking permissions for software applications in terms of lifecycle functions.
FIG. 3illustrates a data structure 300A for the namespace of a secure element 304 in contactless smart card devices and the application data 300B associated with the control software application 300 which controls the partitioning and storage of application data in the secure element namespace according to certain exemplary embodiments. The secure element namespace is illustrated as a table in FIG. 3which includes 16 bytes per memory block 316 and 4 blocks 312 per sector 308. Each memory block includes access memory blocks 328A-Z, and data memory blocks 332. Each access memory block 328A-Z further includes access keys 320 and 324, where each of the A keys 320A-Z and the B key 324A-Z provides one, or a combination of two or more access types to the entire block. The access memory blocks 328 include access bits describing the access type assigned to the blocks in the sector. The manufacturer's block 336 includes version information and unique identifier information for deriving the default access keys A and B. The data memory block for sector 0 also includes an application directory in block 1 and block 2 of sector 0. The application directory 332A is a table with AID information and pointers to the sector including the software application or application data underlying the AID.
- The control software application 340 is shown for illustrative purposes as including the application data, but in certain exemplary embodiments, the application data is stored in data memory blocks of the same secure element namespace 304 or a physically or virtually different secure element outside the secure element 304. The control software application 340 stores all the access keys 344, including access keys for changing the B key and the access bits 348 for each of the sectors in the secure element namespace 304. The sector type 352 is defined according to the access bits stored in the control software application, where the sector type allows a single software application to perform certain functions within the sector—for example, write, read, increment, decrement, and a directory sector type. Further, the sector type associates with the slot selection and distribution made by the card issuer via the control software application. The read/write blocks may be assigned SSLOT sectors, while the initial value in sector 15 can only be written when transaction type software application has control of the sector, and is therefore an SSLOT owner. When a software application is stored across multiple sectors, the AID per sector is stored 356 in the control software application for following the structure of software applications in the contactless smart card. A change log logs end-user requests, changes made by an external TSM computer and requests for access keys made by external card readers during the lifecycle of a software application in the secure element.
FIG. 4illustrates a computer-implemented method 400 for partitioning the namespace of a secure element into at least two storage types by a control software application within the secure element according to certain exemplary embodiments. In block 405, a card issuer or the contactless smart card device end-user defines access types, for example, a first access type, a second access type, a first access key, and a second access key, for a number of memory blocks within the secure element namespace. Each of the first access key and the second access key provides one of the first access type, the second access type, or a combination of the first and the second access types to the plurality of memory blocks within the secure element namespace. The control software application may be used to define the access types and access keys, where, in an alternate embodiment, the definition may be performed after production, during the rotation of the access keys as described above. Access keys include the A key and B key, and access types include write, read, increment, decrement, and restore or default.
- Block 410 performs a selection process using the control software application to select from the memory blocks within the secure element namespace, at least a first group of memory blocks, a second group of memory blocks, and access types for each of the selected groups of memory blocks. At least one of the memory blocks in each selected groups of memory blocks is an access memory block for providing the selected access type for the software application or application data within data memory blocks of the selected groups of memory blocks to an external data requesting device
- Block 415 performs a transmitting function to transmit, from the control software application, for storage in the access memory block for each of the selected groups of memory blocks, the first access key, the second access key, and the selected access types for each respective selected groups of memory blocks, thereby partitioning the namespace of the secure element into at least two storage types.
FIG. 5illustrates a computer-implemented method 500 for writing application data in a secure element namespace using requests from a user-interface software application resident outside the secure element according to certain exemplary embodiments. Block 505 performs an initial transmitting function, from the user-interface software application or the wallet software application, to a remote trusted service manager (TSM) computer, a request for application data, and at least an access key for a write access type. The application data requested via block 505 is to be written to the secure element namespace.
- Block 510 performs a receiving step, receiving the requested application data, and the requested access key at a temporary memory of the secure element, from the remote TSM computer. As discussed above, the temporary memory may be physically or virtually different for the secure element used for storing application data and software applications for transaction purposes. For example, the temporary memory may be the external secure memory 184, 284 or the temporary memory 168, 268. Block 515 uses the control software application in the secure element to write the requested application data from the temporary memory of the secure element to a data memory block of the secure element namespace. The data memory block is pre-determined or assigned by the control software application. Further, the data memory block of the secure element namespace is accessed by the control software application using the requested access key received from the TSM computer.
FIG. 6illustrates a computer-implemented method 600 for implementing a trusted service manager (TSM) locally within the secure element of a contactless smart card device according to certain exemplary embodiments. A TSM software application is installed by block 605 in the secure element of the contactless smart card device. Block 605 may represent a step right after the rotation of the key at manufacture of the contactless smart card, or prior to deployment of the contactless smart card in the contactless smart card device. The TSM software application may be incorporated within the control software application of the secure element, or may be executed independently. The TSM software application includes computer code for executing a transmitting function to request application data and a decrypting function for decrypting an encrypted form of received application data, where the received application data is received at the contactless smart card device in response to the request from the transmitting function.
- Block 610 stores a private key in the secure element, where the private key is assigned to the TSM software application, where the private key is generated along with a public key using, for example, an asymmetric cryptography algorithm.
- A transmitting step follows via block 615 for transmitting by the TSM software application to one of a number of registered remote non-TSM computers, a request for application data. These non-TSM computers include devices 208 and 216-228 of
FIG. 2. The remote non-TSM computer is configured to access the public key for encrypting the application data responsive to the request. The TSM software application also can transmit a request for application data to a TSM computer, which may use the public key to return data to the device 244.
- Block 620 performs a receiving function in the contactless smart card device, where the encrypted application data is received and stored. The encrypted application data may be stored in a temporary memory within the secure element sectors assigned for the purpose, or via an external secure memory belonging to contactless smart card device, where the external secure memory is connected to the secure element via a secure communication channel. Application data providers may encrypt the requested application data using the public key and then communicate the encrypted data to the device 244 for receipt in block 620.
- Block 625 decrypts the encrypted application data using the private key assigned to the TSM software application. The decrypted application data is ready for installation within a pre-determined data memory block of the secure element, where the data memory block allocation is decided by the control software application based on the current status of the memory blocks, the access bits assigned to the memory blocks, and the state of the sector—SSLOT or RSLOT.
- In an exemplary embodiment, the secure element 256 can have assigned thereto a unique private key and corresponding public key. When the TSM software application is first installed, it can generate two public/private key pairs and save these key pairs internally. One key pair is used for receiving encrypted communication as described with reference to
FIG. 6, and the other key pair is used to allow the TSM software application to sign messages.
- A trusted entity, such as a remote trusted service manager, can contact the TSM software application to obtain the public keys and to create certificates that allow third parties to verify that these public keys are indeed associated with the TSM software application in a real secure element. These third parties, for example, the devices 208 and 216-228 of
FIG. 2, then can encrypt messages using the public key for encryption, send the encrypted messages to the secure element 256, and verify that messages they receive originated with secure element 256.
- Invocation of the decryption function of the TSM software application, using the private key for decryption, can only be called by other applications installed in the secure element 256. Certificates can be created based on the public/private key pairs to vouch for the security of the public keys.
FIG. 7illustrates a computer-implemented method 700 of controlling access to the secure element namespace for partitioning and provisioning purposes, the access conditions 704, access types 708-712, and access keys, for example, 720 and 724 that many be assigned to memory blocks within various sectors of a contactless smart card according to certain exemplary embodiments. For the sector or memory block listed in column 716, the type of change that may be implemented via the control software application is defined by the access conditions 704. The control software application stores the access condition information along with the table 300B illustrated in FIG. 3. The read access type in the first row is set to Key A1 which implies that the related sector 728 may be read by an external card reader device capable of displaying the same Key A1 to the contactless smart card. Similarly, Key A1 or B1 may be used to access the writing capabilities to the sector defined by the access condition in 728. By way of an example, using the transit system, for a sector that has access bits, in the access memory block for allowing exit and entry data input, the external card reader provides the smart card with the B key to the particular sector for writing the exit and entry stations. Initial value changes may be made at sector 740 using key B2 which may be different from key B1. A card reader at a turnstile may not access this sector, and a special reader at the transit office may provide access to this sector for adding value to the contactless smart card.
- The access keys themselves may be changed in certain circumstances, but in the embodiments described herein, the control software application logs and permits changes to the B key based on the contractual obligation between the card issuer and the software application provider. Accordingly, as illustrated in
FIG. 7, Key A2 752 may be changed using the same key A2 or the high privileged key, key B2 to first access and change the access keys in the access memory block of the sector. Key B2 is always a higher security key and can be used to perform the access key changes to Key A2 and key B2 of a selected sector, as illustrated via 744-764. The B key may not be changed with the A key even though the converse may work for select sectors. Finally, access bits in the access memory block may be changed 776, thereby assigning the memory block different privileges for RSLOT and SSLOT purposes. Further, software applications in the memory blocks may be read out and stored in different memory blocks prior to changing the access keys or the access bits. The application data or software applications may then be written back to a new or the original memory blocks after the access keys and bits have been changed. For an RSLOT, by way of an example, memory sectors 728 and 740 may need to be SSLOTs to allow a transit authority to add values to the data within these slots. However, the access bits in the access memory block may not be an SSLOT, and may instead be an RSLOT, thereby allowing the transit authority to change access conditions, from increment to decrement without changing keys in the block.
- One or more aspects of the disclosure disclosure in computer programming, and the disclosure should not be construed as limited to any one set of computer program instructions. Further, a skilled programmer would be able to write such a computer program to implement an embodiment of the disclosed disclosure based on the appended flow charts and associated description in the application text. Therefore, disclosure of a particular set of program code instructions is not considered necessary for an adequate understanding of how to make and use the disclosure. The inventive functionality of the disclosure will be explained in more detail in the following description of exemplary embodiments, read in conjunction with the figures illustrating the program flow.
- disclosure defined in the following claims, the scope of which is to be accorded the broadest interpretation so as to encompass such modifications and equivalent structures.
Claims (19)
1. A computer-implemented method for implementing a trusted service manager (TSM) locally within a secure element of a contactless smart card device, the method comprising:
installing, in the secure element of the contactless smart card device, a TSM software application, wherein the TSM software application comprises computer code for executing a transmitting function to request application data and a decrypting function to decrypt an encrypted form of received application data, the received application data received at the contactless smart card device in response to the request from the transmitting function;
storing, in the secure element, a private encryption key assigned to the TSM software application and a corresponding public encryption key;
transmitting, by the transmitting function of the TSM software application, a request for application data to a registered remote computer, wherein the remote computer is configured to access the public key, and wherein the remote computer encrypts the requested application data using the public key;
receiving, in the contactless smart card device, the encrypted application data responsive to the transmitted request; and
decrypting, by the decrypting function of the TSM software application, the encrypted application data using the private key.
2. The method according to
claim 1, further comprising:
writing, by the TSM software application or a control software application in the secure element, the decrypted application data to at least one memory block in the secure element.
3. The method according to
claim 1, further comprising:
managing, by a user-interface software application on the contactless smart card device and resident outside the secure element, the TSM software application in the secure element, thereby executing inputs at the user-interface software application for transmitting a request for application data to the remote computer, receiving the encrypted application data, and decrypting the encrypted form of the received application data.
4. The method according to
claim 3, wherein the user-interface software application transmits user inputs from a display or a keypad of the contactless smart card device to the TSM software application in the secure element via a secure communication channel.
5. The method according to
claim 1, wherein the encrypted application data is received in the secure element or a secure memory outside the secure element of the contactless smart card device, the secure memory connected to the secure element via a secure communication channel.
6. The method according to
claim 1, wherein the remote computer is registered for deploying application data to the contactless smart card device only when the remote computer is in possession of the public key.
7. The method according to
claim 1, wherein application data comprises at least one of a software application executable in the secure element or data to support an existing software application in the secure element.
8. The method according to
claim 1,.
9. A computer-implemented system for implementing a trusted service manager (TSM) locally within the secure element of a contactless smart card device, the system comprising:
a contactless smart card device;
a secure element resident on the device and storing a private encryption key assigned to the secure element and a corresponding public encryption key;
a TSM software application resident in the secure element, the TSM software application comprising computer code for executing a transmitting function to request application data and a decrypting function for decrypting an encrypted form of received application data, the received application data received at the contactless smart card device in response to the request from the transmitting function,
wherein the transmitting function of the TSM software application transmits, to one of a plurality of remote computers, a request for application data, wherein the remote computer is configured to access the public key, and wherein the remote computer encrypts the requested application data using the public key;
wherein the device receives the encrypted application data in response to the transmitted request, and
wherein the decrypting function of the TSM software application decrypts the encrypted application data using the private key.
10. The system according to
claim 10, wherein the secure element further comprises a control software application that writes the decrypted application data to at least one memory block in the secure element.
11. The system according to
claim 10, further comprising:
a user-interface software application resident on the device and outside the secure element that manages the TSM software application in the secure element, thereby executing inputs at the user-interface software application to transmit a request for application data to the remote computer, receive the encrypted application data, and decrypt the encrypted form of the received application data.
12. The system according to
claim 11, wherein the user-interface software application transmits user inputs from a display or a keypad of the contactless smart card device to the TSM software application in the secure element via a secure communication channel.
13. The system according to
claim 10, wherein the encrypted application data is received in the secure element or a secure memory outside the secure element of the contactless smart card device, the secure memory connected to the secure element via a secure communication channel.
14. The system according to
claim 10, wherein each of the plurality of registered remote computers is registered for deploying application data to the contactless smart card device only when the remote computer is in possession of the public key.
15. The system according to
claim 10, wherein application data comprises one of at least a software application executable in the secure element or data to support an existing software application in the secure element.
16. The system according to
claim 10,.
17. A computer-implemented method for implementing a trusted service manager (TSM) locally within the secure element of a contactless smart card device, the method comprising:
registering a non-TSM computer thereby obtaining access for the non-TSM computer to a public encryption key for encrypting application data prior to transmission of the encrypted data to the contactless smart card device, the public key corresponding to a private encryption key assigned to the contactless smart card device;
receiving, at the non-TSM computer from a contactless smart card device, a request for application data, the application data resident in the remote non-TSM computer, the request comprising the public key;
encrypting, at the non-TSM computer, the requested application data using the public key; and
transmitting, to the contactless smart card device, the encrypted application data.
18. The method according to
claim 19, wherein application data comprises at least one of a software application executable in the secure element or data required to support an existing software application in the secure element.
19. The method according to
claim 19, wherein encrypted application data is transmitted from the non-TSM computer to a secure element of the contactless smart card.
Priority Applications (2)
Applications Claiming Priority (11)
Related Child Applications (1)
Publications (2)
Family
ID=46236017
Family Applications (3)
Family Applications After (2)
Country Status (8)
Cited By (76)
Families Citing this family (68)
Family Cites Families (149)
- 2011
- 2011-09-17 US US13/235,375 patent/US8352749B2/en active Active
- 2011-09-26 US US13/244,715 patent/US8335932B2/en active Active
- 2011-12-16 CN CN 201180060859 patent/CN103430222B/en active IP Right Grant
- 2011-12-16 JP JP2013544841A patent/JP5443659B2/en active Active
- 2011-12-16 EP EP11808491.2A patent/EP2638529A1/en active Pending
- 2011-12-16 CN CN201410768420.4A patent/CN104504806B/en active IP Right Grant
- 2011-12-16 CA CA2820915A patent/CA2820915C/en active Active
- 2011-12-16 WO PCT/US2011/065590 patent/WO2012083221A1/en active Application Filing
- 2011-12-16 KR KR1020137015609A patent/KR101463586B1/en active IP Right Grant
- 2011-12-16 AU AU2011343474A patent/AU2011343474B2/en active Active
- 2012
- 2012-12-17 US US13/717,686 patent/US8793508B2/en active Active | https://patents.google.com/patent/US20120159163A1/en | CC-MAIN-2019-51 | en | refinedweb |
Word UDF
The Word UDF offers functions to control and manipulate Microsoft Word documents.
This page describes the Word UDF that comes with AutoIt 3.3.10.0 or later.
Contents
- 1 Features
- 2 Concepts
- 3 Functions
- 4 Script breaking changes after AutoIt version 3.3.8.1
- 4.1 General
- 4.2 Function _WordCreate/_Word_Create
- 4.3 Function _WordDocPropertyGet/-
- 4.4 Function _WordDocPropertySet/-
- 4.5 Function _WordErrorHandlerDeRegister/_DebugCOMError
- 4.6 Function _WordErrorHandlerRegister/_DebugCOMError
- 4.7 Function _WordErrorNotify/-
- 4.8 Function _WordMacroRun/-
- 4.9 Function _WordPropertyGet/-
- 4.10 Function _WordPropertySet/-
Features
New versions of Microsoft Office have been released since the last changes were made to the Word UDF. The new extensions (e.g. docx) were not (fully) supported, new functions were missing etc. The current version of the Word UDF lifts this limitations.
- Works with as many instances of Word as you like - not just one
- Works with any document - not just the active one
- Only does what you tell it to do - no implicit "actions"
- Works with ranges and lets you move the insert marker
- Supports reading/writing of tables
- Support for every file format Word supports
The UDF only covers basic user needs. Single line functions (like getting document properties) or functions with too many parameters (like running a macro) are not covered by this UDF. You need to use the Word COM yourself.
Concepts
Good reading: Working with Word document content objects
Range
A Range is a block made of one or more characters that Word treats as a unit. The functions of the UDF mainly work with ranges. A range - unlike a selection - is not visible on the screen.
Example:
Global $oWord = _Word_Create() Global $oDoc = _Word_DocAdd($oWord) Global $oRange = _Word_DocRangeSet($oDoc, 0) ; Use current selection $oRange.Insertafter("INSERTED TEXT") ; Insert Text $oRange = _Word_DocRangeSet($oDoc, $oRange, $WdCollapseEnd) ; Collapse the start of the range to the end position (create an insertion mark)
Functions
_Word_DocFindReplace
This function supports wildcards.
The following document is a good reading regarding wildcards.
To find and replace a string not only in the main text but everywhere in the document (including headers, footnotes etc.) please have a look at this article
Microsoft limits the replacement text to 255 characters. How to overcome this limitation (and how to do it in AutoIt) is described here.
Tips and tricks
Convert units back and forth
The following functions allow conversions from/to all units know to Microsoft Word.
Func CentimetersToPoints($iValue) Return ($iValue * 28.35) EndFunc ;==>CentimetersToPoints Func InchesToPoints($iValue) Return ($iValue * 72) EndFunc ;==>InchesToPoints Func LinesToPoints($iValue) Return ($iValue * 12) EndFunc ;==>LinesToPoints Func PicasToPoints($iValue) Return ($iValue * 12) EndFunc ;==>PicasToPoints Func PointsToCentimeters($iValue) Return ($iValue / 28.35) EndFunc ;==>CentimetersToPoints Func PointsToInches($iValue) Return ($iValue / 72) EndFunc ;==>InchesToPoints Func PointsToLines($iValue) Return ($iValue / 12) EndFunc ;==>LinesToPoints Func PointsToPicas($iValue) Return ($iValue / 12) EndFunc ;==>PicasToPoints
Script breaking changes after AutoIt version 3.3.8.1
New versions of Microsoft Office have been released since the last changes were made to the Word UDF. New file types and new functions needed to be supported, hence the UDF was complete rewritten.
Some functions/parameters have been removed or renamed, new functions/parameters have been added. A detailed list of changes can be found here.
General
All function names have been changed from _Word* to _Word_*.
@extended no longer contains the number of the invalid parameter. The code returned in @error tells exactly what went wrong.
The following list shows the old/new function/parameter name (a "-" is shown if the function/parameter has been removed) and some example scripts how to mimic the behaviour of the "old" UDF. If there is no entry for a removed function/parameter then there is no need for this functionality.
Function _WordCreate/_Word_Create
It's mandatory now to call function _Word_Create before any other function. You could have used _WordCreate or _WordAttach in the old Word UDF. @extended is set if Word was already running.
Parameter $s_FilePath/-
Optional parameter to specify the file to open upon creation. Use _Word_DocOpen or _Word_DocAdd now.
Function _WordDocPropertyGet/-
Retrieves builtin document properties.
Word object model reference on MSDN for Word 2010:
Example code:
Global $oWord = _Word_Create() Global $oDoc = _Word_DocOpen($oWord, @ScriptDir & "\test.doc") Global $wdPropertyAuthor = 3 Global $sAuthor = $oDoc.BuiltInDocumentProperties($wdPropertyAuthor).Value ; Retrieves the Author of the document
Function _WordDocPropertySet/-
Sets builtin document properties.
For links to the Word object model reference on MSDN see function _WordDocPropertyGet.
Example code:
Global $oWord = _Word_Create() Global $oDoc = _Word_DocOpen($oWord, @ScriptDir & "\test.doc") Global $wdPropertyAuthor = 3 $oDoc.BuiltInDocumentProperties($wdPropertyAuthor).Value = "PowerUser" ; Sets the Author of the document
Function _WordErrorHandlerDeRegister/_DebugCOMError
The default COM error handler has been moved to the Debug UDF. See _WordErrorHandlerRegister for details.
Function _WordErrorHandlerRegister/_DebugCOMError
The default COM error handler has been moved to the Debug UDF. But you can still create a custom COM error handler by using ObjEvent.
Example code:
#include <Debug.au3> _DebugSetup("Word Debug Window", True, 1, "", True) _DebugCOMError(1) ; Register a default COM error handler to grab Word COM errors and write the messages to the Debug window ; Do Word processing here _DebugCOMError(0) ; DeRegister the default COM error handler
Function _WordErrorNotify/-
The Word UDF no longer creates text error messages and writes them to the Console.
You need to check the macros @error and @extended after you called a function. The returned values are documented in the help file.
Function _WordMacroRun/-
A macro can now be run by a single line.
Example code:
Global $oWord = _Word_Create() Global $oDoc = _Word_DocOpen($oWord, @ScriptDir & "\test.doc") $oWord.Run("macro_name")
Function _WordPropertyGet/-
Retrieves application and document properties. Many of the properties for the Options object correspond to items in the Options dialog box.
Word object model reference on MSDN for Word 2010:
Example code:
Global $oWord = _Word_Create() $bVisible = $oWord.Visible ; Returns True when the Word application is visible, else False $bUpdatePrint = $oWord.Options.UpdateFieldsAtPrint ; True if Microsoft Word updates fields automatically before printing a document
Function _WordPropertySet/-
Sets application and document properties. Many of the properties for the Options object correspond to items in the Options dialog box.
For links to the Word object model reference on MSDN see function _WordPropertyGet.
Example code:
Global $oWord = _Word_Create() $bVisible = $oWord.Options.SaveInterval = 5 ; Sets Word to save AutoRecover information for all open documents every five minutes | https://www.autoitscript.com/w/index.php?title=Word_UDF&oldid=14016 | CC-MAIN-2019-51 | en | refinedweb |
Configuration instructions¶
Installing Weblate¶
Choose an installation method that best fits your environment in our Installation instructions.
Software requirements¶
Other services¶
Weblate is using other services for it’s operation. You will need at least following services running:
- PostgreSQL database server, see Database setup for Weblate.
- Redis server for cache and tasks queue, see Background tasks using Celery.
- SMTP server for outgoing e-mail, see Configuring outgoing e-mail.
Python dependencies¶
Weblate is written in Python and supports Python
2.7, 3.4 or newer. You can install dependencies using pip or from your
distribution packages, full list of them is available in
requirements.txt.
Most notable dependencies:
- Django
-
- Celery
-
- Translate Toolkit
-
- translation-finder
-
- Python Social Auth
-
- Whoosh
-
- Django REST Framework
-
Optional dependecies¶
Following modules are necessary for some of Weblate features. You can find all
of them in
requirements-optional.txt.
Mercurial(optional for Mercurial repositories support)
-
phply(optional for PHP support)
-
tesserocr(optional for screenshots OCR)
-
akismet(optional for suggestion spam protection)
-
ruamel.yaml(optional for YAML files)
-
backports.csv(needed on Python 2.7)
-
Zeep(optional for Microsoft Terminology Service)
-
aeidon(optional for Subtitle files)
-
Database backend dependencies¶
Any database supported in Django will work, see Database setup for Weblate and backends documentation for more details.
Other system requirements¶
The following dependencies have to be installed on the system:
Git
-
- Pango, Cairo and related header files and gir introspection data
-,, see Pango and Cairo
hub(optional for sending pull requests to GitHub)
-
git-review(optional for Gerrit support)
-
git-svn(optional for Subversion support)
-
tesseract.
Pango and Cairo¶
Changed in version 3.7.
Weblate uses Pango and Cairo for rendering bitmap widgets (see Promoting the translation) and rendering checks (see Managing fonts). To properly install Python bindings for those you need to install system libraries first - you need both Cairo and Pango, which in turn need Glib. All those should be installed with development files and GObject introspection data.
Verifying release signatures¶
Weblate release are cryptographically signed by the releasing developer. Currently this is Michal Čihař. Fingerprint of his PGP key with
.asc files which contains the PGP signature
for it. Once you have both of them in the same folder, you can verify the signature:
$: Can't check signature: public key not found
As you can see gpg complains that it does not know the public key. At this point you should do one of the following steps:
- Use wkd to download the key:
$ gpg --auto-key-locate wkd --locate-keys [email protected] pub rsa4096 2009-06-17 [SC] 63CB1DF1EF12CF2AC0EE5A329C27B31342B7511D uid [ultimate] Michal Čihař <[email protected]> uid [ultimate] Michal Čihař <[email protected]> uid [ultimate] [jpeg image of size 8848] uid [ultimate] Michal Čihař (Braiins) <[email protected]> sub rsa4096 2009-06-17 [E] sub rsa4096 2015-09-09 [S]
- Download the keyring from Michal’s server, then import it with:
$ gpg --import wmxth3chu9jfxdxywj1skpmhsj311mzm
- Download and import the key from one of the key servers:
$ gpg --keyserver hkp://pgp.mit.edu --recv-keys 87E673AF83F6C3A0C344C8C3F4AA229D4D58C245 gpg: key 9C27B31342B7511D: "Michal Čihař <[email protected]>" imported gpg: Total number processed: 1 gpg: unchanged: 1
This will improve the situation a bit - at this point you can verify that the signature from the given key is correct but you still can not trust the name used in the key:
$] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: 63CB 1DF1 EF12 CF2A C0EE 5A32 9C27 B313 42B7 511D.
Once the key is trusted, the warning will not occur:
$ gpg --verify Weblate-3.5.tar.xz.asc gpg: assuming signed data in 'Weblate-3.5.tar.xz' gpg: Signature made Sun Mar 3 16:43:15 2019]
Should the signature be invalid (the archive has been changed), you would get a clear error regardless of the fact that the key is trusted or not:
$ gpg --verify Weblate-3.5.tar.xz.asc gpg: Signature made Sun Mar 3 16:43:15 2019 CET gpg: using RSA key 87E673AF83F6C3A0C344C8C3F4AA229D4D58C245 gpg: BAD signature from "Michal Čihař <[email protected]>" [ultimate]
Filesystem permissions¶
The Weblate process needs to be able to read and write to the directory where
it keeps data -
DATA_DIR. All files within this directory should be
owned and writable by user running Weblate.
The default configuration places them in the same tree as the Weblate sources, however
you might prefer to move these to a better location such as:
/var/lib/weblate.
Weblate tries to create these directories automatically, but it will fail when it does not have permissions to do so.
You should also take care when running Management commands, as they should be ran under the same user as Weblate itself is running, otherwise permissions on some files might be wrong.
See also
Database setup for Weblate¶
It is recommended to run Weblate with PostgreSQL database server. Using a SQLite backend is really only suitable for testing purposes.
See also
Use a powerful database engine, Databases
PostgreSQL¶
PostgreSQL is usually the best choice for Django based sites. It’s the reference database used for implementing Django database layer.
See also', # Database name 'NAME': 'weblate', # Database user 'USER': 'weblate', # Database password 'PASSWORD': 'password', # Set to empty string for localhost 'HOST': 'database.example.com', # Set to empty string for default 'PORT': '', } }
Migrating from other databases.
Add PostgeSQL as additional database connection to the
settings.py:': '', } }
Create empty tables in the PostgreSQL¶
Run migrations and drop any data inserted into the tables:
python manage.py migrate --database=postgresql python manage.py sqlflush --database=postgresql | psql
Dump legacy database and import to PostgreSQL¶
python manage.py dumpdata --all --output weblate.json python manage.py loaddata weblate.json --database=postgresql
Other configurations¶
Configuring outgoing e-mail¶
Weblate sends out e-mails on various occasions - for account activation and on various notifications configured by users. For this it needs access to a SMTP server.
The mail server setup is configured using these settings:
names are quite self-explanatory, but you can find more info in the
Django documentation.
Note
You can verify whether outgoing e-mail is working correctly by using the
sendtestemail management command (see Invoking management commands
for instructions how to invoke it in different environments).
HTTP proxy¶
Weblate does execute VCS commands and those accept proxy configuration from
environment. The recommended approach is to define proxy settings in
settings.py:
import os os.environ['http_proxy'] = "" os.environ['HTTPS_PROXY'] = ""
See also
Proxy Environment Variables
Adjusting configuration the hosts your site is supposed to serve. For example:ALLOWED_HOSTS = ['demo.weblate.org']
See also
SESSION_ENGINE
Configure how your sessions will be stored. In case you keep the default database backend engine, you should schedule: ./manage.py clearsessions to remove stale session data from the database.
If you are using Redis as cache (see Enable caching) it is recommended to use it for sessions as well:SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
See also
Configuring the session engine,
SESSION_ENGINE
DATABASES
Connectivity to database server, please check Django’s documentation for more details.
See also
Database setup for Weblate,
DATABASES, Databases
DEBUG
Disable this for any production server. With debug mode enabled, Django will show backtraces in case of error to users, when you disable it, errors will be sent per e-mail to
ADMINS(see above).
Debug mode also slows down Weblate, as Django stores much more info internally in this case.
DEFAULT_FROM_EMAIL
See also
SECRET_KEY
Key used by Django to sign some info in cookies, see Django secret key for more info.
SERVER_EMAIL
Email used as sender address for sending e-mails to the administrator, for example notifications on failed merges.
See also
Filling up the database¶
After your configuration is ready, you can run
./manage.py migrate to create the database structure. Now you should be
able to create translation projects using the admin interface.
In case you want to run an installation non interactively, you can use
./manage.py migrate --noinput, and then create an admin user using
createadmin command.
You should also log in to the admin interface (on
/admin/ URL) and adjust the
default sitename to match your domain by clicking on Sites and once there,
change the
example.com record to match your real domain name.
Once you are done, you should also check the Performance report in the admin interface, which will give you hints of potential non optimal configuration on your site.
Production setup¶
For a production setup you should carry out adjustments described in the following sections. The most critical settings will trigger a warning, which is indicated by a red exclamation mark in the top bar if logged in as a superuser:
It is also recommended to inspect checks triggered by Django (though you might not need to fix all of them):
./manage.py check --deploy
See also
Disable debug mode¶
Disable Django’s debug mode (
DEBUG) by:
DEBUG = False
With debug mode on, Django stores all executed queries and shows users backtraces of errors, which is not desired in a production setup.
See also
Properly configure admins¶
Set the correct admin addresses to the
ADMINS setting to defining who will receive
e-mails in case something goes wrong on the server, for example:
ADMINS = ( ('Your Name', '[email protected]'), )
See also
Set correct sitename¶
Adjust sitename in the admin interface, otherwise links in RSS or registration e-mails will not work.
Please open the admin interface and edit the default sitename and domain under the
Sites › Sites (or do it directly at the
/admin/sites/site/1/ URL under your Weblate installation). You have to change
the Domain name to match your setup.
Note
This setting should only contain the domain name. For configuring protocol,
(enabling HTTPS) use
ENABLE_HTTPS and for changing URL, use
URL_PREFIX.
Alternatively, you can set the site name from the commandline using
changesite. For example, when using the built-in server:
./manage.py changesite --set-name 127.0.0.1:8000
For a production site, you want something like:
./manage.py changesite --set-name weblate.example.com
Correctly configure HTTPS¶
It is strongly recommended to run Weblate using the encrypted HTTPS protocol.
After enabling it, you should set
ENABLE_HTTPS in the settings, which also adjusts
several other related Django settings in the example configuration.
You might want to set up HSTS as well, see SSL/HTTPS for more details.
Use a powerful database engine¶
Please use PostgreSQL for a production environment, see Database setup for Weblate for more info.
Enable caching¶
If possible, use Redis from Django by adjusting the', } } }
See also
Avatar caching, Django’s cache framework
Avatar caching¶
In addition to caching of Django, Weblate performs caching of avatars. It is recommended to use a, }, }
Configure e-mail addresses¶
Weblate needs to send out e-mails on several occasions, and these e-mails should
have a correct sender address, please configure
SERVER_EMAIL and
DEFAULT_FROM_EMAIL to match your environment, for example:
SERVER_EMAIL = '[email protected]' DEFAULT_FROM_EMAIL = '[email protected]'
Allowed hosts setup¶
Django 1.5 and newer require
ALLOWED_HOSTS to hold a list of domain names
your site is allowed to serve, leaving it empty will block any requests.
See also
Django secret key¶
The
SECRET_KEY setting is used by Django to sign cookies, and you should
really generate your own value rather than using the one from the example setup.
You can generate a new key using
weblate/examples/generate-secret-key shipped
with Weblate.
See also
Home directory¶
The home directory for the user running Weblate should exist and be writable by this user. This is especially needed if you want to use SSH to access private repositories, but Git might need to access this directory as well (depending on the Git version you use).
You can change the directory used by Weblate in
settings.py, for
example to set it to
configuration directory under the Weblate tree:
os.environ['HOME'] = os.path.join(BASE_DIR, 'configuration')
Note
On Linux, and other UNIX like systems, the path to user’s home directory is
defined in
/etc/passwd. Many distributions default to a non-writable
directory for users used for serving web content (such as
apache,
www-data or
wwwrun, so you either have to run Weblate under
a different user, or change this setting.
See also
Template loading¶
It is recommended to use a cached template loader for Django. It caches parsed
templates and avoids the need to. This is now automatically done by Background tasks using Celery and covers following tasks:
- Configuration health check (hourly).
- Committing pending changes (hourly), see Lazy commits and
commit_pending.
- Updating component alerts (daily).
- Update remote branches (nightly), see
AUTO_UPDATE.
- Translation memory backup to JSON (daily), see
dump_memory.
- Fulltext and database maintenance tasks (daily and weekly taks), see
cleanuptrans.
Changed in version 3.2: Since version 3.2, the default way of executing these tasks is using Celery and Weblate already comes with proper configuration, see Background tasks using Celery.
Running server¶
You will need several services to run Weblate, the recommended setup consists of:
- Database server (see Database setup for Weblate)
- Cache server (see Enable caching)
- Frontend web server for static files and SSL termination (see Serving static files)
- Wsgi server for dynamic content (see Sample configuration for NGINX and uWSGI)
- Celery for executing background tasks (see Background tasks using Celery)
Note
There are some dependencies between the services, for example cache and database should be running when starting up Celery or uwsgi processes.
In most cases, you will run all services on single (virtual) server, but in
case your installation is heavy loaded, you can split up the services. The only
limitation on this is that Celery and Wsgi servers need access to
DATA_DIR.
Running web server¶
Running Weblate is not different from running any other Django based program. Django is usually executed as uWSGI or fcgi (see examples for different webservers below).
For testing purposes, you can use the built-in web server in Django:
./manage.py runserver
Warning
Do not use this in production, as this has severe performance limitations.
Serving static files¶
Changed in version 2.4: Prior to version 2.4, Weblate didn’t properly use the Django static files framework and the setup was more complex.
Django needs to collect its static files in a single directory. To do so,
execute
./manage.py collectstatic --noinput. This will copy the static
files into a directory specified by the
STATIC_ROOT setting (this defaults to
a
static directory inside
DATA_DIR).
It is recommended to serve static files directly from your web server, you should use that for the following paths:
/static/
- Serves static files for Weblate and the admin interface (from defined by
STATIC_ROOT).
/media/
- Used for user media uploads (e.g. screenshots).
/favicon.ico
- Should be rewritten to rewrite a rule to serve
/static/favicon.ico
/robots.txt
- Should be rewritten to rewrite a rule to serve
/static/robots.txt
See also
Deploying Django, Deploying static files
Content security policy¶
The default Weblate configuration enables
weblate.middleware.SecurityMiddleware
middleware which sets security related HTTP headers like
Content-Security-Policy
or
X-XSS-Protection. These are by default set up to work with Weblate and it’s
configuration, but this might clash with your customization. If that is the
case, it is recommended to disable this middleware and set these headers
manually.
Sample configuration for Apache¶
The following configuration runs Weblate as WSGI, you need to have enabled
mod_wsgi (available as
webl
The following configuration runs Weblate in Gunicorn and Apache 2.4
(available as
weblate/examples/apache.gunicorn.conf):
# # VirtualHost for weblate using gunicorn on localhost:8000 # # This example assumes Weblate is installed in /usr/share/weblate # # <VirtualHost *:443>
To run production webserver, use the wsgi wrapper installed with Weblate (in
virtual env case it is installed as
~/weblate-env/lib/python3.7/site-packages/weblate/wsgi.py). Don’t
forget to set the Python search path to your virtualenv as well (for example
using
virtualenv = /home/user/weblate-env in uWSGI).
The following configuration runs Weblate as uWSGI under the NGINX webserver.
Configuration for NGINX (also available as
webl
weblate/examples/weblate.uwsgi.ini):
[uwsgi] plugins = python3 master = true protocol = uwsgi socket = 127.0.0.1:8080 wsgi-file = /home/weblate/weblate-env/lib/python3.7/site-packages/weblate/wsgi.py # Add path to Weblate checkout if you did not install # Weblate by pip # python-path = /path/to/weblate # In case you're using virtualenv uncomment this: virtualenv = /home/weblate/weblate-env # Needed for OAuth/OpenID buffer-size = 8192 # Increase number of workers for heavily loaded sites # workers = 6 # Child processes do not need file descriptors close-on-exec = true # Avoid default 0000 umask umask = 0022 # Run as weblate user uid = weblate gid = weblate # Enable harakiri mode (kill requests after some time) # harakiri = 3600 # harakiri-verbose = true # Enable uWSGI stats server # stats = :1717 # stats-http = true # Do not log some errors caused by client disconnects ignore-sigpipe = true ignore-write-errors = true disable-write-exception = true
See also
How to use Django with uWSGI
Running Weblate under path¶
Changed in version 1.3: This is supported since Weblate 1.3.
A sample Apache configuration to serve Weblate under
/weblate. Again using
mod_wsgi (also available as
webl/weblate WSGIPassAuthorization On <Directory /usr/share/weblate/weblate> <Files wsgi.py> Require all granted </Files> </Directory> </VirtualHost>
Additionally, you will have to adjust
weblate/settings.py:
URL_PREFIX = '/weblate'
Background tasks using Celery¶
New in version 3.2.
Weblate uses Celery to process background tasks. The example settings come with eager configuration, which does process all tasks in place, but you want to change this to something more reasonable for a production setup.
A typical setup using Redis as a backend looks like this:
CELERY_TASK_ALWAYS_EAGER = False CELERY_BROKER_URL = 'redis://localhost:6379' CELERY_RESULT_BACKEND = CELERY_BROKER_URL
You should also start the Celery worker to process the tasks and start scheduled tasks, this can be done directly on the command line (which is mostly useful when debugging or developing):
./weblate/examples/celery start ./weblate/examples/celery stop
Most likely you will want to run Celery as a daemon and that is covered by
Daemonization. For the most common Linux setup using
systemd, you can use the example files shipped in the
examples folder
listed below.
Systemd unit to be placed as
/etc/systemd/system/celery-weblate.service:
[Unit] Description=Celery Service (Weblate) After=network.target [Service] Type=forking User=weblate Group=weblate EnvironmentFile=/etc/default/celery-weblate WorkingDirectory=/home/weblate RuntimeDirectory=celery RuntimeDirectoryPreserve=restart LogsDirectory=celery ExecStart=/bin/sh -c '${CELERY_BIN} multi start ${CELERYD_NODES} \ -A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} \ --logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'} ${CELERYD_OPTS}' [Install] WantedBy=multi-user.target
Environment configuration to be placed as
/etc/default/celery-weblate:
# Name of nodes to start CELERYD_NODES="celery notify search memory backup" # Absolute or relative path to the 'celery' command: CELERY_BIN="/home/weblate/weblate-env/bin/celery" # App instance to use # comment out this line if you don't use an app CELERY_APP="weblate" # Extra command-line arguments to the worker, # increase concurency if you get weblate.E019 CELERYD_OPTS="--beat:celery --concurrency:celery=4 --queues:celery=celery --prefetch-multiplier:celery=4 \ --concurrency:notify=4 --queues:notify=notify --prefetch-multiplier:notify=4 \ --concurrency:search=1 --queues:search=search --prefetch-multiplier:search=2000 \ --concurrency:memory=1 --queues:memory=memory --prefetch-multiplier:memory=2000 \ --concurrency:backup=1 --queues:backup=backup" # Logging configuration # - %n will be replaced with the first part of the nodename. # - %I will be replaced with the current child process index # and is important when using the prefork pool to avoid race conditions. CELERYD_PID_FILE="/var/run/celery/weblate-%n.pid" CELERYD_LOG_FILE="/var/log/celery/weblate-%n%I.log" CELERYD_LOG_LEVEL="INFO" # Internal Weblate variable to indicate we're running inside Celery CELERY_WORKER_RUNNING="1"
Logrotate configuration to be placed as
/etc/logrotate.d/celery:
/var/log/celery/*.log { weekly missingok rotate 12 compress notifempty }
Weblate comes with built-in setup for scheduled tasks. You can however define
additional tasks in
settings.py, for example see Lazy commits.
You can use
celery_queues to see current length of Celery task
queues. In case the queue will get too long, you will also get configuration
error in the admin interface.
Note
The Celery process has to be executed under the same user as Weblate and the WSGI
process, otherwise files in the
DATA_DIR will be stored with
mixed ownership, leading to runtime issues.
Warning
The Celery errors are by default only logged into Celery log and are not visible to user. In case you want to have overview on such failures, it is recommended to configure Collecting error reports.
Monitoring Weblate¶
Weblate provides the
/healthz/ URL to be used in simple health checks, for example
using Kubernetes.
Collecting error reports¶
Weblate, as any other software, can fail. In order to collect useful failure states we recommend to use third party services to collect such information. This is especially useful in case of failing Celery tasks, which would otherwise only report error to the logs and you won’t get notified on them. Weblate has support for the following services:
Sentry¶
Weblate has built in support for Sentry. To use
it, it’s enough to set
SENTRY_DSN in the
settings.py:
SENTRY_DSN = "https://[email protected]/" (e.g. generate a fresh one using
rebuild_index. | http://docs.weblate.org/en/weblate-3.9.1/admin/install.html | CC-MAIN-2019-51 | en | refinedweb |
Back. Send me an email at jamie.iles@oracle.com with a resume and/or a github link if you're interested!
Should probably specify that sizeof(int)==4.
In a 64 bit architecture, an int would not be 8 bytes instead of 4 bytes in 32bits arch machine?
If so sizeof(int) is NOT 4 but 8.
Thanks
printf("%p\n", x+1);
What will this print? 0x7fffdfbf7f08
Not exactly...
sizeof(int) should be 8, not 4, on a 64-bit system.
You really should have specified sizeof (int) here.
Sure, on most 32 & 64 bit systems today it'll be 4, but, especially since you make a point of specifying a 64-bit system, it's certainly not a sure thing.
For #2 you need to specify whether this is an ILP64 system or an LP64 system.
The right answer to all four questions is: it is undefined behavior. p conversion specifier expects a pointer to void argument. As it is undefined behavior, everything could be printed, even the books of Shakespeare;)
Now if we correctly cast the arguments to a pointer to void, the right answer for the four questions is then: it depends as it is implementation defined. Not only the size of an int is implementation defined, and this even in 64-bit systems (some are reported to have 64-bit int). But also the p conversion specifier is specified by the C Standard to print in an "implementation-defined manner".
One can, but shouldn't, think "&x" as "x" being an instance of some struct. That way it makes sense to sizeof(x) == 5*sizeof(int)
> As far as I can see, there exists no 64-bit architecture which treats int as 8 bits wide
Some Cray machine on UNICOS were know to have 64-bit int.
See for example Table 3. in "9.1.2.2 Types" in this Cray reference manual:
docs.cray.com/books/004-2179-003/004-2179-003-manual.pdf
> explicitly stating it reminds you more than I'd like about how pointer arithmetic works
That's a very good reason! Besides, after answering, the quiz-taker can tell whether they had the answer essentially correct.
I remember being horribly confused by pointers and arrays when first learning C, because of the unfortunate "pun" of the "decay to pointer". For that reason, I avoid using the pun. For years I have written such code as
printf("%p\n", &x[0]);
printf("%p\n", &x[1]);
printf("%p\n", &x);
printf("%p\n", &x+1); // see note below
And I assume nobody would get the wrong answers if in C, arrays were a first-class data type, and if the syntax for array declarations weren't suggestively confusing.
Finally, even in C, if my intent were to treat an array as an object that I want to get to the end of, I would write &x[ARRAY_SIZE], where "#define ARRAY_SIZE 5" or something, rather than &x+1. Furthermore, if what I really want is the address of something that actually follows the array, then I might write
struct {
int x[5];
int y;
} s;
&s.y
(assuming it is possible to define the struct to map to memory without padding issues).
So basically, I would like to know in what situations clearer code than the example code cannot be written.
> on x86-64, an int is generally 4 bits wide, just like on a regular x86 system
Note that the size of an int is technically dependent on the COMPILER, not on the target machine. Of course, a compiler generally uses the target machine as one of the factors that influences the size.
Regarding the question of int size, you could keep from explicitly declaring the size of int by mentioning, just in passing, that the total array size was 20 bytes. That would not even impact the last question since you would have to know exactly how to calculate &x + 1 before it became any kind of prompt.
yup, sizeof(int) must be specifically specified to be 4 and compile specified to have packed arrays to support the explicit given answers.
Any optimization for speed at all will align the ints at word boundaries, so ints can be spaced 8 bytes apart, and &x+1 can will move forward 5*8 bytes...
printf("0x%p\n", x); will print 0x7fffdfbf7f00, your code will print 7fffdfbf7f00. You're no stranger to memory notation. alright.
All arrays are always packed. The ISO C standard requires this absolutely. Structs are an entirely different matter, but arrays are required to be stored as a contiguous series of items. This is why even the most hardened language lawyers over at comp.lang.c will happily divide the sizeof an array by the sizeof an element of that array in order to obtain the number of elements in that array (which makes it easy to write code that makes no assumptions about the array, so you only have one place to change the number of elements, and don't even need to rely on macros).
But that still leaves sizeof(int) == 4 as an assumption. It may be a reasonable one, but it's still an assumption, and is by no means guaranteed.
guest: Arrays are *always* packed in C. If the compiler wants to align 32-bit ints in arrays at 8-byte boundaries, it must return 8 for sizeof(int) - that is, the four padding bytes must be internal to the int.
When I got my first DEC Alpha with OSF/1 UNIX, the compiler treated an int as 64 bits. Later fixed under the renamed Digital UNIX when they moved to conform with the X/Open single UNIX Specification back to 32 bits.
It only caught me once, but that was enough.
This doesn't seem to be testing understanding of pointers so much as understanding obscure C pointer/array syntax semantics.
Basically, trying apply the & operator to a value that is already an address constant results in the address constant. It could just as easily default to 0, NULL, or whatever.
As many people stated, the answer to the second question is wrong. 64-bit system, sizeof(int) would return 8 not 4. the answer would be 0x7fffdfbf7f08
Do not be fooled.... The Size of data types is defined by the Model the installed Compiler uses and not the Machine Architecture..
Its not CORRECT to say that on 32 bit Arch... size of int is 4 bytes
and for 64 bit Arch it will be 8 bytes.
The 64 bit compiler models are available as follows:
LP64 - Long and Pointers are 64 bit (8 bytes)
ILP64 - Integer, Long and Pointers are 64 bit (8 bytes)
This question was asked on StackOverflow.
How is 0x7FFFDFBF7F00 different from 0x7fffdfbf7f00 other than one's in uppercase and the other in lower? You should correct your javascript, gentlemen ...
@erKURITA: You're right :) Consider yourself having gotten full credit.
@Felix: I'd encourage you to actually try this out on any reasonable system you have access to. On such a system (which is probably going to be an LP64 system), sizeof(int) is going to be 4.
All of your answers are wrong. The correct answers are:
00007FFFDFBF7F00
00007FFFDFBF7F04
00007FFFDFBF7F00
00007FFFDFBF7F14
printf with "%p" does not print the leading 0x. You would need "%#p" for that. Even then, all the letters would still be capitalized. To get the output your test considers correct you would need to use "%#x". This would make the letters lowercase and truncate the leading zeros to match your answers.
You missed one detail
different rules applies here:
void foo(int y[5]) // same as if 'void foo(int* y)'
{
printf("%p\n", &y);
printf("%p\n", &y+1);
}
I passed the test - but I consider this a tougher one:
#include <stdio.h>
typedef int mystery[5];
void foo(mystery a)
{
printf("%p\n", &a);
printf("%p\n", &a+1);
}
int main() {
mystery b;
foo(b);
return 0;
}
@Pausing Reality:
Well, my answers aren't "wrong" any more than your answers are wrong -- that's what my 64-bit Ubuntu machine produces!
The real issue here is that the behavior of %p isn't specified, and is implementation-defined, so yes, a more lenient checker would accept a number of forms.
Warning: Do not use &x+1 to reference memory or objects. This will probably cause stack corruption, object corruption, heap corruption, memory buffer overflow, or other bad behavior. | https://blogs.oracle.com/linux/the-ksplice-pointer-challenge-v2 | CC-MAIN-2019-51 | en | refinedweb |
Stack Based Navigation (Push/Pop) with Ionic Web Components
By Josh Morony
The introduction of Stencil and Ionic 4.x has led to a much larger focus on using routes for navigation. In the past, Ionic applications relied mostly on using a navigation stack with a push/pop style of navigation. If you are unfamiliar with how this style of navigation works, I would recommend reading one of my previous tutorials on navigation in Ionic/Angular.
This push/pop style of navigation makes a lot of sense for mobile applications. However, since Ionic is increasing support for Progressive Web Applications, and generally just supplying generic web components that can run anywhere on the web, it makes sense to support URL based routing which makes more sense in a web environment.
If you take a look at the Ionic PWA Toolkit, which is a sort of best practices starter template for building mobile applications with Ionic and Stencil, you will see that there are routes for navigation set up like this:
<ion-app> <main> <ion-router useHash={false}> <ion-route</ion-route> <ion-route</ion-route> <ion-nav></ion-nav> </ion-router> </main> </ion-app>
In order to navigate to a particular page, it is just a matter of activating the correct URL. If we wanted to go to the profile page, we would just need to navigate to:
This is different to what would typically be done in the past, where we would navigate by calling a particular function and triggering code like this:
this.navCtrl.push('MyProfilePage');
Even though routing with URLs is the default supplied in the starter project, and it probably is a good idea to use it if you plan on releasing your application for the web, you can still use this stack-based navigation if you want to. Perhaps you just intend to build your Ionic project as a native mobile application and don’t need to worry about URLs… or maybe you just really like push/pop style navigation.
In this tutorial, we are going to cover how to use push/pop style navigation in an Ionic/Stencil project. We will be using the Ionic PWA Toolkit as a base, and then we will remove the URL routing from that and add in push/pop navigation. Although this example is specifically using Stencil, these concepts will apply no matter where you are using Ionic’s web components. However, keep in mind that these concepts likely won’t apply to Ionic/Angular applications since we will have the
ionic-angular package to help with navigation.
IMPORTANT: We are using an early version of the Ionic 4.x components. Ionic 4 has not officially been released yet. You should not use this in a production application yet (unless you are some kind of web dev maverick).
Generate a New Ionic/Stencil Project
We are going to create a new Ionic/Stencil project by cloning the Ionic PWA Toolkit repo. Just run the following commands to get your project up and running:
git clone ionic-push-pop
cd ionic-push-pop
git remote rm origin
npm install
Once everything has finished installing, you can run your project throughout development with:
npm run dev
Setting up Push/Pop Navigation in an Ionic/Stencil Project
All we need to do to set up push/pop navigation in this project is to modify the template of the root component. This is where the routes are set up currently, but we want to remove those and just rely on
<ion-nav> instead.
Modify the
renderfunction in src/components/my-app/my-app.tsx to reflect the following:
render() { return ( <ion-app> <ion-nav</ion-nav> </ion-app> ); }
It is important to supply a
root component to
<ion-nav> here, which will be the default component that is displayed and also the root of your navigation stack.
Pushing and Popping
Now we are going to take a look at how to push and pop pages. In case you are unfamiliar and haven’t read the tutorial I linked earler, pushing will add a new page onto the navigation stack (making it the current page), and popping will remove the most recent page from the navigation stack (making the previous page the new current page).
There is actually a couple of ways that we can do this. The cool thing about how these web components are set up is that you don’t even need to write any functions or logic to navigate between pages, you can just use these web components:
<ion-nav-push>
<ion-nav-pop>
<ion-nav-set-root>
We are going to modify the home page to demonstrate this.
Modify the
renderfunction src/components/app-home/app-home.tsx to reflect the following:-content> </ion-page> ); }
If we want to push our profile page, all we need to do is this:
<ion-nav-push <ion-button>Push Profile Page</ion-button> </ion-nav-push>
Now when this button is clicked, it will push the
app-profile component. If we instead wanted to make the profile component the new root page, we could do that by using:
<ion-nav-set-root <ion-button>Make Profile Page Root Component</ion-button> </ion-nav-set-root>
Now let’s see how we would pop back to the home page after pushing to the profile page.
Modify the
renderfunction in src/components/app-profile/app-profile.tsx to reflect the following:
render() { return ( <ion-page> <ion-header> <ion-toolbar <ion-buttons <ion-back-button</ion-back-button> </ion-buttons> <ion-title>Ionic PWA Toolkit</ion-title> </ion-toolbar> </ion-header> <ion-content> <ion-nav-pop> <ion-button>Pop</ion-button> </ion-nav-pop> <p>Pop will only work if this page was pushed, not if it was set as the new root component</p> </ion-content> </ion-page> ); }
All we need to do to pop off the current view is:
<ion-nav-pop> <ion-button>Pop</ion-button> </ion-nav-pop>
However, keep in mind that if you arrived on the profile page by setting the profile page as the “root” page, then you won’t be able to “pop” back to the previous page (because the profile page is now the base of the stack and there is nothing left to pop back to).
Navigating Programmatically
Being able to use the web components directly in the HTML to trigger page navigation is pretty cool, but sometimes we will also want to just navigate programmatically as well. This is really simple to do. All we need to do is grab a reference to the
<ion-nav> and then call the appropriate method on it (e.g. push or pop). Let’s take a look at an example.
Modify src/components/app-home/app-home.tsx to reflect the following:
import { Component } from '@stencil/core'; const nav = document.querySelector('ion-nav'); @Component({ tag: 'app-home', styleUrl: 'app-home.scss' }) export class AppHome { goToProfile(){ nav.push('app-profile'); }-button onClick={() => this.goToProfile()}>Push Profile Page with Function</ion-button> </ion-content> </ion-page> ); } }
We’ve set up a button that will trigger the
goToProfile function. This function just calls the
push method on
nav which is a constant variable we set up at the top of this component. All we need to do is supply it with the name of the component that we want to push.
Summary
If you have a background with Ionic/Angular, then this style of navigation will likely feel pretty comfortable to you. In terms of which approach you should use, I would say in general to use the URL route approach if you are going to be deploying your application to the web, and push/pop style navigation if it is just going to be deployed as a native mobile application. In the end, there is no hard and fast rules – use what works best for you. | https://www.joshmorony.com/stack-based-navigation-push-pop-with-ionic-web-components/ | CC-MAIN-2019-51 | en | refinedweb |
Compiling and Running Your First Program
- Compiling Your Program
- Running Your Program
- Understanding Your First Program
- Displaying the Values of Variables
- Exercises
In this chapter, you are introduced to the C language so that you can see what programming in C is all about. What better way to gain an appreciation for this language than by taking a look at an actual program written in C?
To begin with, you'll choose a rather simple examplea program that displays the phrase "Programming is fun." in your window. Program 3.1 shows a C program to accomplish this task.
Program 3.1 Writing Your First C Program
#include <stdio.h> int main (void) { printf ("Programming is fun.\n"); return 0; }
In the C programming language, lowercase and uppercase letters are distinct. In addition, in C, it does not matter where on the line you begin typingyou can begin typing your statement at any position on the line. This fact can be used to your advantage in developing programs that are easier to read. Tab characters are often used by programmers as a convenient way to indent lines.
Compiling Your Program
Returning to your first C program, you first need to type it into a file. Any text editor can be used for this purpose. Unix users often use an editor such as vi or emacs.
Most C compilers recognize filenames that end in the two characters "." and "c" as C programs. So, assume you type Program 3.1 into a file called prog1.c. Next, you need to compile the program.
Using the GNU C compiler, this can be as simple as issuing the gcc command at the terminal followed by the filename, like this:
$ gcc prog1.c $
If you're using the standard Unix C compiler, the command is cc instead of gcc. Here, the text you typed is entered in bold. The dollar sign is your command prompt if you're compiling your C program from the command line. Your actual command prompt might be some characters other than the dollar sign.
If you make any mistakes keying in your program, the compiler lists them after you enter the gcc command, typically identifying the line numbers from your program that contain the errors. If, instead, another command prompt appears, as is shown in the preceding example, no errors were found in your program.
When the compiler compiles and links your program, it creates an executable version of your program. Using the GNU or standard C compiler, this program is called a.out by default. Under Windows, it is often called a.exe instead. | http://www.informit.com/articles/article.aspx?p=327842 | CC-MAIN-2019-51 | en | refinedweb |
Creating Animation with Java
- Animating a Sequence of Images
- Sending Parameters to the Applet
- Workshop: Follow the Bouncing Ball
- Summary
- Q&A
- Quiz
- Activities
Whether you are reading this book in 24 one-hour sessions or in a single 24-hour-long-bring-me-more-coffee-can't-feel-my-hand-are-you-going-to-finish-that-donut marathon, you deserve something for making it all this way. Unfortunately, Sams Publishing declined my request to buy you a pony, so the best I can offer as a reward is the most entertaining subject in the book: animation.
At this point, you have learned how to use text, fonts, color, lines, polygons, and sound in your Java programs. For the last hour on Java's multimedia capabilities, and the last hour of the book, you will learn how to display image files in GIF and JPEG formats in your programs and present them in animated sequences. The following topics will be covered:
Using Image objects to hold image files
Putting a series of images into an array
Cycling through an image array to produce animation
Using the update() method to reduce flickering problems
Using the drawImage() command
Establishing rules for the movement of an image
Animating a Sequence of Images
Computer animation at its most basic consists of drawing an image at a specific place, moving the location of the image, and telling the computer to redraw the image at its new location. Many animations on Web pages are a series of image files, usually .GIF or .JPG files that are displayed in the same place in a certain order. You can do this to simulate motion or to create some other effect.
The first program you will be writing today uses a series of image files to create an animated picture of the Anastasia Island Lighthouse in St. Augustine, Florida. Several details about the animation will be customizable with parameters, so you can replace any images of your own for those provided for this example.
Create a new file in your word processor called Animate.java. Enter Listing 24.1 into the file, and remember to save the file when you're done entering the text.
Listing 24.1 The Full Text of Animate.java
1: import java.awt.*; 2: 3: public class Animate extends javax.swing.JApplet 4: implements Runnable { 5: 6: Image[] picture = new Image[6]; 7: int totalPictures = 0; 8: int current = 0; 9: Thread runner; 10: int pause = 500; 11: 12: public void init() { 13: for (int i = 0; i < 6; i++) { 14: String imageText = null; 15: imageText = getParameter("image"+i); 16: if (imageText != null) { 17: totalPictures++; 18: picture[i] = getImage(getCodeBase(), imageText); 19: } else 20: break; 21: } 22: String pauseText = null; 23: pauseText = getParameter("pause"); 24: if (pauseText != null) { 25: pause = Integer.parseInt(pauseText); 26: } 27: } 28: 29: public void paint(Graphics screen) { 30: super.paint(screen); 31: Graphics2D screen2D = (Graphics2D) screen; 32: if (picture[current] != null) 33: screen2D.drawImage(picture[current], 0, 0, this); 34: } 35: 36: public void start() { 37: if (runner == null) { 38: runner = new Thread(this); 39: runner.start(); 40: } 41: } 42: 43: public void run() { 44: Thread thisThread = Thread.currentThread(); 45: while (runner == thisThread) { 46: repaint(); 47: current++; 48: if (current >= totalPictures) 49: current = 0; 50: try { 51: Thread.sleep(pause); 52: } catch (InterruptedException e) { } 53: } 54: } 55: 56: public void stop() { 57: if (runner != null) { 58: runner = null; 59: } 60: } 61: }
Because animation is usually a process that continues over a period of time, the portion of the program that manipulates and animates images should be designed to run in its own thread. This becomes especially important in a Swing program that must be able to respond to user input while an animation is taking place. Without threads, animation often takes up so much of the Java interpreter's time that the rest of a program's graphical user interface is sluggish to respond.
The Animate program uses the same threaded applet structure that you used during Hour 19, "Creating a Threaded Program." Threads are also useful in animation programming because they give you the ability to control the timing of the animation. The Thread.sleep() method is an effective way to determine how long each image should be displayed before the next image is shown.
The Animate applet retrieves images as parameters on a Web page. The parameters should have names starting at "image0" and ending at the last image of the animation, such as "image3" in this hour's example. The maximum number of images that can be displayed by this applet is six, but you could raise this number by making changes to Lines 6 and 13.
The totalPicture integer variable determines how many different images will be displayed in an animation. If less than six image files have been specified by parameters, the Animate applet will determine this during the init() method when imageText equals null after Line 15.
The speed of the animation is specified by a "pause" parameter. Because all parameters from a Web page are received as strings, the Integer.parseInt() method is needed to convert the text into an integer. The pause variable keeps track of the number of milliseconds to pause after displaying each image in an animation.
As with most threaded programs, the run() method contains the main part of the program. A while (runner == thisThread) statement in Line 44 causes Lines 4551 to loop until something causes these two Thread objects to have different values.
The first thing that happens in the while loop is a call to the applet's repaint() method. This statement requests that the applet's paint() method be called so that the screen can be updated. Use repaint() any time you know something has changed and the display needs to be changed to bring it up to date. In this case, every time the Animate loop goes around once, a different image should be shown.
NOTE
In Java, you can never be sure that calling repaint() will result in the component or applet window being repainted. The interpreter will ignore calls to repaint() if it can't process them as quickly as they are being called, or if some other task is taking up most of its time.
The paint() method in Lines 2934 contains the following statements:
Graphics2D screen2D = (Graphics2D) screen; if (picture[current] != null) screen2D.drawImage(picture[current], 0, 0, this);
First, a Graphics2D object is cast so that it can be used when drawing to the applet window. Next, an if statement determines whether the Image object stored in picture[current] has a null value. When it does not equal null, this indicates that an image is ready to be displayed. The drawImage() method of the screen2D object displays the current Image object at the (x,y) position specified.
NOTE
The paint() method of this applet does not call the paint() method of its superclass, unlike some of the other graphical programs in the book, because it makes the animated sequence look terrible. The applet's paint() method clears the window each time it is called, which is OK when you're drawing a graphical user interface or some other graphics that don't change. However, clearing it again and again in a short time causes an animation to flicker.
The this statement sent as the fourth argument to drawImage() enables the program to use a class called ImageObserver. This class tracks when an image is being loaded and when it is finished. The JApplet class contains behavior that works behind the scenes to take care of this process, so all you have to do is specify this as an argument to drawImage() and some other methods related to image display. The rest is taken care of for you.
An Image object must be created and loaded with a valid image before you can use the drawImage() method. The way to load an image in an applet is to use the getImage() method. This method takes two arguments: the Web address or folder that contains the image file and the file name of the image.
The first argument is taken care of with the getCodeBase() method, which is part of the JApplet class. This method returns the location of the applet itself, so if you put your images in the same folder as the applet's class file, you can use getCodeBase(). The second argument should be a .GIF file or .JPG file to load. In the following example, a turtlePicture object is created and an image file called Mertle.gif is loaded into it:
Image turtlePicture = getImage(getCodeBase(), "Mertle.gif");
NOTE
As you look over the source code to the Animate applet, you might wonder why the test for a null value in Line 31 is necessary. This check is required because the paint() method may be called before an image file has been fully loaded into a picture[] element. Calling getImage() begins the process of loading an image. To prevent a slowdown, the Java interpreter continues to run the rest of the program while images are being loaded.
Storing a Group of Related Images
In the Animate applet, images are loaded into an array of Image objects called pictures. The pictures array is set up to handle six elements in Line 6 of the program, so you can have Image objects ranging from picture[0] to picture[5]. The following statement in the applet's paint() method displays the current image:
screen.drawImage(picture[current], 0, 0, this);
The current variable is used in the applet to keep track of which image to display in the paint() method. It has an initial value of 0, so the first image to be displayed is the one stored in picture[0]. After each call to the repaint() statement in Line 45 of the run() method, the current variable is incremented by one in Line 46.
The totalPictures variable is an integer that keeps track of how many images should be displayed. It is set when images are loaded from parameters off the Web page. When current equals totalPictures, it is set back to 0. As a result, current cycles through each image of the animation, and then begins again at the first image. | http://www.informit.com/articles/article.aspx?p=30419&seqNum=6 | CC-MAIN-2017-30 | en | refinedweb |
. If the appropriate user or group does not already appear, use “Add…”.
- In the “Apply to:” drop-down, select “This namespace and subnamespaces”
- In the Allow column, select Remote Enable
- Check “Apply these permissions to objects and/or containers within this container only” permissions to (“john” in my walkthrough) on the client.
Start Hyper-V Manager from Administrative Tools on the Control Panel. Enter appropriate administrative credentials if UAC is enabled and the account is not an administrator on the client.
Click Connect to Server and enter the name of the remote machine, accepting the EULA if this is a pre-release version of Hyper-V.
Watch in even more awe than you did in part 2 as you get a screen like below 😉 Here I’m managing jhoward-hpu which is the full installation, and jhoward-hp2 which is the server core installation. Wow! I need some time off!
Cheers,
John.
Update 14th Nov 2008. I've just released a script which does all this configuration in one or two command lines: HVRemote
Hyper-V Management Console on Vista x64
Hyper-V Monitor Gadget for Windows Sidebar
Hola Una herramienta imprescindible para configurar los servidores con Hyper-V para que se puedan administra.
Thanks,
John.
So after even more feedback and questions, part 4 of this series provides the walkthrough steps necessary.
Thanks,
John.
Evan - thanks for the feedback 🙂.
Cheers,
John..
Thanks,
John.
George - did you reboot the server after applying the AZMan changes?
Thanks,
John.).
Thanks,
John.
Peter/Lance - finally got a chance to update it.
Thanks,
John.
A noob/freshman - There are so many things wrong here. First, we do not support Hyper-V running as a nested Hypervisor. You should run it on bare metal. As for the namespace not being present, the most likely cause is you have not enabled the Hyper-V role. How are you determining it was successfully installed? (And you go on to say physical computer, yet you say Hyper-V on 2008 is running in a VM. I'm confused what is what). Why are you running Server 2008, not 2008 R2, 2012 or even 2012 R2 Preview for Hyper-V? And finally.... why are you doing the configuration manually? It would be FAR easier to use HVRemote - code.msdn.microsoft.com/HVRemote
Thx, John.).
Thanks,
JOhn.
Aujourd'hui deux outils pour Hyper-V. Pas tout neufs, mais extrêmement utiles. Le premier vous servira
In my last post on installing Hyper-V for my home setup I said I had a number of issues. One was
Well guys ... kalo dah ada yang coba Hyper V ... let's disccuss this .. i was trying to install Hyper
Announcing "HVRemote"...., a tool to "automagically" configure Hyper-V Remote Management.
Timbo - I'm pretty sure you'll see this error if you have older bits on the management computer. Are you sure you're running RTM bits on both server and client (950050 for server and 952627 for vista sp1 client).
Thanks,
John.
@PBaldwin
In my experience, you typically see things like this due to time synchronisation in a domain not operating correctly. Is there a difference of more than a minute or so between the server core machine and the management client?
Thanks,
John.
In my last post on installing Hyper-V for my home setup I said I had a number of issues. One was
PingBack from
Toby - HVRemote only deals with Hyper-V management, not other administrative capabilities such as the ones you list. The best way to diagnose is if you run the latest (0.7) version of hvremote with the /target:otherboxname parameter on both boxes (client and server) to diagnose.
Thanks,
John.
Anthony - are you sure you followed step 2B in part 1, and noticed I updated the above post for 12B immediately before step 13.
That all said, I really strongly recommend that unless you have a need to perform the steps manually, the use of hvremote will save you a lot of pain.
Thanks,
John.?
Thanks,
John.
Tim - 18004 is RC1 (IIRC). RTM release is 18016. Apply the KB articles I mentioned above to both sides, and the problem should go away.
Thanks,
John..
Thanks,
John.
Michael - can you post the full output of hvremote /show /target:othercomputername from both boxes. Also info of whether you have firewalls/routers between the client and server, or whether there is any 3rd party AV or firewall software installed on either box.
Thanks,
John.
It has been a little quiet on the blog front, but sometimes, at least in this case, I hope I've come
For those that cannot expand the "root" note, in Tony's case, this was resolved by not having followed the instructions on the Vista machine to enable anonymous logon remote access in DCOM Security (step 15 above).
Thanks,
John.
Thanks,
John.
Simone - can you post up the output of hvremote /show on both boxes, plus the output of a "ping -4 otherboxname" to try to diagnose.
Thanks,
John.
@Well.... can you try using HVRemote. This is much simpler than trying to follow the steps manually.
John.
I can successfully remotely manage my Hyper-V Server 2012 Core in a workgroup environment. I can also remotely manage the disks on the Hyper-V server.
I wrote a quick 12-step tutorial (article and video) showing exactly what I did to get this working.
pc-addicts.com/12-steps-to-remotely-manage-hyper-v-server-2012-core
Hopefully this can help others who found this to be a very frustrating task.
-Chris.
Thanks,
John.
Hi Thomas
My apologies. Was giving a "lazy" answer 🙂. >>
🙂
Thanks,
John.
Ralph - Unless you have a seperate DC physically somewhere, you run into the chicken and egg problem. I would strongly recommend that you do not only run a single virtual DC on a Hyper-V machine and have the Hyper-V machine itself joined to that domain. While it technically can be done (with some caveats), it is not a supported scenario.
Thanks,
John.
🙂 (Although "American" curry takes a lot of getting used to after my "British" indian cuisine up-bringing). Glad you got it going.
Thanks,
John.
Thanks,
John.
Wesley
You should be able to log on with a cached domain account, or a local administrator to remove the box from the domain. They you just treat it as a workgroup to workgroup scenario. Alternately, create a matching local account on the domain joined server to the account on the client, passwords matching. Then again it should still be WG to WG configuration without any need for the server to contact the domain.
Thanks,
John.
No, 0.7 does not support WS2012 (it works somewhat by accident, but I strongly recommend you do not use it). I will be releasing a version which support Windows 8/WS2012 (and Hyper-V Server 2012, and for R2/Win7 and 2008/Vista) before GA. It's being tested now, but not ready to be made public.
Windows 7 can communicate to 2012 using the v1 WMI namespace, however, it is not recommended. Any of the new capabilities in 2012 will not be available unless you use a Windows 8 client with the newer Hyper-V Manager which uses the v2 namespace.
John.
So far, I’ve covered the following Hyper-V Remote Management scenarios: Workgroup: Vista client to remote
Remote management of Server Core installations helps you. It prevents you from having to struggle withwbemunsecapp.exe
>> into Firewall.cpl to unlock the app.
>> Then reconnect in the UI, it should work.
Thanks,
John.....
Cheers,
John..
Thanks,
John.
David - I confess, I'm completely stumped. Do you get this for all groups and all users using net localgroup, or just the Distributed COM Users group?
Thanks,
John..
Thanks,
John. windowssystem32driversetchosts as a workaround to DNS to verify if that is indeed the cause.
Thanks,
John.,
John..
Thanks,! 🙂username listed as a "user" type right under the "Administrators(BUILTINAdministrators)" group.
I am conected to the hyperv server and see the "No virtual machines were found..." message. It just seems I am missing whatever permission is needed to create new VMs. 🙂
Christopher!
Then sacrifice a goat at the dark of the moon!!!
Your explanantions are tremendous John - but it is a tortuous process, is it not??? 🙂?.
Hyper-V Manager Client: 6.0.6001.18004
Not sure how to tell on the server side.?
Thanks,!
Be aware of the fact that by default the user account on thews08 server will expire. When this happens you will get the "RPC service unavailable"error.
John,.
Microsoft is NOT ready for this solution. This is only sentence I can say.
Any thoughts why I would get this error when following step 12:
C:Usersadministrator>net localgroup "Distributed COM Users" /add tpa01vh01dillon
System error 1376 has occurred.
The specified local group does not exist.
Thank you!. 🙂?
Thanks,
David.
John,.
David's problem is due to double quote formate. Just remove them and retype them again in the shell
Dear Mr. Jhon
thank you very much
i tried your script, but i have the same error
?"
like Mr. taylor
🙁
Solution for add user in "Distributed COM Users group" (STEP 12), type: net localgroup “Distributed COM Users” /add "jhoward-hp2john". Bye!!!
John,:Windowssystem32>cscript UsersThomasDesktopHThomas
INFO: Assuming /mode:client as the Hyper-V role is not installed
-------------------------------------------------------------------------------
DACL for COM Security Access Permissions
-------------------------------------------------------------------------------
Everyone (S-1-1-0)
Allow: LocalLaunch RemoteLaunch (7)
BUILTINPerformance Log Users (S-1-5-32-559)
Allow: LocalLaunch RemoteLaunch (7)
BUILTINDistributed COM Users (S-1-5-32-562)
Allow: LocalLaunch RemoteLaunch (7)
NT AUTHORITYANONYMOUS LOGON (S-1-5-7)
Allow: LocalLaunch RemoteLaunch
-------------------------------------------------------------------------------
Private Firewall Profile is active
Enabled: Microsoft Management Console (UDP)
Enabled: Microsoft Management Console (TCP)
INFO: Are running the latest version
and on the server with the firewall enabled:
Microsoft (R) Windows Script Host Version 5.7
Hyper-V Remote Management Configuration & Checkup Utility
John Howard, Microsoft Corporation.
Version 0.3 20th Nov 2008
INFO: Computername is VT-HYPER-V
INFO: Computer is in workgroup WORKGROUP
INFO: Current user is VT-HYPER-VAdministrator
INFO: Assuming /mode:server as the role is installed
INFO: This machine has the Hyper-V (v1) QFE installed (KB950050)
-------------------------------------------------------------------------------
DACL for WMI Namespace rootcimv2)
-------------------------------------------------------------------------------
DACL for WMI Namespace rootvirtualization)
-------------------------------------------------------------------------------
Contents of Authorization Store Policy
-------------------------------------------------------------------------------
Hyper-V Registry configuration:
- Store: msxml://C:ProgramDataMicrosoftWindowsHyper-VInitialStore.xml
- Service Application: Hyper-V services
Application Name: Hyper-V services
Operation Count: 33
100 - Read Service Configuration
105 - Reconfigure Service
200 - Create Virtual Switch
205 - Delete Virtual Switch
210 - Create Virtual Switch Port
215 - Delete Virtual Switch Port
220 - Connect Virtual Switch Port
225 - Disconnect Virtual Switch Port
230 - Create Internal Ethernet Port
235 - Delete Internal Ethernet Port
240 - Bind External Ethernet Port
245 - Unbind External Ethernet Port
250 - Change VLAN Configuration on Port
255 - Modify Switch Settings
260 - Modify Switch Port Settings
265 - View Switches
270 - View Switch Ports
275 - View External Ethernet Ports
280 - View Internal Ethernet Ports
285 - View VLAN Settings
290 - View LAN Endpoints
295 - View Virtual Switch Management Service
300 - Create Virtual Machine
305 - Delete Virtual Machine
310 - Change Virtual Machine Authorization Scope
315 - Start Virtual Machine
320 - Stop Virtual Machine
325 - Pause and Restart Virtual Machine
330 - Reconfigure Virtual Machine
335 - View Virtual Machine Configuration
340 - Allow Input to Virtual Machine
345 - Allow Output from Virtual Machine
350 - Modify Internal Ethernet Port
1 role assignment(s) were located
Role Assignment 'Administrator' (Targetted Role Assignment)
- All Hyper-V operations are selected
- There are 1 member(s) for this role assignment
- BUILTINAdministrators (S-1-5-32-544)
-------------------------------------------------------------------------------
Contents of Group Distributed COM Users
-------------------------------------------------------------------------------
1 member(s) are in Distributed COM Users
- VT-HYPER-VThomas
-------------------------------------------------------------------------------
DACL for COM Security Launch and Activation Permissions
-------------------------------------------------------------------------------
BUILTINAdministrators (S-1-5-32-544)
Allow: LocalLaunch RemoteLaunch LocalActivation RemoteActivation (31)
Everyone (S-1-1-0)
Allow: LocalLaunch LocalActivation (11)
BUILTINDistributed COM Users (S-1-5-32-562)
Allow: LocalLaunch RemoteLaunch LocalActivation RemoteActivation (31)
BUILTINPerformance Log Users (S-1-5-32-559)
Allow: LocalLaunch RemoteLaunch LocalActivation RemoteActivation (31)
-------------------------------------------------------------------------------
Firewall Settings for Hyper-V
-------------------------------------------------------------------------------
Public Firewall Profile is active
Enabled: Hyper-V (SPL-TCP-In)
Enabled: Hyper-V (RPC)
Enabled: Hyper-V (RPC-EPMAP)
Enabled: Hyper-V - WMI (Async-In)
Enabled: Hyper-V - WMI (TCP-Out)
Enabled: Hyper-V - WMI (TCP-In)
Enabled: Hyper-V - WMI (DCOM-In)
-------------------------------------------------------------------------------
Firewall Settings for Windows Management Instrumentation (WMI)
-------------------------------------------------------------------------------
Public Firewall Profile is active
Enabled: Windows Management Instrumentation (ASync-In)
Enabled: Windows Management Instrumentation (WMI-Out)
Enabled: Windows Management Instrumentation (WMI-In)
Enabled: Windows Management Instrumentation (DCOM-In)
Note: Above firewall settings are not required for Hyper-V Remote Management
INFO: Are running the latest version
John,:
Thomas...
Thanks,
eric??
John, 🙂
cheers,
-stusoftwarepoliciesmicrosoftwindowsdeviceinstallsettings ? 🙂
-stuwbemun.
cheers,
-stu 😉
Could be because of all the fiddeling.
10x man, keep it up, its blogs like this that make IT work. 😉
Hi John, 😉 🙂
Thank you for taking the time and maintaining this site.
Best regards
Paul Kayadoe.
Hi John,
I have a WORKGROUP Hyper-V (English) server and a WORKGROUP Vista Client (Italian).
I followed your instruction but I cannot connect the Hyper-V console to the server (it says that it can not connect to the RPC service) except when I disable the firewall on the server side. Disabling the firewall on the server, the client connects fine.
Using /show with HVRemote confirms that all firewall rules for Hyper-V and WMI are enabled.
Any idea?
Thank you,
Simone
I keep getting stuck at step 13, and cannot get any further.
When i right click on WMI Control/properties/security there is nothing in the box at all, not even root
In the general tab i get "Failed to connect to \WT-HYPERV because "Win32 Access is denied."
I have no idea how to get arround this, I've been sitting here for 4 hours tying to get this thing to work.
Hi,
Having a major problem getting this working and wonder if you can help. Basically I have a Hyper-V Server installed from a fresh ISO download taken yesterday and have allowed it to patch itself up via Windows Update.
Connecting to this is a Win7 RC (7100) box, logged in with a duplicate username/pass.
Have run all the script commands according to the 10-second guide and, whilst I can administer pretty much everything on the server, such as services, event logs, users, groups & disk management, trying to access Hyper-V blocks for 5-10 secs with "Connecting to Virtual Machine Management service...", then fails with the message "You might not have permission to perform this task".
Now I've noted that this message is subtly different to the usual "you do not have required permission to complete this task" so I'm not quite sure what's going on. What could prevent the Hyper-V manager from connecting to the VMM service when all other administrative functions are working fine?
The /show command with your script gives no issues at either end and I've tried completely disabling both firewalls (it's just a test setup anyway so no great problem with that) but to no avail.
I've even read through some of your old "pre-script" guides to check that elements such as the authorization manager are configured correctly and they appear to be.
At a total loss here, any advice appreciated 🙂
Been through the script. still get make sure virtual management service is running. must have been the fine printI I missed.
can rdp and everything else except hyper v console on windws 7. all features enabled.
John,
I wanted to thank you. Having used the other popular Hypervisor software for some time, I decided to see how things were in the Microsoft pond in regards to Hyper-V.
After reading up a good deal on configuring Server Core, I decided that I would give Hyper-V a shot running on Windows Core.
Knowing the task at hand would prove to be a learning experience I kept an open mind (and an open browser!).
After struggling with Server Manager and Disk management - still not resolved - I fired up the Hyper-V console and tried to connect to my Server. I was exasperated to see the "You are no authorized" message.
After a couple minutes of poking around I Bing'd the problem and found your site.
I read a little more and downloaded your script. Within minutes I was installing my first VM on Hyper-V.
Thank you so much for going above and beyond to help the Hyper-V community with your script.
John,
Thanks for all your hard work! I thought I would share my issues along the way, maybe it'll help someone.
I used two boxes, one Vista SP2 (client) and one Hyper-V Server 2008. Both are configured as being in a workgroup.
After running your 10s I still had to do following:
- run updates 941314, 952627, 970203 on the client
- update the Hosts file on both sides
- enable my Onecare firewall on the client to allow inbound traffic on port 135. (Thanks to Greg, this was one of those 'd*mn, I knew that!' problems.)
As I'm configuring this network for home and small office, I'd appreciate your take on this: I'd like to run a domain controller VM that also does DHCP, DNS and perhaps some filesharing for my network. Obviously I would love to add all boxes to the domain, including the Hyper-V server. Is this wise? (I had previously thought - because my AD would be extremely empty - that I'd be able to run it on the Hyper-V Server itself, but saw you advise against this..) Thanks in advance.
Regards,
Ralph.
Hey John, this is a great article but I am stuck. I have a dev Hyper-V box that was unplugged from the domain and shipped to me without being removed from the domain. Now it is in my Dev lab with no connectivity to the domain. When I try to add the account from my lab workgroup to the DCOM Users I get a "The trust relationship between this workstation and the primary domain failed."
How do I get around this?
I'm in the finish line, and the last step is driving me crazy.
I was trying to edit InitialStore.xml at the server (Win2008 R2) by invoking notepad in an administrator level command prompt.
When I'm trying to save the updated file it says : "Access denied!"
This message is the true symbolic meaning to Microsoft's virtualization strategy. "It's free but You can't use it."
I spent so much time to get this working, that I wont stop near the finish line. Using Win2008 R2 Hyper-V and Vista SP2.
Your script warns about "Cannot connect to rootcimv2 on server"
Thanks for hvremote.wsf !
Istvan
great work but why is it so difficult to remote manage a Hyper V server. The time I have spent messing around trying to get this working is a bit of a joke. Both Microsofts main rivals have a product that instals and is manageable without any fuss ? please please sort this out
Hey Guys, I cannt add my User to some group in Core Hyper-V server, but after run in "cmd.exe" powershell and try in there type command to add user... All works.. Thanks All, sorry for my bad English....
Wow! I'm so unimpressed with how difficult it is to setup remote management of core Hyper-V. I currently use VMware ESX and wanted to see how Hyper-V compares. When I first used ESX & vSphere, I had it up and running in around 20 minutes with full remote management from any Windows box I choose. I had a VM running Windows in 30 minutes. After 3 hours mucking around with core Hyper-V I'm getting a little frustrated. 🙁 Fortunately I do have a Windows Server 2008 R2 Enterprise license, so I will see if that's any better.
You forgot to repeatedly remind me to make sure my passwords were the same on client and server .... shame on you ... 😉
"Wow! I need some time off! " said the author of the blog, a gentleman who seemingly sports the title:
"Senior Program Manager in the Hyper-V team at Microsoft"
This is perhaps the richest definition of irony I've ever seen. The hyper-v manager DOESN'T WORK without some major tweaking. And when, after much trial and tribulation one does get the tool to connect, the virtual disk creator hangs - until you kill it manually. Perhaps this forum is not the time/place to plug for a vacation given the hardship your release is causing.
Thank you for this guide.
After I removed that d..... stored password, I could connect via Hyper-V Manager.
This is rediculous... all this to get a virtual platform working? ESX requires 'certain' hardware also, so that's a no go! I think someone needs to re-write the VM platform book, and create a 'one size fits all' HVRemote.
Question is this tool supports Hyper-v server 2012
Could it be used foc connection windows7 -> hyper-v server 2012
Thank you very much for answers.
Hello John!
First of all, I'm an absolute and complete beginner in Hyper-V, who has no experience in this field, and I have tried to follow the steps, which are mentioned on this page.
Here's my (short) story: I'm currently running Hyper-V Manager as a client (with administrative rights / required permissions of course) an my (physical) machine (OS: Windows 7 Professional x86/32bit) and I've also installed Hyper-V on a virtual machine (OS: Windows Server 2008 SP1 Core x64, that is being used as a server. This "virtual server" is hosted on a PC/machine running "VMWare vSphere Hypervisor (EXSi) 5.1 Update 1". (VMware vSphere Client is working well on Windows 7 Prof. 32bit and I can also log into the "Windows Server 2008 core Virtual Machine" with no hassles. I can log into the virtual machine via a "Remote Desktop Connection" with ease.) My issue is that in the properties window of "WMI Control" in the Security-tab, I don't have a namespace called "Rootvirtualization" despite the fact that Hyper-V was successfully installed on my physical computer. According to the Windows Powershell, "Rootvirtualization" is even considered an invalid WMI query. Is there a "rookie-friedly" way to easily create/restore this "virtualization" namespace, so I can add the apropriate permission(s)?
Thank you in advance!
Yours sincerely
A freshman to Hyper-V
Why doesn't Microsoft release something like vmware did with their client? | https://blogs.technet.microsoft.com/jhoward/2008/03/30/part-3-hyper-v-remote-management-you-do-not-have-the-required-permission-to-complete-this-task-contact-the-administrator-of-the-authorization-policy-for-the-computer-computername/ | CC-MAIN-2017-30 | en | refinedweb |
ashwinderSinghMembers
Content count5
Joined
Last visited
Community Reputation115 Neutral
About YashwinderSingh
- RankNewbie created a sample code for the same problem I was facing to set alpha and have given the complete code above
YashwinderSingh posted a topic in Graphics and GPU ProgrammingI am trying to set alpha value of color as color.a = 0.2 in my pixel shader but it is not showing any effect. If I set color.r, color.g, color.b then they work fine according to the values set in the pixel shader. Simple pixel shader code is given below that I am using sampler2D ourImage : register(s0); float4 main(float2 locationInSource : TEXCOORD) : COLOR { float4 color = tex2D( ourImage , locationInSource.xy); color.a = 0.2; return color; } My complete rendering code is as below using System;using System.Drawing; using System.IO; using System.Runtime.InteropServices; using System.Text; using System.Threading; using System.Windows; using System.Windows.Interop; using SlimDX; using SlimDX.Direct3D9; using Point = System.Windows.Point; namespace AlphaBlendTesting { /// <summary> /// Interaction logic for MainWindow.xaml /// </summary> public partial class MainWindow : Window { #region Private Variables private Device _device; private VertexBuffer _vertexBuffer; private static VertexDeclaration _vertexDeclaration; private Texture _texture; private Surface _renderTarget; #endregion public MainWindow() { InitializeComponent(); this.Loaded += MainWindow_Loaded; } void MainWindow_Loaded(object sender, RoutedEventArgs e) { InitializeDevice(); InitializeVertices(); ThreadPool.QueueUserWorkItem(delegate { RenderEnvironment(); }); } private void InitializeDevice() { var direct3D = new Direct3D(); var windowHandle = new WindowInteropHelper(this).Handle; var presentParams = new PresentParameters { Windowed = true, BackBufferWidth = (int)SystemParameters.PrimaryScreenWidth, BackBufferHeight = (int)SystemParameters.PrimaryScreenHeight, // Enable Z-Buffer // This is not really needed in this sample but real applications generaly use it EnableAutoDepthStencil = true, AutoDepthStencilFormat = Format.D16, // How to swap backbuffer in front and how many per screen refresh BackBufferCount = 1, SwapEffect = SwapEffect.Copy, BackBufferFormat = direct3D.Adapters[0].CurrentDisplayMode.Format, PresentationInterval = PresentInterval.Default, DeviceWindowHandle = windowHandle }; _device = new Device(direct3D, 0, DeviceType.Hardware, windowHandle, CreateFlags.SoftwareVertexProcessing | CreateFlags.Multithreaded, presentParams); var shaderByteCode = ShaderBytecode.Compile(File.ReadAllBytes(@"EdgeBlenDing.fx"), "main", "ps_2_0", ShaderFlags.None); var pixelShader = new PixelShader(_device, shaderByteCode); _device.PixelShader = pixelShader; } private void InitializeVertices() { _renderTarget = _device.GetRenderTarget(0); var vertexBuffer = new VertexBuffer(_device, 6 * Vertex.SizeBytes, Usage.WriteOnly, VertexFormat.Normal, Pool.Managed); using (DataStream stream = vertexBuffer.Lock(0, 0, LockFlags.None)) { stream.WriteRange(BuildVertexData()); vertexBuffer.Unlock(); } _vertexBuffer = vertexBuffer; //Setting the vertex elements var vertexElems = new[] { new VertexElement(0, 0, DeclarationType.Float3, DeclarationMethod.Default, DeclarationUsage.Position , 0), new VertexElement(0, 12 , DeclarationType.Float2, DeclarationMethod.Default, DeclarationUsage.TextureCoordinate, 0), VertexElement.VertexDeclarationEnd }; //Declaring the vertex _vertexDeclaration = new VertexDeclaration(_device, vertexElems); SetRenderState(); _texture = Texture.FromFile(_device, @"C:\Users\Public\Pictures\Sample Pictures\Chrysanthemum.jpg"); } private void SetRenderState() { // Turn off culling, so we see the front and back of the triangle _device.SetRenderState(RenderState.CullMode, Cull.None); // Turn off lighting _device.SetRenderState(RenderState.Lighting, false); } private void RenderEnvironment() { while (true) { try { _device.BeginScene(); _device.Clear(ClearFlags.ZBuffer, Color.Blue, 1.0f, 0); _device.SetTexture(0, _texture); _device.SetRenderTarget(0, _renderTarget); _device.VertexDeclaration = _vertexDeclaration; _device.SetStreamSource(0, _vertexBuffer, 0, Vertex.SizeBytes); _device.DrawPrimitives(PrimitiveType.TriangleList, 0, 2); _device.EndScene(); //Show what we draw _device.Present(); } catch (Exception e) { } } } private Vertex[] BuildVertexData() { var bottomLeftVertex = new Point(0, 0); var topLeftVertex = new Point(0, 1); var bottomRightVertex = new Point(1, 0); var topRightVertex = new Point(1, 1); var vertexData = new Vertex[6]; vertexData[0].Position = new Vector3(-1.0f, 1.0f, 0.0f); vertexData[0].TextureCoordinate = new Vector2((float)bottomLeftVertex.X, (float)bottomLeftVertex.Y); vertexData[1].Position = new Vector3(-1.0f, -1.0f, 0.0f); vertexData[1].TextureCoordinate = new Vector2((float)topLeftVertex.X, (float)topLeftVertex.Y); vertexData[2].Position = new Vector3(1.0f, 1.0f, 0.0f); vertexData[2].TextureCoordinate = new Vector2((float)bottomRightVertex.X, (float)bottomRightVertex.Y); vertexData[3].Position = new Vector3(-1.0f, -1.0f, 0.0f); vertexData[3].TextureCoordinate = new Vector2((float)topLeftVertex.X, (float)topLeftVertex.Y); vertexData[4].Position = new Vector3(1.0f, -1.0f, 0.0f); vertexData[4].TextureCoordinate = new Vector2((float)topRightVertex.X, (float)topRightVertex.Y); vertexData[5].Position = new Vector3(1.0f, 1.0f, 0.0f); vertexData[5].TextureCoordinate = new Vector2((float)bottomRightVertex.X, (float)bottomRightVertex.Y); return vertexData; } [StructLayout(LayoutKind.Sequential)] struct Vertex { public Vector3 Position; public Vector2 TextureCoordinate; public static int SizeBytes { get { return Marshal.SizeOf(typeof(Vertex)); } } } } I am unable to figure out why alpha value is not getting set. Any kind of help is appreciated.
YashwinderSingh posted a topic in General and Gameplay Programming[color=#000000][font=Arial,]I have coded a video player using directshow and EVR filter. I am having a problem of slow playback in a machine with configuration:[/font][/color] [color=#000000][font=Arial,]Processor: AMD E-350 1.60 GHz Ram: 2 GB[/font][/color] [color=#000000][font=Arial,]The videos with resolution 1440*1080 or 1920*1080 and above plays fine in windows media player and Media player classic-HC but not the way I am rendering it.[/font][/color] PLayback is smooth in other machines with processors like intel dual core and above. [color=#000000][font=Arial,]I am just adding the evr filter to the graph i have made of the video and playing it. Is there any way to improve the playback quality of the video on this machine. any help is appreciated.[/font][/color] | https://www.gamedev.net/profile/198487-yashwindersingh/ | CC-MAIN-2017-30 | en | refinedweb |
LINQ Overview, part zero
LINQ Overview, part one (Extension Methods)
LINQ Overview, part two (Lambda Expressions)
Note: I realize it has been a really long time since I've posted anything. It is both exciting and humbling that I continue to receive such positive feedback on these articles. In fact, that is why I am trying to put in the effort and finish off this series before moving on to more recent topics. This nomad has been on some interesting journeys these past months, and I am really excited to share what I've learned.
In the world of computer programming data typing is one of the most hotly debated issues. There are two major schools of thought as far as type systems go.
The first is static typing in which the programmer is forced by the programming environment to write her code so that the type of data being manipulated is known at compile-time. The purpose of course being that if the environment knows the type of data being manipulated it can, in theory, better enforce the rules that govern how the data should be manipulated and produce an error if the rules are not followed. It would be a mistake to assume exactly when the environment knows the types involved, however. For example, the .NET CLR is a statically typed programming environment but it provides facilities for reflection and therefore sometimes type checking is done at run-time as opposed to compile-time. Take a look at the following code sample:
static void TypeChecking()
{
string s = "a string for all seasons";
double d = s;
object so = s;
d = (double)so;
}
If you look at line two, you'll see we are trying to assign a string value to a double variable. This trips up the compiler since string can not be implicitly cast to double. This is an example of compile-time type checking and demonstrates why it can be useful as in this case it would have saved us a few minutes of debugging time.
On line four we are assigning our string value to an object variable. Since anything can be stored in an object variable we are essentially masking the type of the value from the compiler. This means that on the subsequent line our explicit cast of so to double doesn't cause a compilation error. Basically, the compiler doesn't have enough information to determine if the cast will fail because it doesn't know for certain what data type you will be storing in so until runtime. Of course, since C# is a statically typed language the cast will generate an exception when it is executed. Don't minimize the damage that this can pose. What if a line like this was laying around in some production code and that method was never properly tested? You'd wind up with an exception in your production application and that's bad!
The second major school of thought in regards to typing is referred to as dynamic typing and as its name implies, dynamic type systems are a little bit more flexible about what they will accept at compile-time. Dynamic type systems, especially if you have a solid background in traditionally static typed environments, may be a little harder to grok at first, but the most important thing to understand when working with a dynamic type system is that it doesn't consider a variable as having a type; only values have types.
As with anything in modern programming languages, everything is shades of gray. For example even within the realm of static type systems there are those that are considered to be strongly typed and those that are considered to be weakly typed. The difference being that weakly typed languages allow for programs to be written where the exact type of a variable isn't known at compile-time.
The "why they matter" bit should be self evident. You simply can not write effective code in a high-level language like C#, Python, or even C without interacting with the type system and therefore you can't write effective code without understanding the type system. As a language, C# is currently tied to the CLR and therefore its syntax is designed to help you write statically typed code, more often than not code intended to provide compile-time type checks. However, as the language and developer community using it have evolved there has been a greater call to at least simulate some aspects of dynamic languages.
Introduced in C# 3.0 the var keyword can be used in place of the type in local variable definitions. It can't be used as a return type, the type of a parameter in a method or delegate definition, or in place of a property or field's type declaration. The var keyword relies on C#3.0's type inference feature. We first encountered type inference in part one where we saw that we didn't have to explicitly state our generic parameters as long as we made it obvious to the compiler what the types involved were. A simple example is as follows:
static void UseVar()
{
var list = new List<Book>();
var book = new Book();
book.Title = "The C Programming Language";
list.Add(book);
foreach (var b in list)
Console.WriteLine(b.Title);
}
In the above we are using var on three separate lines, but all for the same purpose. We've simply used the var keyword in place of the data type in our variable declarations and let the compiler handle figuring out what the correct type should be. The first two uses are pretty straightforward and will save you a lot of typing. The third use of var is also common, but be forewarned as it will only work well with collections that implement IEnumerable<T>. Collections that implement the non-generic IEnumerable interface do not provide the compiler with enough information to infer the type of b.
You have to be careful with var as even though the compiler can tell what type you meant when you wrote it, you or the other developers you work with might not be able to in three or four months. You also have to look out for situations like the following:
static void UnexpectedInference()
{
var d = 2.5;
float f = d;
}
As you can see you may have meant for number to be a float, but the compiler decided that you were better off with a double. Naturally, if you wrote the following the compiler would have more information as to your intent and act appropriately:
static void UnexpectedInference()
{
var d = 2.5f;
float f = d;
}
The var keyword is nice and all, but was it added to the language just to stave off carpal tunnel? No, they needed to provide the var keyword so that they could give you something more powerful, anonymous types.
To look at why var was introduced into the language, lets look at the following program.
class Program
{
static void Main(string[] args)
{
var books = new List<Book>() {
new Book() {
Title = "The Green Mile",
Genre = "Drama",
Author = new Author() {
Name = "Stephen King",
Age = 62,
Titles = 1000
}
},
new Book() {
Title = "Pandora Star",
Genre = "Science Fiction",
Author = new Author() {
Name = "Peter F. Hamilton",
Age = 49,
Titles = 200
}
}
};
var kings = from b in books
where b.Author.Name == "Stephen King"
select new { Author = b.Author.Name, BookTitle = b.Title };
foreach (var k in kings)
{
Console.WriteLine("{0} wrote {1}", k.Author, k.BookTitle);
}
Console.ReadLine();
}
}
public class Author
{
public string Name { get; set; }
public int Age { get; set; }
public int Titles { get; set; }
}
public class Book
{
public Author Author { get; set; }
public string Title { get; set; }
public string Genre { get; set; }
}
The above code is pretty straight forward so let's focus on the LINQ query :
var kings = from b in books
where b.Author.Name == "Stephen King"
select new { Author = b.Author.Name, BookTitle = b.Title };
Do you notice anything interesting? WOAH! We just used the new operator to instantiate an object without saying what type it was, in fact, you couldn't tell the compiler what type it was even if you wanted to. If you can't write out the name of the type, then how are you supposed to declare a variable of that type? Oh, right, we have the var keyword which uses type inference to figure out the correct data type to use!
So, how does C# 3.0 provide such a cool feature? It is supposed to be a statically typed language, right? Well, the answer to that isn't actually complicated in the least. Think about it. What do compilers do? They read in a program definition and then generate code in, typically, a lower level language like machine code or CLR byte code. We have already established that type inference is being used to determine the correct data type for our var variable above, but the obvious problem is that we didn't define the type it winds up using. After reading in our query the compiler can tell we are defining a class that has two properties, Author and BookTitle. Further, it knows we are assigning System.String values to both properties. Therefore, it can deduce the exact class definition required for our code to work in a statically type checked way. If you were to fire up reflector you'd be able to find the following class definition:
[DebuggerDisplay(@"\{ Author = {Author}, BookTitle = {BookTitle} }", Type="<Anonymous Type>"), CompilerGenerated]
internal sealed class <>f__AnonymousType0<<Author>j__TPar, <BookTitle>j__TPar>
{
// Fields
[DebuggerBrowsable(DebuggerBrowsableState.Never)]
private readonly <Author>j__TPar <Author>i__Field;
[DebuggerBrowsable(DebuggerBrowsableState.Never)]
private readonly <BookTitle>j__TPar <BookTitle>i__Field;
// Methods
[DebuggerHidden]
public <>f__AnonymousType0(<Author>j__TPar Author, <BookTitle>j__TPar BookTitle);
[DebuggerHidden]
public override bool Equals(object value);
[DebuggerHidden]
public override int GetHashCode();
[DebuggerHidden]
public override string ToString();
// Properties
public <Author>j__TPar Author { get; }
public <BookTitle>j__TPar BookTitle { get; }
}
First of all, the above looks pretty funky. In fact, if you copy and paste it into visual studio it isn't going to compile. The import thing to realize is that the class was built for you at compile-time not at runtime. Another interesting point is that the compiler is smart enough to reuse this class if your code calls for a second anonymous type with the same properties.
All in all, anonymous types work like any other class with the same constraint (i.e. internal sealed). See, I told you that this stuff isn't magic!
Anonymous types are cool and all, but due to the restrictions on their use I am sure you'll find, as I have, that they are most useful in LINQ queries. The way in which you would use an anonymous type in conjunction with LINQ is as follows:
As you can see above, we are using an anonymous type in the select part of the query to define what our result set looks like. In database terminology this is called a projection, and our example above is really no different than selecting a subset of columns from a table in a relational database.
At this point you may be saying, "so what? In the example you could have just selected a book object and gotten access to the author using normal dot notation". You'd be correct in that instance, however consider the LINQ query below:
var kings = from b in books
join a in authors on b.AuthorID equals a.ID
where a.Name == "Stephen King"
select new { Author = a.Name, BookTitle = b.Title };
In the above, we are essentially joining two collections of in memory objects on what amounts to a foreign key, i.e. they both know the ID of the author. Hopefully you can now see the utility of anonymous types. If we had simply selected the books, we would then need to do a subsequent query in order to find the corresponding author.
The only other place I see a lot of value in using anonymous types is to flatten out a set of related objects for the purpose of databinding or other UI rendering.
We have now examined the anonymous type feature added to C#3.0 as well as how to use the new var keyword. It is my guess that you'll find yourself using var quite frequently, but only if you do a lot of LINQ will you being using it for anonymous types. The next segment will focus on the actual architecture of LINQ, once you understand that there really isn't anything mysterious left. | http://geekswithblogs.net/dotnetnomad/archive/2009/10/29/135842.aspx | CC-MAIN-2017-30 | en | refinedweb |
Journal Journal: Verbiage: Android is a horrible platform to code for 5
Because i hate Java, by extension, i hate Android development. The language is insane as it is, but the verbosity that Android adds is ridiculous. Of course, the namespace is equally retarded. But these all make sense to Java coders, so, who cares? | https://slashdot.org/users2.pl?page=1&uid=981&view=usertag&fhfilter=%22user%3AChacham%22+%22tag%3Anotfud%22 | CC-MAIN-2017-30 | en | refinedweb |
Working with large data using datashader ¶
The various plotting backends supported by HoloViews (such as Matplotlib and Bokeh) each have limitations on the amount of data that is practical to work with, for a variety of reasons. For instance, Bokeh mirrors your data directly into an HTML page viewable in your browser, which can cause problems when data sizes approach the limited memory available for each web page in your browser.
Luckily, a visualization of even the largest dataset will be constrained by the resolution of your display device, and so one approach to handling such data is to pre-render or rasterize the data into a fixed-size array or image before sending it to the backend. The Datashader package provides a high-performance big-data rasterization pipeline that works seamlessly with HoloViews to support datasets that are orders of magnitude larger than those supported natively by the plotting backends.
import numpy as np import holoviews as hv import datashader as ds from holoviews.operation.datashader import aggregate, shade, datashade, dynspread from holoviews.operation import decimate hv.extension('bokeh') decimate.max_samples=1000 dynspread.max_px=20 dynspread.threshold=0.5 def random_walk(n, f=5000): """Random walk in a 2D space, smoothed with a filter of length f""" xs = np.convolve(np.random.normal(0, 0.1, size=n), np.ones(f)/f).cumsum() ys = np.convolve(np.random.normal(0, 0.1, size=n), np.ones(f)/f).cumsum() xs += 0.1*np.sin(0.1*np.array(range(n-1+f))) # add wobble on x axis xs += np.random.normal(0, 0.005, size=n-1+f) # add measurement noise ys += np.random.normal(0, 0.005, size=n-1+f) return np.column_stack([xs, ys]) def random_cov(): """Random covariance for use in generating 2D Gaussian distributions""" A = np.random.randn(2,2) return np.dot(A, A.T) def time_series(T = 1, N = 100, mu = 0.1, sigma = 0.1, S0 = 20): """Parameterized noisy time series""" dt = float(T)/N return S | http://holoviews.org/user_guide/Large_Data.html | CC-MAIN-2017-30 | en | refinedweb |
1 /*2 * $Id: Wrapper.java,v 1.4.2.6 2003/04/11 00:24:39 pietsch.flow;52 53 // FOP54 import org.apache.fop.fo.*;55 import org.apache.fop.apps.FOPException;56 57 /**58 * Implementation for fo:wrapper formatting object.59 * The wrapper object serves as60 * a property holder for it's children objects.61 *62 * Content: (#PCDATA|%inline;|%block;)*63 * Properties: id64 */65 public class Wrapper extends FObjMixed {66 67 public static class Maker extends FObj.Maker {68 public FObj make(FObj parent, PropertyList propertyList,69 String systemId, int line, int column)70 throws FOPException {71 return new Wrapper(parent, propertyList, systemId, line, column);72 }73 }74 75 public static FObj.Maker maker() {76 return new Wrapper.Maker();77 }78 79 public String getName() {80 return "fo:wrapper";81 }82 83 public Wrapper(FObj parent, PropertyList propertyList,84 String systemId, int line, int column)85 throws FOPException {86 super(parent, propertyList, systemId, line, column);87 // check that this occurs inside an fo:flow88 }89 90 }91
Java API By Example, From Geeks To Geeks. | Our Blog | Conditions of Use | About Us_ | | http://kickjava.com/src/org/apache/fop/fo/flow/Wrapper.java.htm | CC-MAIN-2017-30 | en | refinedweb |
SeamFramework.orgCommunity Documentation
WebSphere Application Server v7 is IBM's application server offering. This release is fully Java EE 5 certified.
WebSphere AS being a commercial product, we will not discuss the details of its installation. At best, we will instruct you to follow the directions provided by your particular installation type and license.
First, we will go over some basic considerations on how to run Seam applications under WebSphere AS v7. We will go over the details of these steps using the JEE5 booking example. We will also deploy the JPA (non-EJB3) example application.
All of the examples and information in this chapter are based on WebSphere AS v7. A trial version can be downloaded here : WebSphere Application Server V7
WebSphere v7.0.0.5 is the minimal version of WebSphere v7 to use with Seam. WAS v7.0.0.9 is highly recommended. Earlier versions of WebSphere have bugs in the EJB container that will cause various exceptions to occur at runtime.
EJBContext may only be looked up by or injected into an EJB
This is a bug in WebSphere v7.0.0.5. WebSphere does not conform to the EJB 3.0 specs as it does not allow to perform a lookup on "java:comp/EJBContext" in callback methods.
This problem is associated with APAR PK98746 at IBM and is corrected in v7.0.0.9.
NameNotFoundException: Name "comp/UserTransaction" not found in context "java:"
Another bug in WebSphere v7.0.0.5. This occurs when an HTTP session expires. Seam correctly catches the exception when necessary and performs the correct actions in these cases. The problem is that even if the exception is handled by Seam, WebSphere prints the traceback of the exception in SystemOut. Those messages are harmless and can safely be ignored.
This problem is associated with APAR PK97995 at IBM and is corrected in v7.0.0.9.
The following sections in this chapter assume that WebSphere is correctly installed and is functional, and a WebSphere "profile" has been successfully created.
This chapter explains how to compile, deploy and run some sample applications in WebSphere. These sample applications require
a database. WebSphere comes by default with a set of sample applications called "Default Application". This set of sample applications
use a Derby database running on the Derby instance installed within WebSphere. In order to keep this simple we'll use this Derby database created
for the "Default Applications". However, to run the sample application with the Derby database "as-is", a patched Hibernate
dialect must be used (The patch changes the default "auto" key generation strategy) as explained in Chapter 41, Seam on GlassFish application server.
If you want to use another database, it's just a matter of creating a connection pool in WebSphere pointing to this database,
declare the correct Hibernate dialect and set the correct JNDI name in
persistence.xml.
This step is mandatory in order to have Seam applications run with WebSphere v7. Two extra properties must be added to the Web Container. Please refer to the IBM WebSphere Information Center for further explanations on those properties.
To add the extra properties:
Servers/Server Types/WebSphere Application Serversin the left navigation menu
server1)
Web Container Settings/Web container
custom propertiesand add the following properties:
prependSlashToResource = true
com.ibm.ws.webcontainer.invokefilterscompatibility = true
In order to use component injection, Seam needs to know how to lookup for session beans bound to the JNDI name space. Seam provides two mechanisms to configure the way it will search for such resources:
jndi-patternswitch on the
<core:init>tag in
components.xml. The switch can use a special placeholder "
#{ejbName}" that resolves to the unqualified name of the EJB
@JndiNameannotation
Section 30.1.5, “Integrating Seam with your EJB container” gives detailed explanations on how those mechanisms work.
By default, WebSphere will bind session beans in
its local JNDI name space under a "short" binding name that adheres to the following pattern
ejblocal:<package.qualified.local.interface.name>.
For a detailed description on how WebSphere v7 organizes and binds EJBs in its JNDI name spaces, please refer to the WebSphere Information Center.
As explained before, Seam needs to lookup for session bean as they appear in JNDI. Basically, there are three strategies, in order of complexity:
@JndiNameannotation in the java source file,
jndi-patternattribute,
@JndiName("ejblocal:<package.qualified.local.interface.name>)annotation to each session bean that is a Seam component.
components.xml, add the following line:
<core:init
WEB-INF/classes/seam-jndi.propertiesin the web module with the following content:
com.ibm.websphere.naming.hostname.normalizer=com.ibm.ws.naming.util.DefaultHostnameNormalizer java.naming.factory.initial=com.ibm.websphere.naming.WsnInitialContextFactory com.ibm.websphere.naming.name.syntax=jndi com.ibm.websphere.naming.namespace.connection=lazy com.ibm.ws.naming.ldap.ldapinitctxfactory=com.sun.jndi.ldap.LdapCtxFactory com.ibm.websphere.naming.jndicache.cacheobject=populated com.ibm.websphere.naming.namespaceroot=defaultroot com.ibm.ws.naming.wsn.factory.initial=com.ibm.ws.naming.util.WsnInitCtxFactory com.ibm.websphere.naming.jndicache.maxcachelife=0 com.ibm.websphere.naming.jndicache.maxentrylife=0 com.ibm.websphere.naming.jndicache.cachename=providerURL java.naming.provider.url=corbaloc:rir:/NameServiceServerRoot java.naming.factory.url.pkgs=com.ibm.ws.runtime:com.ibm.ws.naming
web.xml, add the following lines:
<ejb-local-ref>
<ejb-ref-name>EjbSynchronizations</ejb-ref-name>
<ejb-ref-type>Session</ejb-ref-type>
<local-home></local-home>
<local>org.jboss.seam.transaction.LocalEjbSynchronizations</local>
</ejb-local-ref>
That's all folks! No need to update any file during the development, nor to define any EJB to EJB or web to EJB reference!
Compared to the other strategies, this strategy has the advantage to not have to manage any EJBs reference and also to not have to maintain extra files.
The only drawback is one extra line in the java source code with the
@JndiName annotation
To use this strategy:
META-INF/ibm-ejb-jar-bnd.xmlin the EJB module and add an entry for each session bean like this:
<?xml version="1.0" encoding="UTF-8"?>WebSphere will then bind the
<ejb-jar-bnd
xmlns=""
xmlns:xsi=""
xsi:
<session name="AuthenticatorAction" simple-
<session name="BookingListAction" simple-
</ejb-jar-bnd>
AuthenticatorActionEJB to the
ejblocal:AuthenticatorActionJNDI name
components.xml, add the following line:
<core:init
WEB-INF/classes/seam-jndi.propertiesas described in strategy 1
web.xml, add the following lines (Note the different
ejb-ref-namevalue):
<ejb-local-ref>
<ejb-ref-name>ejblocal:EjbSynchronizations</ejb-ref-name>
<ejb-ref-type>Session</ejb-ref-type>
<local-home></local-home>
<local>org.jboss.seam.transaction.LocalEjbSynchronizations</local>
</ejb-local-ref>
Compared to the first strategy, this strategy requires to maintain an extra file
(
META-INF/ibm-ejb-jar-ext.xml),
where a line must be added each time a new session bean is added to the application),
but still does not require to maintain EJB reference between beans.
components.xml, add the following line:
<core:init
This is the most tedious strategy as each session bean referenced by another session bean (i.e. "injected") as to be declared in
ejb-jar.xml file.
Also, each new session bean has to be added to the list of referenced bean in
web.xml
META-INF/ibm-ejb-jar-ext.xmlin the EJB module, and declare the timeout value for each bean:
<?xml version="1.0" encoding="UTF-8"?>
<ejb-jar-ext
xmlns=""
xmlns:xsi=""
xsi:
<session name="BookingListAction"><time-out</session>
<session name="ChangePasswordAction"><time-out</session>
</ejb-jar-ext>
The
time-out is expressed in seconds and must be higher than the Seam conversation expiration timeout
and a few minutes higher than the user's HTTP session timeout (The session expiration timeout can trigger a few minutes
after the number of minutes declared to expire the HTTP session).
The
jee5/bookingexample is based on the Hotel Booking example (which runs on JBoss AS).
Out of the box, it is designed to run on Glassfish, but with the following steps, it can be deployed on
WebSphere. It is located in the
$SEAM_DIST/examples/jee5/booking directory.
The example already has a breakout of configurations and build scripts for WebSphere. First thing, we are going to do is build and deploy this example. Then we'll go over some key changes that we needed.
The tailored configuration files for WebSphere use the second JNDI mapping strategy ("Override the default names generated by WebSphere")
as the goal was to not change any java code to add the
@JndiName annotation as in the first strategy.
Building it only requires running the correct ant command:
ant -f build-websphere7.xml
This will create container specific distribution and exploded archive directories with the
websphere7 label.
The steps below are for the WAS version stated above.The ports are the default values, if you changed them, you must substitute the values. your userid and/or your password if security is enabled for the console.
WebSphere enterprise applicationsmenu option under the
Applications --> Application Typeleft side menu.
At the top of the
Enterprise Applications table select
Install.
Below are installation wizard pages and what needs to done on each:
Preparing for the application installation
Browse to the
examples/jee5/booking/dist-websphere7/jboss-seam-jee5.ear
file using the file upload widget.
Select the
Next button.
Select the
Fast Path button.
Select the
Next button.
Select installation options
Select the "
Allow EJB reference targets to resolve automatically"
check boxes at the bottom of the page. This will let WebSphere use its simplified JNDI reference mapping.
Select the
Next button.
Map modules to servers
No changes needed here as we only have one server. Select the
Nextbutton.
Map virtual hosts for Web modules
No changes needed here as we only have one virtual host. Select the
Nextbutton.
Summary
No changes needed here. Select the
Finish button.
Installation
Now you will see WebSphere installing and deploying your application.
When done, select the
Save link and you will be returned to the
Enterprise Applications table.
To start the application, select the application in the list, then click on the
Start
button at the top of the table.
You can now access the application at
resources-websphere7directory.
META-INF/ejb-jar.xml
— Removed all the EJB references
META-INF/ibm-ejb-jar-bnd.xml
— This WebSphere specific file has been added as we use the second JNDI mapping strategy.
It defines, for each session bean, the name WebSphere will use to bind it in its JNDI name space
META-INF/ibm-ejb-jar-ext.xml
— This WebSphere specific file defines the timeout value for each stateful bean
META-INF/persistence.xml
— The main changes here are for the datasource JNDI path,
switching to the WebSphere transaction manager lookup class,
turning off the
hibernate.transaction.flush_before_completion toggle,
and forcing the Hibernate dialect to be
GlassfishDerbyDialect
as we are using the integrated Derby database
WEB-INF/components.xml
— the change here is
jndi-pattern
to use
ejblocal:#{ejbname} as using the second
JNDI matching strategy
WEB-INF/web.xml
— Remove all the
ejb-local ref except the one for
EjbSynchronizations bean.
Changed the ref fo this bean to
ejblocal:EjbSynchronizations
import.sql
— due to the customized hibernate Derby dialect, the
ID
column can not be populated by this file and was removed.
Also the build procedure has been changed to include the
log4j.jar file
and exclude the
concurrent.jar and
jboss-common-core.jar files.
This is the Hotel Booking example implemented in Seam POJOs and using Hibernate JPA with JPA transactions. It does not use EJB3.7
This will create container specific distribution and exploded archive directories with the
websphere7 label.
Deploying
jpa application is very similar to the
jee5/booking
example at Section 40.5.2, “Deploying the jee5/booking example”.
The main difference is, that this time, we will deploy a war file instead of an ear file,
and we'll have to manually specify the context root of the application.
Follow the same instructions as for the
jee5/booking sample. Select the
examples/jpa/dist-websphere7/jboss-seam-jpa.war file on the first page and on the
Map context roots for Web modules page (after the
Map virtual host for Web module),
enter the context root you want to use for your application in the
Context Root input field.
When started, you can now access the application at the<context root>.
resources-websphere7directory.
META-INF/persistence.xml
— The main changes here are for the datasource JNDI path,
switching to the WebSphere transaction manager look up class,
turning off the
hibernate.transaction.flush_before_completion toggle,
and forcing the Hibernate dialect to be
GlassfishDerbyDialect
how as using the integrated Derby database
import.sql
— due to the customized hibernate Derby dialect, the
ID
column can not be populated by this file and was removed.
Also the build procedure have been changed to include the
log4j.jar file
and exclude the
concurrent.jar and
jboss-common-core.jar files. | http://docs.jboss.org/seam/2.3.0.Beta1/html/websphere.html | CC-MAIN-2017-30 | en | refinedweb |
I have to develop new arduino projects each week for my young pupil, who is passionate about electronics and hardware. For that, I found our last project creative and interesting, and I bet some of you can use it as a geek experiment.
What we need here is graphite based pencils (or directly graphite, works better), arduino, resistors, a led, a metallic clip, and a white regular paper. How does it works?
First we should know how this works. In arduino, we can use several sensors as INPUTS, such as a button, a light sensor, an humidity sensor, etc. But we can also attach home-made INPUTS using conductor materials. Steel and metal are common conductors materials (you can try this experiment with a coin, too) and so is graphite.
For making this work in arduino, we are using a special library called CapacitiveSensor04. Once our library is added, we can start designing the circuit. This is an example with steel paper, works the same with graphite. Only draw something (very dense), attach a paper clip to the drawing (be careful, it should be a single-line drawing) and a cable to the paper pin, which is the one connected to the resistor + 4 and 2 pins.
And this is our code:
#include <CapacitiveSensor.h> CapacitiveSensor capSensor = CapacitiveSensor(4,2); int threshold = 1000; const int ledPin = 12; void setup() { Serial.begin(9600); pinMode(ledPin, OUTPUT); } void loop() { long sensorValue = capSensor.capacitiveSensor(30); Serial.println(sensorValue); if(sensorValue > threshold) { digitalWrite(ledPin, HIGH); } else { digitalWrite(ledPin, LOW); } delay(10); }
We might have to calibrate the threshold, in which case you will only have to open the Monitor and test. And... tadaah! interactive drawing hat lights a led. You can now do other things. Just experiment!
In case you are a tutor, teacher or parent, here's the content of my class in spanish ready for the students (answers, incomplete code for them and complete code with comments for the teachers and activities).
Posted on by:
Paula
Offensive security, into privacy and digital rights. I give speeches, write articles and founded a digital privacy awareness association called Interferencias in Spain. Japanese style tattooing.
Discussion
Amazing! Thanks for sharing this :D | https://practicaldev-herokuapp-com.global.ssl.fastly.net/terceranexus6/creating-an-interactive-drawing-with-arduino-2jnp | CC-MAIN-2020-40 | en | refinedweb |
nit
// header1.h
// depends on contents of header2.h for compilation
#include "header2.h"
// source.c
// depends on contents of header1.h for compilation
#include "header1.h"
Daniel Pfeffer wrote:I admit that my method could cause a file to be included multiple times.
speedbump99 wrote: circular references in which one include is depending on another include file. These errors can be hard to find especially with a large project.
C++
C
#include <stdbool.h> // C standard unit needed for bool and true/false
#include <stdint.h> // C standard unit needed for uint8_t, uint32_t, etc
#include <stdarg.h> // C standard unit needed for variadic type and functions
#include <string.h> // C standard unit needed for strlen
#include <windows.h> // Windows standard header needed for HWND on calls
void MyUnitFunction (HWND window);
General News Suggestion Question Bug Answer Joke Praise Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | https://www.codeproject.com/Messages/5591489/Re-Cplusplus-Typedefs.aspx | CC-MAIN-2020-40 | en | refinedweb |
- Type:
Bug
- Status: Open
- Priority:
P1: Critical
- Resolution: Unresolved
- Affects Version/s: 5.12.3, 5.13, 5.14, 5.15
- Fix Version/s: None
- Component/s: Quick: Core Declarative QML
- Labels:None
- Environment:Linux, Debian 9.11 with Qt 5.11.3 gcc64 (also seen on windows/msys)
- Platform/s:
The formatting of text in a QML Text object that has `textFormat: Text.RichText` is wrong for lines that follow a unordered list <ul> with a <br> that is indented itself. See the following example:
import QtQuick 2.12 import QtQuick.Window 2.12 Window { visible: true width: 640 height: 480 title: qsTr("Hello World") Text { textFormat: Text.RichText text: "<ul><li>one</li><li>two</li></ul> <br> Hello world indented <br> Hello world also indented <ul><li>one</li><li>two</li></ul> <br> Hello world not indented" } }
Observed is that the first two "hello world ..." lines are indented on the same level as the closed <ul>.
I would not expect any of the "hello world ..." lines to be indented since the HTML standard prescribes that whitespace in the source is displayed as a single whitespace. | https://bugreports.qt.io/browse/QTBUG-81662 | CC-MAIN-2020-40 | en | refinedweb |
Scroll down to the script below, click on any sentence (including terminal blocks!) to jump to that spot in the video!
gstreamer0.10-ffmpeg
gstreamer0.10-plugins-goodpackages.
Everything should be working... but nope! We've got this
jQuery is not defined
error... but it's not from our code! It's coming from inside of
autocomplete.jquery.js - that third party package we installed!
This is the second jQuery plugin that we've used. The first was bootstrap... and that worked brilliantly! Look inside
app.js:
We imported bootstrap and, yea... that was it. Bootstrap is a well-written jQuery plugin, which means that inside, it imports
jquery - just like we do - and then modifies it.
But this Algolia
autocomplete.js plugin? Yea, it's not so well-written. Instead of detecting that we're inside Webpack and importing
jQuery, it just says...
jQuery! And expects it to be available as a global variable. This is why jQuery plugins are a special monster: they've been around for so long, that they don't always play nicely in the modern way of doing things.
So... are we stuck? I mean, this 3rd-party package is literally written incorrectly! What can we do?
Well... it's Webpack to the rescue! Open up
webpack.config.js and find some commented-out code:
autoProvidejQuery(). Uncomment that:
Then, go restart Encore:
yarn watch
When it finishes, move back over and... refresh! No errors! And if I start typing in the autocomplete box... it works! What black magic is this?!
The
.autoProvidejQuery() method... yea... it sorta is black magic. Webpack is already scanning all of our code. When you enable this feature, each time it finds a
jQuery or
$ variable- anywhere in any of the code that we use - that is uninitialized, it replaces it with
require('jquery'). It basically rewrites the broken code to be correct.
While we're here, there's an organizational improvement I want to make. Look inside
admin_article_form.js. Hmm, we include both the JavaScript file and the CSS file for Algolia autocomplete:
But if you think about it, this CSS file is meant to support the
algolia-autocomplete.js file. To say it differently: the CSS file is a dependency of
algolia-autocomplete.js: if that file was ever used without this CSS file, things wouldn't look right.
Take out the
import and move it into
algolia-autocomplete.js. Make sure to update the path:
That's nice! If we want to use this autocomplete logic somewhere else, we only need to import the JavaScript file: it takes care of importing everything else. The result is the same, but cleaner.
Well, this file still isn't as clean as I want it. We're importing the
algolia-autocomplete.js file... but it's not really a "module". It doesn't export some reusable function or class: it just runs code. I really want to start thinking of all of our JavaScript files - except for the entry files themselves - as reusable components.
Check it out: instead of just "doing" stuff, let's
export a new function that can initialize the autocomplete logic. Replace
$(document).ready() with
export default function() with three arguments: the jQuery
$elements that we want to attach the autocomplete behavior to, the
dataKey, which will be used down here as a way of a defining where to get the data from on the Ajax response, and
displayKey - another config option used at the bottom, which is the key on each result that should be displayed in the box:
Basically, we're taking out all the specific parts and replacing them with generic variables.
Now we can say
$elements.each():
And for
dataKey, we can put a bit of logic:
if (dataKey), then
data = data[dataKey], and finally just
cb(data):
Some of this is specific to exactly how the Autocomplete library itself works - we set that up in an earlier tutorial. Down at the bottom, set
displayKey to
displayKey:
Beautiful! Instead of doing something, this file returns a reusable function. That should feel familiar if you come from the Symfony world: we organize code by creating files that contain reusable classes, instead of files that contain procedural code that instantly does something.
Ok! Back in
admin_article_form.js, let's
import autocomplete from './components/algolia-autocomplete':
Oooo. And then,
const $autoComplete = $('.js-user-autocomplete') - to find the same element we were using before:
Then, if not
$autoComplete.is(':disabled'), call
autocomplete() - because that's the variable we imported - and pass it
$autoComplete,
users for
dataKey and
displayKey:
I love it! By the way, the reason I'm added this
:disabled logic is that we originally set up our forms so that the
author field that we're adding this autocomplete to is disabled on the edit form. So, there's no reason to try to add the autocomplete stuff in that case.
Ok, refresh... then type
admi... it works! Double-check that we didn't break the edit page: go back to
/admin/article, edit any article and, yea! Looks good! The field is disabled, but nothing is breaking.
Hey! We have no more JavaScript files in our
public/ directory. Woo! But, we do still have 2 CSS files. Let's handle those } } | https://symfonycasts.com/screencast/webpack-encore/autoprovide-jquery-modules | CC-MAIN-2020-40 | en | refinedweb |
organize data according to scalar values (used to accelerate contouring operations) More...
#include <vtkSimpleScalarTree.h>
organize data according to scalar values (used to accelerate contouring operations)
vtkSimpleScalarTree creates a pointerless binary tree that helps search for cells that lie within a particular scalar range. This object is used to accelerate some contouring (and other scalar-based techniques).
The tree consists of an array of (min,max) scalar range pairs per node in the tree. The (min,max) range is determined from looking at the range of the children of the tree node. If the node is a leaf, then the range is determined by scanning the range of scalar data in n cells in the dataset. The n cells are determined by arbitrary selecting cell ids from id(i) to id(i+n), and where n is specified using the BranchingFactor ivar. Note that leaf node i=0 contains the scalar range computed from cell ids (0,n-1); leaf node i=1 contains the range from cell ids (n,2n-1); and so on. The implication is that there are no direct lists of cell ids per leaf node, instead the cell ids are implicitly known. Despite the arbitrary grouping of cells, in practice this scalar tree actually performs quite well due to spatial/data coherence.
This class has an API that supports both serial and parallel operation. The parallel API enables the using class to grab arrays (or batches) of cells that potentially intersect the isocontour. These batches can then be processed in separate threads.
Definition at line 56 of file vtkSimpleScalarTree.h.
Standard type related macros and PrintSelf() method.
Definition at line 69 of file vtkSimpleScalarTree.h.
Instantiate scalar tree with maximum level of 20 and branching factor of three.
Return 1 if this class is the same type of (or a subclass of) the named class.
Returns 0 otherwise. This method works in combination with vtkTypeMacro found in vtkSetGet.h.
Reimplemented from vtkScalarTree.
Reimplemented from vtkScalarTree.
Methods invoked by print to print information about the object including superclasses.
Typically not called by the user (use Print() instead) but used in the hierarchical print process to combine the output of several classes.
Reimplemented from vtkScalarTree.
This method is used to copy data members when cloning an instance of the class.
It does not copy heavy data.
Reimplemented from vtkScalarTree.
Set the branching factor for the tree.
This is the number of children per tree node. Smaller values (minimum is 2) mean deeper trees and more memory overhead. Larger values mean shallower trees, less memory usage, but worse performance.
Get the level of the scalar tree.
This value may change each time the scalar tree is built and the branching factor changes.
Set the maximum allowable level for the tree.
Construct the scalar tree from the dataset provided.
Checks build times and modified time from input and reconstructs the tree if necessary.
Implements vtkScalarTree.
Initialize locator.
Frees memory and resets object as appropriate.
Implements vtkScalarTree.
Begin to traverse the cells based on a scalar value.
Returned cells will likely have scalar values that span the scalar value specified.
Implements vtkScalarTree.
Return the next cell that may contain scalar value specified to initialize traversal.
The value nullptr is returned if the list is exhausted. Make sure that InitTraversal() has been invoked first or you'll get erratic behavior.
Implements vtkScalarTree.
Get the number of cell batches available for processing as a function of the specified scalar value.
Each batch contains a list of candidate cells that may contain the specified isocontour value.
Implements vtkScalarTree.
Return the array of cell ids in the specified batch.
The method also returns the number of cell ids in the array. Make sure to call GetNumberOfCellBatches() beforehand.
Implements vtkScalarTree.
Definition at line 153 of file vtkSimpleScalarTree.h.
Definition at line 154 of file vtkSimpleScalarTree.h.
Definition at line 155 of file vtkSimpleScalarTree.h.
Definition at line 156 of file vtkSimpleScalarTree.h.
Definition at line 157 of file vtkSimpleScalarTree.h.
Definition at line 158 of file vtkSimpleScalarTree.h. | https://vtk.org/doc/nightly/html/classvtkSimpleScalarTree.html | CC-MAIN-2020-40 | en | refinedweb |
Why you are non programmer and you want to learn any programming language, than python is best choice for you.
If you want to learn new language, you will find python interesting.
Why you should learn python | Benefits of Python
1. Python is easy to learn :
Python programming language is easy to learn. If you are coming from java background or c or c++ back ground, you must write few codes before printing hello world.
in python just write.
print("Hello World")
It will print hello world. You don’t need to write classes, just like Java does. You don’t need to include anything. It is as simple as that.
You don’t need to worry about curly braces or semi colon’s, Python does not use curly braces( expect in dictionary).
When you write function, you don’t need to use curly braces,like other programming languages.
def hello_function(): print("Hello from a function") hello_function()
In python, you have you worry about indentation, all the code must be indented. Other wise it will throw an error.
2.) Portable and Extensible :
Python code is portable, you can use python code in any operating system without any error. If you have written your code in macOS, you can run that code in windows OS or in Linux. You don’t have to worry about anything.
Python allows you to integrate with Java or .NET. It also invokes C or C++ libraries. How cool is this.
This cool feature excites other developer to learn python more.
3.) Python is used in Artificial Intelligence Field:
Artificial Intelligence is the future of this man kind. Its the future of ours. Python is the main language used in Artificial Intelligence.
Fields like Machine Learning and Deep Learning are most popular now a days. Everyone uses python for this kind of technology.
Tensorflow, pytorch, Scikit learn and many more libraries available for Deep Learning and machine learning.
4.) Python pays well :
Python jobs are high paying jobs across the globe. Fields like data science and deep learning, pays huge amount of money to developers.
In US, Python developers are the 2nd highest salaried people, with average of 103,000 $ per year. This is hell lot of money.
5.) Big Companies uses python :
Companies like Google, Netflix, Facebook, Instagram, Mozilla, Dropbox etc. they use python and they pay a lot of money.
Most of them use Python for machine learning and Deep Learning. Netflix company uses recommendation system, to recommend you movies. more than 75% of movies/web series you watch on Netflix is recommended by Python.
6.) Python is used in Web Development :
Django is an awesome framework for web development. If you want to make web application quickly than Django is for you.
Django is based on Model, View, Template architecture.
It’s primary goal is to ease the creation of complex, database-driven websites.
The framework emphasizes reusability and “pluggability” of components, less code, low coupling, rapid development, and the principle of don’t repeat yourself.
It gives default admin panel.
7.) Python has amazing Ecosystem:
The Python Package Index (PyPI) hosts thousands of third-party modules for Python. Both Python’s standard library and the community-contributed modules allow for endless possibilities.
Python developer keeps updating python packages all the time. Even Big companies like Google. Google made tensorflow which is maintained by them.
8.) Python is used in everywhere :
Python is used everywhere. You can make websites, softwares, GUI applications, android applications, games and many more.
1.) Data Science
2.) Scientific Computing and Mathematical Computing
3.) Finance and Trading
4.) Web Development
5.) Gaming
6.) GUI application
7.) Security and Penetration testing
8.) Scripting
9.) GIS software
10.) Micro controllers.
Conclusion : Why you should learn python
This is my list of why you should learn python. I am also a python developer. I hope you liked my list. if you find any error, don’t forget to mail me.
Thank you for reading. | http://www.geekyshadow.com/2020/07/07/top-10-reasons-learn-python/ | CC-MAIN-2020-40 | en | refinedweb |
Lutron Homeworks Series 4 and 8 interface over Ethernet
Project description
RFK101 Package
Package to connect to Lutron Homeworks Series-4 and Series-8 systems. The controller is connected by an RS232 port to an Ethernet adaptor (NPort).
Example:
from time import sleep from pyhomeworks import Homeworks def callback(msg,data): print(msg,data) hw = Homeworks( 'host.test.com', 4008, callback ) # Sleep for 10 seconds waiting for a callback sleep(10.) # Close the interface hw.close()
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/pyhomeworks/ | CC-MAIN-2020-40 | en | refinedweb |
[
]
stack commented on HBASE-14614:
-------------------------------
Pushed first cut at core assignment made from patches pulled from [~mbertozzi]'s repo (as
per his guidance).
Here's the change log:
{code}
HBASE-14614 Procedure v2 - Core Assignment Manager (Matteo Bertozzi)
Below are commits from Matteo's repo adding in core AM.
Adds running of remote procedure. Adds batching of remote calls.
Adds support for assign/unassign in procedures. Adds version info
reporting in rpc. Adds start of an AMv2.
First is from
Also adding in RS version info from
And remote dispath
A hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/RemoteProcedureDispatcher.java
Dispatch remote procedures every 150ms or 32 items -- which ever
happens first (configurable). Runs a timeout thread.
Carries notion of a remote procedure and of a buffer full of these.
M hbase-protocol-shaded/src/main/protobuf/Admin.proto b/hbase-protocol-shaded/src/main/protobuf/Admin.proto
Add execute procedures call.
M hbase-protocol-shaded/src/main/protobuf/MasterProcedure.proto
Adds assign and unassign support.
M hbase-server/src/main/java/org/apache/hadoop/hbase/client/VersionInfoUtil.java
Adds getting RS version out of RPC
Examples: (1.3.4 is 0x0103004, 2.1.0 is 0x0201000)
M hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
Add start/stop of remote precedure engine. Add reference to AM2.
M hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java
Extract version number of the server making rpc.
A hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/AssignProcedure.java
Add new assign procedure.
There can only be one RegionTransitionProcedure per region running at the time,
since each procedure takes a lock on the region.
A hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/AssignmentManager.java
A procedure-based AM (AMv2).
TODO
- handle region migration
- handle meta assignment first
- handle sys table assignment first (e.g. acl, namespace)
- handle table priorities
A hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/MoveRegionProcedure.java
Adds new move region procedure.
A hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/RegionStateStore.java
Store region state (in hbase:meta by default).
A hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/RegionStates.java
In-memory state of all regions.
A hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/RegionTransitionProcedure.java
Base RIT procedure.
A hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/UnassignProcedure.java
Unassign procedure.
A hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/RSProcedureDispatcher.java
Run region assignement in a manner that pays attention to target
server version.
{code}
>1.4#6332) | http://mail-archives.apache.org/mod_mbox/hbase-issues/201701.mbox/%3CJIRA.12905249.1444927973000.622735.1483320118641@Atlassian.JIRA%3E | CC-MAIN-2018-09 | en | refinedweb |
Dan Morrill, Google Developer Relations Team
Updated January 2009
It is a sad truth that JavaScript applications are easily left vulnerable to several types of security exploits, if developers are unwary. Because the GWT produces JavaScript code, we GWT developers are no less vulnerable to JavaScript attacks than anyone else. However, because the goal of GWT is to allow developers to focus on their users' needs instead of JavaScript and browser quirks, it's easy to let our guards down. To make sure that GWT developers have a strong appreciation of the risks, we've put together this article.
GWT's mission is to provide developers with the tools they need to build AJAX apps that make the web a better place for end users. However, the apps we build have to be secure as well as functional, or else our community isn't doing a very good job at our mission.
This article is a primer on JavaScript attacks, intended for GWT developers. The first portion describes the major classes of attacks against JavaScript in general terms that are applicable to any AJAX framework. After that background information on the attacks, the second portion describes how to secure your GWT applications against them.
These problems, like so many others on the Internet, stem from malicious programmers. There are people out there who spend a huge percentage of their lives thinking of creative ways to steal your data. Vendors of web browsers do their part to stop those people, and one way they accomplish it is with the Same-Origin Policy.
The Same-Origin Policy (SOP) says that code running in a page that was loaded from Site A can't access data or network resources belonging to any other site, or even any other page (unless that other page was also loaded from Site A.) The goal is to prevent malicious hackers from injecting evil code into Site A that gathers up some of your private data and sends it to their evil Site B. This is, of course, the well-known restriction that prevents your AJAX code from making an XMLHTTPRequest call to a URL that isn't on the same site as the current page. Developers familiar with Java Applets will recognize this as a very similar security policy.
There is, however, a way around the Same-Origin Policy, and it all starts with trust. A web page owns its own data, of course, and is free to submit that data back to the web site it came from. JavaScript code that's already running is trusted to not be evil, and to know what it's doing. If code is already running, it's too late to stop it from doing anything evil anyway, so you might as well trust it.
One thing that JavaScript code is trusted to do is load more content. For example, you might build a basic image gallery application by writing some JavaScript code that inserts and deletes <img> tags into the current page. When you insert an <img> tag, the browser immediately loads the image as if it had been present in the original page; if you delete (or hide) an <img> tag, the browser removes it from the display.
Essentially, the SOP lets JavaScript code do anything that the original HTML page could have done -- it just prevents that JavaScript from sending data to a different server, or from reading or writing data belonging to a different server.
The text above said, "prevents JavaScript from sending data to a different server." Unfortunately, that's not strictly true. In fact it is possible to send data to a different server, although it might be more accurate to say "leak."
JavaScript is free to add new resources -- such as <img> tags -- to the current page. You probably know that you can cause an image hosted on foo.com to appear inline in a page served up by bar.com. Indeed, some people get upset if you do this to their images, since it uses their bandwidth to serve an image to your web visitor. But, it's a feature of HTML, and since HTML can do it, so can JavaScript.
Normally you would view this as a read-only operation: the browser requests an image, and the server sends the data. The browser didn't upload anything, so no data can be lost, right? Almost, but not quite. The browser did upload something: namely, the URL of the image. Images use standard URLs, and any URL can have query parameters encoded in it. A legitimate use case for this might be a page hit counter image, where a CGI on the server selects an appropriate image based on a query parameter and streams the data to the user in response. Here is a reasonable (though hypothetical) URL that could return a hit-count image showing the number '42':
In the static HTML world, this is perfectly reasonable. After all, the server is not going to send the client to a web site that will leak the server's or user's data -- at least, not on purpose. Because this technique is legal in HTML, it's also legal in JavaScript, but there is an unintended consequence. If some evil JavaScript code gets injected into a good web page, it can construct <img> tags and add them to the page.
It is then free to construct a URL to any hostile domain, stick it in an <img> tag, and make the request. It's not hard to imagine a scenario where the evil code steals some useful information and encodes it in the <img> URL; an example might be a tag such as:
<img src=""/>
If
private_user_data is a password, credit card number, or something similar, there'd be a major problem. If the evil code sets the size of the image to 1 pixel by 1 pixel, it's very unlikely the user will even notice it.
The type of vulnerability just described is an example of a class of attacks called "Cross-Site Scripting" (abbreviated as "XSS"). These attacks involve browser script code that transmits data (or does even worse things) across sites. These attacks are not limited to <img> tags, either; they can be used in most places the browser lets script code access URLs. Here are some more examples of XSS attacks:
Clearly, if evil code gets into your page, it can do some nasty stuff. By the way, don't take my examples above as a complete list; there are far too many variants of this trick to describe here.
Throughout all this there's a really big assumption, though: namely, that evil JavaScript code could get itself into a good page in the first place. This sounds like it should be hard to do; after all, servers aren't going to intentionally include evil code in the HTML data they send to web browsers. Unfortunately, it turns out to be quite easy to do if the server (and sometimes even client) programmers are not constantly vigilant. And as always, evil people are spending huge chunks of their lives thinking up ways to do this.
The list of ways that evil code can get into an otherwise good page is endless. Usually they all boil down to unwary code that parrots user input back to the user. For instance, this Python CGI code is vulnerable:
import cgi f = cgi.FieldStorage() name = f.getvalue('name') or 'there' s = '<html><body><div>Hello, ' + name + '!</div></body></html>' print 'Content-Type: text/html' print 'Content-Length: %s' % (len(s),) print print s
The code is supposed to print a simple greeting, based on a form input. For instance, a URL like this one would print "Hello, Dan!":
However, because the CGI doesn't inspect the value of the "name" variable, an attacker can insert script code in there.
Here is some JavaScript that pops up an alert window:
<script>alert('Hi');</script>
That script code can be encoded into a URL such as this:
That URL, when run against the CGI above, inserts the <script> tag directly into the <div> block in the generated HTML. When the user loads the CGI page, it still says "Hello, Dan!" but it also pops up a JavaScript alert window.
It's not hard to imagine an attacker putting something worse than a mere JavaScript alert in that URL. It's also probably not hard to imagine how easy it is for your real-world, more complex server-side code to accidentally contain such vulnerabilities. Perhaps the scariest thing of all is that an evil URL like the one above can exploit your servers entirely without your involvement.
The solution is usually simple: you just have to make sure that you escape or strip the content any time you write user input back into a new page. Like many things though, that's easier said than done, and requires constant vigilance.
It would be nice if we could wrap up this article at this point. Unfortunately, we can't. You see, there's a whole other class of attack that we haven't covered yet.
You can think of this one almost as XSS in reverse. In this scenario, the attacker lures one of your users to their own site, and uses their browser to attack your server. The key to this attack is insecure server-side session management.
Probably the most common way that web sites manage sessions is via browser cookies. Typically the server will present a login page to the user, who enters credentials like a user name and password and submits the page. The server checks the credentials and if they are correct, sets a browser session cookie. Each new request from the browser comes with that cookie. Since the server knows that no other web site could have set that cookie (which is true due to the browsers' Same-Origin Policy,) the server knows the user has previously authenticated.
The problem with this approach is that session cookies don't expire when the user leaves the site (they expire either when the browser closes or after some period of time). Since the browsers will include cookies with any request to your server regardless of context, if your users are logged in, it's possible for other sites to trigger an action on your server. This is frequently referred to as "Cross-Site Request Forging" or XSRF (or sometimes CSRF).
The sites most vulnerable to XSRF attacks, perhaps ironically, are those that have already embraced the service-oriented model. Traditional non-AJAX web applications are HTML-heavy and require multi-page UI operations by their very nature. The Same-Origin Policy prevents an XSRF attacker from reading the results of its request, making it impossible for an XSRF attacker to navigate a multi-page process. The simple technique of requiring the user to click a confirmation button -- when properly implemented -- is enough to foil an XSRF attack.
Unfortunately, eliminating those sorts of extra steps is one of the key goals of the AJAX programming model. AJAX lets an application's UI logic run in the browser, which in turn lets communications with the server become narrowly defined operations. For instance, you might develop corporate HR application where the server exposes a URL that lets browser clients email a user's list of employee data to someone else. Such services are operation-oriented, meaning that a single HTTP request is all it takes to do something.
Since a single request triggers the operation, the XSRF attacker doesn't need to see the response from an XMLHTTPRequest-style service. An AJAX-based HR site that exposes "Email Employee Data" as such a service could be exploited via an XSRF attack that carefully constructed a URL that emails the employee data to an attacker. As you can see, AJAX applications are a lot more vulnerable to an XSRF attack than a traditional web site, because the attacking page doesn't need to navigate a multi-page sequence after all.
So far we've seen the one-two punch from XSS and XSRF. Sadly, there's still more. These days, JSON (JavaScript Object Notation) is the new hotness -- and indeed, it's very hot. It's a clever, even elegant, technique. It also performs well, since it uses low-level (meaning: fast) browser support to handle parsing. It's also easy to program to, since the result is a JavaScript object, meaning you get object serialization almost for free. Unfortunately, with this powerful technique comes very substantial risks to your code; if you choose to use JSON with your GWT application, it's important to understand those risks.
At this point, you'll need to understand JSON; check out the json.org site if you aren't familiar with it yet. A cousin of JSON is "JSON with Padding" or JSONP, so you'll also want to be familiar with that. Here's the earliest discussion of JSONP that we could find: Remote JSON - JSONP.
As bad as XSS and XSRF are, JSON gives them room to breathe, so to speak, which makes them even more dangerous. The best way to explain this is just to describe how JSON is used. There are three forms, and each is vulnerable to varying degrees:
[ 'foo', 'bar' ]
{ 'data': ['foo', 'bar'] }
var result = { 'data': ['foo', 'bar'] };
handleResult({'data': ['foo', 'bar']});
The last two examples are most useful when returned from a server as the response to a <script> tag inclusion. This could use a little explanation. Earlier text described how JavaScript is permitted to dynamically add <img> tags pointing to images on remote sites. The same is true of <script> tags: JavaScript code can dynamically insert new <script> tags that cause more JavaScript code to load.
This makes dynamic <script> insertion a very useful technique, especially for mashups. Mashups frequently need to fetch data from different sites, but the Same-Origin Policy prevents them from doing so directly with an XMLHTTPRequest call. However, currently-running JavaScript code is trusted to load new JavaScript code from different sites -- and who says that code can't actually be data?
This concept might seem suspicious at first since it seems like a violation of the Same-Origin restriction, but it really isn't. Code is either trusted or it's not. Loading more code is more dangerous than loading data, so since your current code is already trusted to load more code, why should it not be trusted to load data as well? Meanwhile, <script> tags can only be inserted by trusted code in the first place, and the entire meaning of trust is that... you trust it to know what it's doing. It's true that XSS can abuse trust, but ultimately XSS can only originate from buggy server code. Same-Origin is based on trusting the server -- bugs and all.
So what does this mean? How is writing a server-side service that exposes data via these methods vulnerable? Well, other people have explained this a lot better than we can cover it here. Here are some good treatments:
Go ahead and read those -- and be sure to follow the links! Once you've digested it all, you'll probably see that you should tread carefully with JSON -- whether you're using GWT or another tool.
But this is an article for GWT developers, right? So how are GWT developers affected by these things? The answer is that we are no less vulnerable than anybody else, and so we have to be just as careful. The sections below describe how each threat impacts GWT in detail.
Also see SafeHtml – Provides coding guidelines with examples showing how to protect your application from XSS vulnerabilities due to untrusted data
XSS can be avoided if you rigorously follow good JavaScript programming practices. Since GWT helps you follow good JavaScript practices in general, it can help you with XSS. However, GWT developers are not immune, and there simply is no magic bullet.
Currently, we believe that GWT isolates your exposure to XSS attacks to these vectors:
innerHTMLon GWT Widget objects
document.write, etc.)
Don't take our word for it, though! Nobody's perfect, so it's important to always keep security on your mind. Don't wait until your security audit finds a hole, think about it constantly as you code.
Read on for more detail on the four vectors above.
Many developers use GWT along with other JavaScript solutions. For instance, your application might be using a mashup with code from several sites, or you might be using a third-party JavaScript-only library with GWT. In these cases, your application could be vulnerable due to those non-GWT libraries, even if the GWT portion of your application is secure.
If you are mixing other JavaScript code with GWT in your application, it's important that you review all the pieces to be sure your entire application is secure.
It's a common technique to fill out the bodies of tables, DIVs, frames, and similar UI elements with some static HTML content. This is most easily accomplished by assigning to the innerHTML attribute on a JavaScript object. However, this can be risky since it allows evil content to get inserted directly into a page.
Here's an example. Consider this basic JavaScript page:
<html> <head> <script language="JavaScript"> function fillMyDiv(newContent) { document.getElementById('mydiv').innerHTML = newContent; } </script> </head> <body> <p>Some text before mydiv.</p> <div id="mydiv"></div> <p>Some text after mydiv.</p> </body> </html>
The page contains a placeholder <div> named 'mydiv', and a JavaScript function that simply sets innerHTML on that div. The idea is that you would call that function from other code on your page whenever you wanted to update the content being displayed. However, suppose an attacker contrives to get a user to pass in this HTML as the 'newContent' variable:
<div onmousemove="alert('Hi!');">Some text</div>
Whenever the user mouses over 'mydiv', an alert will appear. If that's not frightening enough, there are other techniques -- only slightly more complicated -- that can execute code immediately without even needing to wait for user input. This is why setting innerHTML can be dangerous; you've got to be sure that the strings you use are trusted.
It's also important to realize that a string is not necessarily trusted just because it comes from your server! Suppose your application contains a report, which has "edit" and "view" modes in your user interface. For performance reasons, you might generate the custom-printed report in plain-old HTML on your server. Your GWT application would display it by using a
RequestCallback to fetch the HTML and assign the result to a table cell's innerHTML property. You might assume that that string is trusted since your server generated it, but that could be a bad assumption. If the user is able to enter arbitrary input in "edit" mode, an attacker could use any of a variety of attacks to get the user to store some unsafe HTML in a record. When the user views the record again, that record's HTML would be evil.
Unless you do an extremely thorough analysis of both the client and server, you can't assume a string from your server is safe. To be truly safe, you may want to always assume that strings destined for innerHTML or eval are unsafe, but at the very least you've got to Know Your Code.
This is a very similar scenario to setting innerHTML, although with arguably worse implications. Suppose that you have the same example as the one just described, except that instead of returning HTML content, the server sends the report data to the browser as a JSON string. You would normally pass that string to GWT's JSONParser class. For performance reasons, though, that string calls eval(). It's important to be sure that the code you are passing doesn't contain evil code.
An attacker could again use one of several attacks to cause the user to save carefully-constructed JavaScript code into one of your data records. That code could contain evil side effects that take effect immediately when the JSON object is parsed. This is just as severe as innerHTML but is actually easier to do since the attacker doesn't need to play tricks with HTML in the evil string -- he can just use plain JavaScript code.
As with innerHTML, it's not always correct to assume that a JSON string is safe simply because it came from your server. At the very least, it is important to think carefully before you use any JSON service, whether it's yours or a third party's.
GWT has little control over or insight into JSNI code you write. If you write JSNI code, it's important to be especially cautious. Calling the eval function or setting innerHTML should set off red flags immediately, but you should always think carefully as you write code.
For instance, if you're writing a custom Widget that includes a hyperlink, you might include a
setURL(String) method. If you do, though, you should consider adding a test to make sure that the new URL data doesn't actually contain a
"javascript:" URL. Without this test, your setURL method could create a new vector for XSS code to get into your application. This is just one possible example; always think carefully about unintended effects when you use JSNI.
As a GWT user, you can help reduce XSS vulnerabilities in your code by following these guidelines:
The GWT team is considering adding support for standard string inspection to the GWT library. You would use this to validate any untrusted string to determine if it contains unsafe data (such as a <script> tag.) The idea is that you'd use this method to help you inspect any strings you need to pass to innerHTML or eval. However, this functionality is only being considered right now, so for the time being it's still important to do your own inspections. Be sure to follow the guidelines above -- and be sure to be paranoid!
Also see GWT RPC XSRF protection – Explains how to protect GWT RPCs against XSRF attacks using RPC tokens introduced in GWT 2.3.
You can take steps to make your GWT application less vulnerable to XSRF attacks. The same techniques that you might use to protect other AJAX code will also work to protect your GWT application.
A common countermeasure for XSRF attacks involves duplicating a session cookie. Earlier, we discussed how the usual cookie-based session management model leaves your application open to XSRF attacks. An easy way to prevent this is to use JavaScript to copy the cookie value and submit it as form data along with your XMLHTTPRequest call. Since the browser's Same-Origin Policy will prevent a third-party site from accessing the cookies from your site, only your site can retrieve your cookie. By submitting the value of the cookie along with the request, your server can compare the actual cookie value with the copy you included; if they don't match, your server knows that the request is an XSRF attempt. Simply put, this technique is a way of requiring the code that made the request to prove that it has access to the session cookie.
If you are using the RequestBuilder and RequestCallback classes in GWT, you can implement XSRF protection by setting a custom header to contain the value of your cookie. Here is some sample code:
RequestBuilder rb = new RequestBuilder(RequestBuilder.POST, url); rb.setHeader("X-XSRF-Cookie", Cookies.getCookie("myCookieKey")); rb.sendRequest(null, myCallback);
If you are using GWT's RPC mechanism, the solution is unfortunately not quite as clean. However, there are still several ways you can accomplish it. For instance, you can add an argument to each method in your RemoteService interface that contains a String. That is, if you wanted this interface:
public interface MyInterface extends RemoteService { public boolean doSomething(); public void doSomethingElse(String arg); }
...you could actually use this:
public interface MyInterface extends RemoteService { public boolean doSomething(String cookieValue); public void doSomethingElse(String cookieValue, String arg); }
When you call the method, you would pass in the current cookie value that you fetch using
Cookies.getCookie(String).
If you prefer not to mark up your
RemoteService interfaces in this way, you can do other things instead. You might modify your data-transfer objects to have a field name containing the
cookieValue, and set that value whenever you create them. Perhaps the simplest solution is to simply add the cookie value to your URL as a
GET parameter. The important thing is to get the cookie value up to the server, somehow.
In all of these cases, of course, you'll have to have your server-side code compare the duplicate value with the actual cookie value and ensure that they're the same.
The GWT team is also considering enhancing the RPC system to make it easier to prevent XSRF attacks. Again though, that will only appear in a future version, and for now you should take precautions on your own.
Attacks against JSON and JSONP are pretty fundamental. Once the browser is running the code, there's nothing you can do to stop it. The best way to protect your server against JSON data theft is to avoid sending JSON data to an attacker in the first place.
That said, some people advise JSON developers to employ an extra precaution besides the cookie duplication XSRF countermeasure. In this model, your server code would wrap any JSON response strings within JavaScript block comments. For example, instead of returning
['foo', 'bar'] you would instead return
/*['foo', 'bar']*/.
The client code is then expected to strip the comment characters prior to passing the string to the eval function.
The primary effect of this is that it prevents your JSON data from being stolen via a <script> tag. If you normally expect your server to export JSON data in response to a direct XMLHTTPRequest, this technique would prevent attackers from executing an XSRF attack against your server and stealing the response data via one of the attacks linked to earlier.
If you only intend your JSON data to be returned via an XMLHTTPRequest, wrapping the data in a block comment prevents someone from stealing it via a <script> tag. If you are using JSON as the data format exposed by your own services and don't intend servers in other domains to use it, then there is no reason not to use this technique. It might keep your data safe even in the event that an attacker manages to forge a cookie.
You should also use the XSRF cookie-duplication countermeasure if you're exposing services for other mashups to use. However, if you're building a JSONP service that you want to expose publicly, the second comment-block technique we just described will be a hindrance.
The reason is that the comment-wrapping technique works by totally disabling support for <script> tags. Since that is at the heart of JSONP, it disables that technique. If you are building a web service that you want to be used by other sites for in-browser mashups, then this technique would prevent that.
Conversely, be very careful if you're building mashups with someone else's site! If your application is a "JSON consumer" fetching data from a different domain via dynamic <script> tags, you are exposed to any vulnerabilities they may have. If their site is compromised, your application could be as well. Unfortunately, with the current state of the art, there isn't much you can do about this. After all -- by using a <script> tag, you're trusting their site. You just have to be sure that your trust is well-placed.
In other words, if you have critical private information on your own server, you should probably avoid in-browser JSONP-style mashups with another site. Instead, you might consider building your server to act as a relay or proxy to the other site. With that technique, the browser only communicates with your site, which allows you to use more rigorous protections. It may also provide you with an additional opportunity to inspect strings for evil code.
Web 2.0 can be a scary place. Hopefully we've given you some food for thought and a few techniques you can implement to keep your users safe. Mostly, though, we hope we've instilled a good healthy dose of paranoia in you. If Benjamin Franklin were alive today, he might add a new "certainty" to his famous list: death, taxes... and people trying to crack your site. The only thing we can be sure of is that there will be other exploits in the future, so paranoia will serve you well over time.
As a final note, we'd like to stress one more time the importance of staying vigilant. This article is not an exhaustive list of the security threats to your application. This is just a primer, and someday it could become out of date. There may also be other attacks which we're simply unaware of. While we hope you found this information useful, the most important thing you can do for your users' security is to keep learning, and stay as well-informed as you can about security threats.
As always, if you have any feedback for us or would like to discuss this issue — now or in the future — please visit our GWT Developer Forum. | http://www.gquery.org/articles/security_for_gwt_applications.html | CC-MAIN-2018-09 | en | refinedweb |
An ANTLR-based Dart parser.
Eventual goal is compliance with the ECMA Standard.
Right now, it will need a lot more tests to prove it works.
Special thanks to Tiago Mazzutti for this port of ANTLR4 to Dart.
dependencies: dart_parser: ^1.0.0-dev
This will automatically install
antlr4dart as well. addition, I had to change the way strings are lexed. This is out of
line with the specification. The
stringLiteral now looks like this:
stringLiteral: (SINGLE_LINE_STRING | MULTI_LINE_STRING)+;
To handle the contents of strings, you will have to do it manually, like via Regex. Sorry!
In addition, I modified the rules of external declarations, so that
you could include metadata before the keyword
external. The rule
defined in the spec didn't permit that, although that's accepted by
dartanalyzer, dart2js, etc.(parser.compilationUnit()); }
This package includes a web app that diagrams your parse trees.
Alternatively, you can use
grun, which ships with the ANTLR tool
itself.?).
As always, thanks for using this.
Feel free to follow me on Twitter, or to check out my blog.
Add this to your package's pubspec.yaml file:
dependencies: dart_parser: "^1.0.0-dev+7"
You can install packages from the command line:
with pub:
$ pub get
with Flutter:
$ flutter packages get
Alternatively, your editor might support
pub get or
packages get.
Check the docs for your editor to learn more.
Now in your Dart code, you can use:
import 'package:dart_parser/dart_parser.dart';
We analyzed this package, and provided a score, details, and suggestions below.
Detected platforms: Flutter, web, other
No platform restriction found in primary library
package:dart_parser/dart_parser.dart.
Fix
lib/src/dartlang_parser.dart.
Strong-mode analysis of
lib/src/dartlang_parser.dartfailed with the following error:
line: 1197 col: 27
The method 'adaptivePredict' isn't defined for the class 'AtnSimulator'.
Maintain
CHANGELOG.md.
Changelog entries help clients to follow the progress in your code.
dart_parser.dart. | https://pub.dartlang.org/packages/dart_parser | CC-MAIN-2018-09 | en | refinedweb |
.
<property name="prefix" value="/WEB-INF/jsp/"/>
and
p:prefix="/WEB-INF/jsp/"
??
Basically, either version is acceptable (in Netbeans or any other development environment) as they are just two different alternatives to do the same thing. The only requirement is that for the second version, the "p:" namespace prefix needs to be declared in your top level "beans" xml element.
So yeah, you can choose whichever way you are happier with.
Note, I do realise that there are other differences in the two xml snippets above (the bean's class and the viewClass property). If that is what you are referring to, then there really isn't a path to translate the first into the second, they are two different (albeit very similar) things.
If I have missed the point somewhere, can you please elaborate! | https://www.experts-exchange.com/questions/28317132/Spring-XML-document.html | CC-MAIN-2018-09 | en | refinedweb |
A component is a nonvisual class designed specifically to integrate with a design-time environment such as Visual Studio .NET. WinForms provides several standard components, and .NET lets you build your own, gaining a great deal of design-time integration with very little work.
On the other hand, with a bit more effort, you can integrate nonvisual components and controls very tightly into the design-time environment, providing a rich development experience for the programmer using your custom components and controls.
Components
Recall from Chapter 8: Controls that controls gain integration into VS.NET merely by deriving from the Control base class in the System.Windows.Forms namespace. That's not the whole story. What makes a control special is that it's one kind of component: a .NET class that integrates with a design-time environment such as VS.NET. A component can show up on the Toolbox along with controls and can be dropped onto any design surface. Dropping a component onto a design surface makes it available to set the property or handle the events in the Designer, just as a control is. Figure 9.1 shows the difference between a hosted control and a hosted component.
Figure 9.1. Locations of Components and Controls Hosted on a Form
Standard Components
It's so useful to be able to create instances of nonvisual components and use the Designer to code against them that WinForms comes with several components out of the box:
Standard dialogs. The ColorDialog, FolderBrowserDialog, FontDialog, OpenFileDialog, PageSetupDialog, PrintDialog, PrintPreviewDialog, and SaveFileDialog classes make up the bulk of the standard components that WinForms provides. The printing-related components are covered in detail in Chapter 7: Printing.
Menus. The MainMenu and ContextMenu components provide a form's menu bar and a control's context menu. They're both covered in detail in Chapter 2: Forms.
User information. The ErrorProvider, HelpProvider, and ToolTip components provide the user with varying degrees of help in using a form and are covered in Chapter 2: Forms.
Notify icon. The NotifyIcon component puts an icon on the shell's TaskBar, giving the user a way to interact with an application without the screen real estate requirements of a window. For an example, see Appendix D: Standard WinForms Components and Controls.
Image List. The ImageList component keeps track of a developer-provided list of images for use with controls that need images when drawing. Chapter 8: Controls shows how to use them.
Timer. The Timer component fires an event at a set interval measured in milliseconds.
Using Standard Components
What makes components useful is that they can be manipulated in the design-time environment. For example, imagine that you'd like a user to be able to set an alarm in an application and to notify the user when the alarm goes off. You can implement that using a Timer component. Dropping a Timer component onto a Form allows you to set the Enabled and Interval properties as well as handle the Tick event in the Designer, which generates code such as the following into InitializeComponent:
void InitializeComponent() { this.components = new Container(); this.timer1 = new Timer(this.components); ... // timer1 this.timer1.Enabled = true; this.timer1.Tick += new EventHandler(this.timer1_Tick); ... }
As you have probably come to expect by now, the Designer-generated code looks very much like what you'd write yourself. What's interesting about this sample InitializeComponent implementation is that when a new component is created, it's put on a list with the other components on the form. This is similar to the Controls collection that is used by a form to keep track of the controls on the form.
After the Designer has generated most of the Timer-related code for us, we can implement the rest of the alarm functionality for our form:
DateTime alarm = DateTime.MaxValue; // No alarm void setAlarmButton_Click(object sender, EventArgs e) { alarm = dateTimePicker1.Value; } // Handle the Timer's Tick event void timer1_Tick(object sender, System.EventArgs e) { statusBar1.Text = DateTime.Now.TimeOfDay.ToString(); // Check to see whether we're within 1 second of the alarm double seconds = (DateTime.Now - alarm).TotalSeconds; if( (seconds >= 0) && (seconds <= 1) ) { alarm = DateTime.MaxValue; // Show alarm only once MessageBox.Show("Wake Up!"); } }
In this sample, when the timer goes off every 100 milliseconds (the default value), we check to see whether we're within 1 second of the alarm. If we are, we shut off the alarm and notify the user, as shown in Figure 9.2.
Figure 9.2. The Timer Component Firing Every 100 Milliseconds
If this kind of single-fire alarm is useful in more than one spot in your application, you might choose to encapsulate this functionality in a custom component for reuse.
Custom Components
A component is any class that implements the IComponent interface from the System.ComponentModel namespace:
interface IComponent : IDisposable { ISite Site { get; set; } event EventHandler Disposed; } interface IDisposable { void Dispose(); }
A class that implements the IComponent interface can be added to the Toolbox1 in VS.NET and dropped onto a design surface. When you drop a component onto a form, it shows itself in a tray below the form. Unlike controls, components don't draw themselves in a region on their container. In fact, you could think of components as nonvisual controls, because, just like controls, components can be managed in the design-time environment. However, it's more accurate to think of controls as visual components because controls implement IComponent, which is where they get their design-time integration.
A Sample Component
As an example, to package the alarm functionality we built earlier around the Timer component, let's build an AlarmComponent class. To create a new component class, right-click on the project and choose Add | Add Component, enter the name of your component class, and press OK. You'll be greeted with a blank design surface, as shown in Figure 9.3.
Figure 9.3. A New Component Design Surface
The design surface for a component is meant to host other components for use in implementing your new component. For example, we can drop our Timer component from the Toolbox onto the alarm component design surface. In this way, we can create and configure a timer component just as if we were hosting the timer on a form. Figure 9.4 shows the alarm component with a timer component configured for our needs.
Figure 9.4. A Component Design Surface Hosting a Timer Component
Switching to the code view2 for the component displays the following skeleton generated by the component project item template and filled in by the Designer for the timer:
using System; using System.ComponentModel; using System.Collections; using System.Diagnostics; namespace Components { /// <summary> /// Summary description for AlarmComponent. /// </summary> public class AlarmComponent : System.ComponentModel.Component { private Timer timer1; private System.ComponentModel.IContainer components; public AlarmComponent(System.ComponentModel.IContainer container) { /// <summary> /// Required for Windows.Forms Class Composition Designer support /// </summary> container.Add(this); InitializeComponent(); // // TODO: Add any constructor code after InitializeComponent call // } public AlarmComponent() { /// <summary> /// Required for Windows.Forms Class Composition Designer support /// </summary> InitializeComponent(); // // TODO: Add any constructor code after InitializeComponent call // } #region Component Designer generated code /// <summary> /// Required method for Designer support - do not modify /// the contents of this method with the code editor. /// </summary> private void InitializeComponent() { this.components = new System.ComponentModel.Container(); this.timer1 = new System.Windows.Forms.Timer(this.components); // // timer1 // this.timer1.Enabled = true; } #endregion } }
Notice that the custom component derives from the Component base class from the System.ComponentModel namespace. This class provides an implementation of IComponent for us.
After the timer is in place in the alarm component, it's a simple matter to move the alarm functionality from the form to the component by handling the timer's Tick event:
public class AlarmComponent : Component { ... DateTime alarm = DateTime.MaxValue; // No alarm public DateTime Alarm { get { return this.alarm; } set { this.alarm = value; } } // Handle the Timer's Tick event public event EventHandler AlarmSounded; void timer1_Tick(object sender, System.EventArgs e) { // Check to see whether we're within 1 second of the alarm double seconds = (DateTime.Now - this.alarm).TotalSeconds; if( (seconds >= 0) && (seconds <= 1) ) { this.alarm = DateTime.MaxValue; // Show alarm only once if( this.AlarmSounded != null ) { AlarmSounded(this, EventArgs.Empty); } } } }
This implementation is just like what the form was doing before, except that the alarm date and time are set via the public Alarm property; when the alarm sounds, an event is fired. Now we can simplify the form code to contain merely an instance of the AlarmComponent, setting the Alarm property and handling the AlarmSounded event:
public class AlarmForm : Form { AlarmComponent alarmComponent1; ... void InitializeComponent() { ... this.alarmComponent1 = new AlarmComponent(this.components); ... this.alarmComponent1.AlarmSounded += new EventHandler(this.alarmComponent1_AlarmSounded); ... } } void setAlarmButton_Click(object sender, EventArgs e) { alarmComponent1.Alarm = dateTimePicker1.Value; } void alarmComponent1_AlarmSounded(object sender, EventArgs e) { MessageBox.Show("Wake Up!"); }
In this code, the form uses an instance of AlarmComponent, setting the Alarm property based on user input and handling the AlarmSounded event when it's fired. The code does all this without any knowledge of the actual implementation, which is encapsulated inside the AlarmComponent itself.
Component Resource Management
Although components and controls are similar as far as their design-time interaction is concerned, they are not identical. The most obvious difference lies in the way they are drawn on the design surface. A less obvious difference is that the Designer does not generate the same hosting code for components that it does for controls. Specifically, a component gets extra code so that it can add itself to the container's list of components. When the container shuts down, it uses this list of components to notify all the components that they can release any resources that they're holding.
Controls don't need this extra code because they already get the Closed event, which is an equivalent notification for most purposes. To let the Designer know that it would like to be notified when its container goes away, a component can implement a public constructor that takes a single argument of type IContainer:
public AlarmComponent(IContainer container) { // Add object to container's list so that // we get notified when the container goes away container.Add(this); InitializeComponent(); }
Notice that the constructor uses the passed container interface to add itself as a container component. In the presence of this constructor, the Designer generates code that uses this constructor, passing it a container for the component to add itself to. Recall that the code to create the AlarmComponent uses this special constructor:
public class AlarmForm : Form { IContainer components; AlarmComponent alarmComponent1; ... void InitializeComponent() { this.components = new Container(); ... this.alarmComponent1 = new AlarmComponent(this.components); ... } }
By default, most of the VS.NET-generated classes that contain components will notify each component in the container as part of the Dispose method implementation:
public class AlarmForm : Form { ... IContainer components; ... // Overridden from the base class Component.Dispose method protected override void Dispose( bool disposing ) { if( disposing ) { if (components != null) { // Call each component's Dispose method components.Dispose(); } } base.Dispose( disposing ); } }
As you may recall from Chapter 4: Drawing Basics, the client is responsible for calling the Dispose method from the IDisposable interface. The IContainer interface derives from IDisposable, and the Container implementation of Dispose walks the list of components, calling IDisposable. Dispose on each one. A component that has added itself to the container can override the Component base class's Dispose method to catch the notification that is being disposed of:
public class AlarmComponent : Component { Timer timer1; IContainer components; ... void InitializeComponent() { this.components = new Container(); this.timer1 = new Timer(this.components); ... } protected override void Dispose(bool disposing) { if( disposing ) { // Release managed resources ... // Let contained components know to release their resources if( components != null ) { components.Dispose(); } } // Release unmanaged resources ... } }
Notice that, unlike the method that the client container is calling, the alarm component's Dispose method takes an argument. The Component base class routes the implementation of IDisposable.Dispose() to call its own Dispose(bool) method, with the Boolean argument disposing set to true. This is done to provide optimized, centralized resource management.
A disposing argument of true means that Dispose was called by a client that remembered to properly dispose of the component. In the case of our alarm component, the only resources we have to reclaim are those of the timer component we're using to provide our implementation, so we ask our own container to dispose of the components it's holding on our behalf. Because the Designer-generated code added the timer to our container, that's all we need to do.
A disposing argument of false means that the client forgot to properly dispose of the object and that the .NET Garbage Collector (GC) is calling our object's finalizer. A finalizer is a method that the GC calls when it's about to reclaim the memory associated with the object. Because the GC calls the finalizer at some indeterminate time—potentially long after the component is no longer needed (perhaps hours or days later)—the finalizer is a bad place to reclaim resources, but it's better than not reclaiming them at all.
The Component base class's finalizer implementation calls the Dispose method, passing a disposing argument of false, which indicates that the component shouldn't touch any of the managed objects it may contain. The other managed objects should remain untouched because the GC may have already disposed of them, and their state is undefined.
Any component that contains other objects that implement IDisposable, or handles to unmanaged resources, should implement the Dispose(bool) method to properly release those objects' resources when the component itself is being released by its container. | http://www.informit.com/articles/article.aspx?p=169528&seqNum=3 | CC-MAIN-2018-09 | en | refinedweb |
Unittesting Xtend Generators
Xtext offers nice Support for Unit Tests. But how to test a Xtend based Generator? This blogpost describes a simple approach for such a Test.
So let us take Xtext’s Hello World grammar as Starting point
Model: greetings+=Greeting*; Greeting: 'Hello' name=ID '!';
And following simple Generator
package org.xtext.example.mydsl.generator import org.eclipse.emf.ecore.resource.Resource import org.eclipse.xtext.generator.IFileSystemAccess import org.eclipse.xtext.generator.IGenerator import org.xtext.example.mydsl.myDsl.Greeting class MyDslGenerator implements IGenerator { override void doGenerate(Resource resource, IFileSystemAccess fsa) { for (g : resource.allContents.toIterable.filter(typeof(Greeting))) { fsa.generateFile(g.name+".java", ''' public class «g.name» { } ''') } } }
And here the Test
import org.junit.Test import org.junit.runner.RunWith import org.eclipse.xtext.junit4.XtextRunner import org.eclipse.xtext.junit4.InjectWith import org.xtext.example.mydsl.MyDslInjectorProvider import org.eclipse.xtext.generator.IGenerator import com.google.inject.Inject import org.eclipse.xtext.junit4.util.ParseHelper import org.xtext.example.mydsl.myDsl.Model import org.eclipse.xtext.generator.InMemoryFileSystemAccess import static org.junit.Assert.* import org.eclipse.xtext.generator.IFileSystemAccess @RunWith(typeof(XtextRunner)) @InjectWith(typeof(MyDslInjectorProvider)) class GeneratorTest { @Inject IGenerator underTest @Inject ParseHelper<Model> parseHelper @Test def test() { val model = parseHelper.parse(''' Hello Alice! Hello Bob! ''') val fsa = new InMemoryFileSystemAccess() underTest.doGenerate(model.eResource, fsa) println(fsa.files) assertEquals(2,fsa.files.size) assertTrue(fsa.files.containsKey(IFileSystemAccess::DEFAULT_OUTPUT+"Alice.java")) assertEquals( ''' public class Alice { } '''.toString, fsa.files.get(IFileSystemAccess::DEFAULT_OUTPUT+"Alice.java").toString ) assertTrue(fsa.files.containsKey(IFileSystemAccess::DEFAULT_OUTPUT+"Bob.java")) assertEquals( ''' public class Bob { } '''.toString, fsa.files.get(IFileSystemAccess::DEFAULT_OUTPUT+"Bob.java").toString) } }
But how does that work?
Xtext offers a specific
org.junit.runner.Runner. For Junit4 it is
org.junit.runner.Runner. This Runner allows in combination with a
org.eclipse.xtext.junit4.IInjectorProvider language specific injections within the test.
Since we have
fragment = junit.Junit4Fragment {} in our workflow
Xtext already Generated the Class
org.xtext.example.mydsl.MyDslInjectorProvider.
If we would not use Xtext at all we would have to create such a InjectorProvider manually.
To wire these things up we annotate your Test with
@RunWith(typeof(XtextRunner)) and
@InjectWith(typeof(MyDslInjectorProvider))
Now we can write our Test. This Basically consists of 3 steps
(1) read a model
(2) call the Generator
(3) Capture the Result
We solve Step (1) using Xtext’s
org.eclipse.xtext.junit4.util.ParseHelper and Step (3) by using a special kind of IFileSystemAccess that keeps the files InMemory and does not write them to the disk.
I hope this gives you a start writing you Xtext/Xtend Generator Tests.
Thanks Christian.
Thanks for the comprehensive overview, Christian.
With Xtext 2.3 it becomes even simpler for languages that use the Xbase expressions. The CompilationTestHelper from org.eclipse.xtext.xbase.compiler encapsulates a great bunch of the stuff that you had to do by yourself in your test code.
I am unsure if testing the generator output is a good idea. We have some of these tests in Spray, and they tend to break often, since templates change rather often. I would only test small pieces of the generator in that way. Most logic is normally in the Xtend functions that are used by the generator, I would test them more intensive. But anyway, if you would like to test it such, your approach is the right.
Hi,
guess this highly depends on what you want to achieve: if you have some extensions with some certain (business) logic it of course makes sense to write unit tests for them. never the less one might do some integration tests as well. especially in early phases of development when you might do bigger refactorings to the generator i find it easier to go with integration tests as with unit tests (one reason might be that refactoring support in Xtend is not yet that good as it is in JDT)
i just wanted to give some hints/a starting on how to do such a test and how to test with Xtext in general.
Regards, Christian
Hi Christian! Sure, this was no criticism. It really depends on the case. And often the problem is that just too few is tested, and especially the code generator not.
~Karsten
Hi Christian, thanks for this post. Is there a simple way to enhance this test if you have to add another resource to the ResourceSet before testing the code generation, e.g. if there are links from the DSL I want to test to another DSL?
Hi Annette,
use org.eclipse.xtext.junit4.util.ParseHelper.parse(InputStream, URI, Map, ResourceSet) multiple times with suitable uris and a single resourceset
if you use Xtext 2.3.0 be aware of
Christian, thanks for answering so fast. In my case I use two different DSLs and have to inject two ParseHelpers: one for ModelA of DSL A and one for ModelB of DSL B. If I am currently testing DSL B how can I inject the ParseHelper for DSL A?
Hmmm i think you need an InjectorProvider that inits both languages
public class ExtendedMyDslBInjectorProvider extends MyDslBInjectorProvider {
@Override
protected Injector internalCreateInjector() {
MyDslAStandaloneSetup.doSetup();
return super.internalCreateInjector();
}
}
then you need to fix the parsehelper (a bugzilla for that would be nice)
public class ParseHelper2 extends ParseHelper {
@Override
public T parse(InputStream in, URI uriToUse, Map options,
ResourceSet resourceSet) {
Resource resource = resourceSet.createResource(uriToUse);
resourceSet.getResources().add(resource);
try {
resource.load(in, options);
final T root = (T) (resource.getContents().isEmpty() ? null : resource.getContents().get(0));
return root;
} catch (IOException e) {
throw new WrappedException(e);
}
}
}
Thanks, Christian, for your reply. It is working fine and I added your solution to Bugzilla (Bug 386302).
FYI This doesn’t natively work in windows because of the carriage return character. I had to append
.replaceAll(“[\r]”, “”) to line 38 and 46 after each toString method call to get the assertion to not fail.
Hi if you use the system specific line breaks in the test file as well it should work anyway. At least in Xtext 2.3.0. Before it was a kind of hard coded \n
Hi Christian, very interesting post. Got one question: I’m using xtext 2.3.1 and the statement “underTest.doGenerate(model.eResource, fsa)” seems cannot be resolved by xtend. It complains the doGenerate() cannot be resolved. What is working is to change as followings:
val IGenerator g = undertest
g.doGenerate(model.eResource, fsa)
Could you give me short info why?
Thanks, hq
Hi,
are you sure the imports are right?
i dont know of any api changes at IGenerator in 2.3.1
maybe this i a bug in the xtend itself.
Hi Christian,
I also want to write a test where I use two different DSLs.
I implemented an ExtendedDslInjectorProvider as you suggested and have two CharSequences – each for the specifid DSLs – which I parsed with two ParseHelper-classes.
But when I access the resourceSet there is no connection between the two DSLs. The referring DSL only has Proxy-Objects for its counterpart. I can not navigate between them.
For example in the “Application-DSL”:
In the entity-Object I only see a Proxy and I can not get to the entityAttributes. The Set is returned empty.
val resource = resourceSet.getResource(URI.createURI(“test.platform”), true);
val app = resource.getContents().get(0) as Application
val entity = app.productCategory.head.entity
val entityAttributes = entity.attributes;
Hi if the resource exists and you call resource.load it should work perfectly. If not please share a complete runnable reproduceable example
Hi Christian,
thanks for the fast answer. I already call the resource.load-Method in the “ExtendedParseHelper”-class which I copied from your post with the “ParseHelper2”. But this does have not the desired effect.
I published the Test-Project under following link. Hopefully you can take a look?
The class where I try to connect the two DSLs is the class “ProductCategoryServiceTest”
I found the reason… the CharSequence of one DSL was not valid.
Hi Chris,
How to write Junit test for a grammar which has reference to another grammar .below is a sample example for a grammar
generate ad “”
import “” as adsym
Model:
greetings+=Greeting*;
Greeting:
‘Hello’ name=ID ‘!’;
=====================================
My Problem is when I try to write Junit for above model assertNotError of ValidationTestHelper does not recognise the rules imported by AdSym.
Please help here 🙂
Cheers
Kunal
Hi,
can you please elaborate what you mean by “does not recognize”?
I mean when I say “does not recognize is that” when I try to test Model using Junit like below code :
===================================================
@RunWith(typeof(XtextRunner))
@InjectWith(typeof (MyDslInjectorProvider))
class MyDslParserTest{
@Inject extension ParseHelper modelParserHelper
@Inject extension ValidationTestHelper
@Test
def testMyDslParser(){
var model=modelParserHelper.parse(”’
Hello Test!!
viewName:Player
”’)
model.assertNoErrors()
}
}
==========================================
I get below result when I run above Junit :
java.lang.AssertionError: Expected no errors, but got :ERROR (org.eclipse.xtext.diagnostics.Diagnostic.Linking) ‘Couldn’t resolve reference to ViewType ‘Player’.’ on ViewDefinition
=========================================
here View Name is Grammar that has been imported from other grammar file AdSym.
The problem is in such scenario, how do I write Junit to test grammar included by other grammars also .
Cheers
Kunal
P.S: if you actually want to read files of the imported stuff you have to
adapt org.xtext.example.mydsl.MyDslInjectorProvider.internalCreateInjector()
to call OtherDslStandaloneSetup.doSetup before returning the actual injector.
and you have to add all relevant resources to the resourceset involved (if you have multiple).
Hi,
I am also trying to write unit tests including a dependent DSL. The solution described using a modified InjectorProvider and a modified ParseHelper does not work for me. The overridden parse method in the ParseHelper still just returns one DSL model (T), and I need access to both. Can someone maybe elaborate on this solution?
Cheers,
Sebastian
Please create a thread in the Xtext forum with all necessary stuff attached
Thanks, Christian, will do.
Thanks for the post! I have a question regarding resource sets in IGenerator unit tests. In my generator I am doing some analysis involving the validation of referenced resources. For this I am using IResourceDescriptions (injected) in my IGenerator implementation; actually only for getting the IResourceDescription of a Resource. However if executed from within the unit test, the context (resourceSet) of the injected IResourceDescriptions is never set and hence null.
Is this a bad approach in general or is there a simple trick/configuration to make it work in unit tests?
For now I use a hack in my generator, actually casting down to the actual implementation of IResourceDescriptions (ResourceSetBasedResourceDescriptions) and manually setting the context if not already set; but this seems to be very hacky and something you definitely do not want in production code.
Did you try to inject and query a ResourceDescriptionsProvider instead
That did it. Runtime and unit test execution now work consistently. Thank you very much for that quick answer! | https://christiandietrich.wordpress.com/2012/05/08/unittesting-xtend-generators/ | CC-MAIN-2018-09 | en | refinedweb |
Signals and Slots - Closed
Hi there,
For my university assignment, I have created three classes. Film class, FilmGUI class (it is a form that accepts info about the Film) and FilmWriter class (this class writes to a file on the disk).
When I click on Save button in the FilmGUI class, I want the FilmWriter class to write to the text file on the disk.
I have declared a signal in the FilmGUI class that emits when the button is clicked. But I am not able to declare a slot in the Main class that would then pass the Film object to the FilmWriter class so that it can write to the file.
I want to declare a slot in the Main class and connect it with the signal in the FilmGUI class.
Please assist.
Thanks
- raven-worx Moderators
what do you have so far? What exactly do you mean "you're not able to declare a slot"?
Hi and welcome to devnet,
Have a look at Qt's documentation examples to see how in works.
As for the Main class, do you mean the main function ? If so, you can't declare a slot in there.
Just curious, how many are you in that class ?
Ok. Let me take it one step at a time.
Below is the Form that I have made
#include <QDialog>
#include "filmwriter.h"
namespace Ui {
class Dialog;
}
class Dialog : public QDialog
{
Q_OBJECT
public:
explicit Dialog(QWidget *parent = 0);
~Dialog();
QString title;
QString director;
int duration;
QDate releaseDate;
singals:
write();
private slots:
void on_pushButton_clicked();
private:
Ui::Dialog *ui;
};
#endif // DIALOG_H
When I try to compile it I get C3861 error. Identifier not found.
Below is the CPP file:
#include "dialog.h"
#include "ui_dialog.h"
#include "film.h"
#include "filmwriter.h"
Dialog::Dialog(QWidget *parent) :
QDialog(parent),
ui(new Ui::Dialog)
{
ui->setupUi(this);
}
Dialog::~Dialog()
{
delete ui;
}
void Dialog::on_pushButton_clicked()
{
title = ui->lineEdit->text();
director = ui->lineEdit_2->text();
duration = ui->lineEdit_3->text().toInt();
releaseDate = ui->dateEdit->date();
emit write();
}
- Jeroentjehome
Hi, welcome to devnet!
A quick comment about your post, for code examples/post always use the code insert option, that is"@code@" so it becomes readable to other programmers.
Second when stating an error, the compiler usually gives a line number where the error is detected. That shortens searching for us.
Did you read the tutorial of signal/slots? "here!":
- raven-worx Moderators
oh no ... post messages got lost again ... :/
To your question where you should define a slot:
Do this in every QObject subclass. In your case probably in the FilmWriter class.
Ok. Lets say I do define the slot in FilmWriter.
- I click the button in the GUI class.
- In the button clicked event, I make a Film class with the data entered in the form.
- When the button is clicked in the GUI class then it should emit a signal to write the info to the file.
- First I need to create the FilmWriter class to use its slot. The FilmWriter class takes a Film as parameter in the constructor.
My question then is where should I do this part?
Thanks
As I was saying in my lost message:
Why not do the writing in on_pushButton_clicked ? Would be a lot simpler
- Jeroentjehome
Hi,
You can't emit a signal to a slot that's not there yet. If you still need to create the FilmWriter class the slot will not exist when the signal is emitted. Like SGalst says is probably the easiest way to do so. In the on_pushbutton create a FilmWriter class (function scope), handle the write to file there, and exit the function (FilmWriter class) get's deleted.
Greetz
Thanks guys done that. Now a new problem.
The header for FilmWriter:
@#include <QtCore>
#include <QTextStream>
#include <QFile>
#include <QString>
#include "film.h"
class FilmWriter
{
public:
FilmWriter();
FilmWriter(Film myFilm);
private:
};
@
CPP for filmWriter
Film.getDirector(); out<< myFilm.getDuration(); out<< myFilm.getTitle(); mFile.flush(); mFile.close();
}@
I am getting two errors:
c:\qt\qt5.0.2\tools\qtcreator\bin\assignment1ques1\filmwriter.h:17: error: C2061: syntax error : identifier 'Film'
c:\qt\qt5.0.2\tools\qtcreator\bin\assignment1ques1\filmwriter.h:17: error: C2535: 'FilmWriter::FilmWriter(void)' : member function already defined or declared
Please help.
Sorted. Circular dependency. I had declared a FilmWriter class in Film class and Film class in FilmWriter Class.
Thanks all for the help. Much appreciated.
If it's all good now, don't forget to update the thread's title to closed so other forum users may know a solution has been found
I am trying to update the title to closed but not happening. Will keep trying.
Sorry, I meant "solved" not "closed" | https://forum.qt.io/topic/30712/signals-and-slots-closed | CC-MAIN-2018-09 | en | refinedweb |
Append an IPv6 hop-by-hop or destination option to an ancillary data object
#include <netinet/in.h> int inet6_option_append(struct cmsghdr *cmsg, const u_int8_t *typep, int multx, int plusy);
The option type must have a value from 2 to 255, inclusive. (0 and 1 are reserved for the Pad1 and PadN options, respectively.)
The option data length must be between 0 and 255, inclusive, and is the length of the option data that follows.
The inet6_option_append() function appends a hop-by-hop option or a destination option to an ancillary data object that has been initialized by inet6_option_init().
See also: | http://www.qnx.com/developers/docs/6.6.0_anm11_wf10/com.qnx.doc.neutrino.lib_ref/topic/i/inet6_option_append.html | CC-MAIN-2018-09 | en | refinedweb |
Raise the SIGABRT signal to terminate program execution
#include <stdlib.h> void abort( void );
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
The abort() function causes abnormal process termination to occur by means of the function call raise(SIGABRT), unless the signal SIGABRT is caught and the signal handler doesn't return. If the signal SIGABRT is caught and the signal handler returns, the signal handler is removed and raise(SIGABRT) is called again. Note that prior to calling raise(), abort() ensures that SIGABRT isn't ignored or blocked.
The abort() function doesn't return to its caller.
#include <stdlib.h> int main( void ) { int major_error = 1; if( major_error ) abort(); /* You'll never get here. */ return EXIT_SUCCESS; }
A strictly-conforming POSIX application can't assume that the abort() function is safe to use in a signal handler on other platforms. | http://www.qnx.com/developers/docs/7.0.0/com.qnx.doc.neutrino.lib_ref/topic/a/abort.html | CC-MAIN-2018-09 | en | refinedweb |
Generate a readable string from an IPsec policy specification
#include <netinet6/ipsec.h> char* ipsec_dump_policy(char *buf, char *delim);
The ipsec_dump_policy() function generates a readable string from an IPSEC policy specification. Refer to ipsec_set_policy() for details about the policies.
The ipsec_dump_policy() function converts IPsec policy structure into a readable form. Therefore, ipsec_dump_policy() is the inverse of ipsec_set_policy(). If you set delim to NULL, a single whitespace is assumed. The function ipsec_dump_policy() returns a pointer to a dynamically allocated string. It is the caller's responsibility to reclaim the region, by using free().
A pointer to dynamically allocated string, or NULL if an error occurs.
See ipsec_set_policy(). | http://www.qnx.com/developers/docs/7.0.0/com.qnx.doc.neutrino.lib_ref/topic/i/ipsec_dump_policy.html | CC-MAIN-2018-09 | en | refinedweb |
This chapter provides information for administering OC4J in standalone mode for development purposes. Chapter 1, "Configuration and Deployment", discusses the easiest method for configuring, developing, and deploying a J2EE application. However, if you want to use other services, such as JMS, you must know how to manipulate the XML configuration files.
This chapter discusses the following topics:
Overview of OC4J and J2EE XML Files
Manually Adding Applications in a Development Environment
Building and Deploying Within a Directory
OC4J Automatic Deployment for Applications
Changing XML Files After Deployment
Designating a Parent of Your Application
Developing Startup and Shutdown Classes
Setting Performance Options
This section contains the following topics:
XML Configuration File Overview
XML File Interrelationships
Because OC4J is configured solely through XML files, you must understand the role and method for a set of XML files. Each XML file exists to satisfy a certain role; thus, if you have need of that role, you will understand which XML file to modify and maintain.
Figure 2-1.
OC4J server configuration files exist under the
j2ee/home/config/ directory — These files configure the OC4J server and point to other key configuration files. The settings in the OC4J configuration files are not related to the deployed J2EE applications directly, but to the server itself. 2-1 OC4J and J2EE Application Files
Table 2-1 describes the role and function for each XML file that was displayed in the preceding figure.
Some of these XML files are interrelated. That is, some of these XML files reference other XML files—both OC4J configuration and J2EE application (see Figure 2-3).
Here are the interrelated files:
server.xml—contains references to the following:
All
*-web-site files for each Web site for this OC4J server, including the default
http-web-site.xml file.
The location of each of the other OC4J server configuration files, except
principals.xml, which is defined in the global
application.xml, shown in Figure 2-1
The location of each
application.xml file for each J2EE application that has been deployed in OC4J
http-web-site.xml—references applications by name, as defined in the
server.xml file. And this file references an application-specific EAR file.
application.xml—contains a reference to the
principals.xml file.
The
server.xml file is the keystone that contains references to most of the files used within the OC4J server. Figure 2-2 shows the XML files that can be referenced in the
server.xml file:
Figure 2-2 XML Files Referenced Within server.xml
Figure 2-3> element denotes the name and location of the
rmi.xml file.
The
<jms-config> element denotes the name and location of the
jms.xml file.
The
<global-application> element denotes the name and location of the global
application.xml file.
The
<global-web-app-config> element denotes the name and location of the
global-web-application.xml file.
The
<web-site> element. You can deploy applications through the
admin.jar command using the
-deploy option or by modifying the
server.xml file directly. Each deployed application is denoted by the
<application> element. See "Manually Adding Applications in a Development Environment" for more information on directly editing the
server.xml file.
Figure 2-3 Server.xml File and Related XML Files
Other elements.
When you are in a development environment, it is easier to modify XML files than to use the admin.jar command for each iteration of development. The following sections help you understand how to modify your XML configuration files:
Configuring J2EE Applications
Each OC4J server is configured to listen on HTTP or RMI protocols for incoming requests. Each OC4J Web server is configured within its own
*-web-site.xml file.
HTTP protocol listener—HTTP clients can access an OC4J HTTP listener directly. This involves configuring an
http-web-site.xml file, which indicates the HTTP listener port. The default HTTP port is 8888. The following shows the entry in the
http-web-site.xml for an HTTP listener with a port number of 8888:
<web-site
RMI protocol listener—EJB clients and the OC4J tools, such as
admin.jar, access the OC4J server through a configured RMI port. This involves configuring the
rmi.xml file. The default RMI port is 23791. The following shows the default RMI port number configured in the
rmi.xml file:
<rmi-server
To configure and deploy your J2EE applications, modify the
server.xml and
http-web-site.xml files with your application information.
In
server.xml, add a new or modify the existing
<application name=... path=... entry for each application that you want automatically started when OC4J starts. The path points to either the location of the EAR file to be deployed or the exploded directory where the application has been built. See "Deployment In a Production Environment Using ADMIN.JAR" or "Building and Deploying Within a Directory" for more information.
In
http-web-site.xml, add a
<web-app...> entry for each Web application you want bound to the Web site upon OC4J startup. Because the
name attribute is the WAR filename (without the
.war extension), you must have one line for each WAR file included in your J2EE application.
For Web application binding using a WAR file, add the following:
://
oc4j_host
:8888", then to initiate the application, point your browser at "
http://
oc4j_host
:8888/myapp". Development Application Directory Structure, 2-4 displays the necessary directory structure.
Figure 2-4 Development Application Directory Structure
Development Application Directory Structure
To deploy EJB or complex J2EE applications in an expanded directory format, complete the following steps:
Place the files in any directory. Figure 2-4 2-4 2-4, this is
j2ee/home/applications/
appname/.
You can specify the path in one of two manners:
Specifying the full path from root to the parent directory.
In the example in Figure 2-4, if
appname is
"myapp", then the fully-qualified path is as follows:
<
Specifying the relative path. The path is relative to where the
server.xml file exists to where the parent directory lives.
In the example in Figure 2-4, 2-4
OC4J automatically deploys an application if the timestamp on an EAR file has changed. Restarting OC4J to deploy or redeploy applications is not necessary. Automatic deployment is not enabled in all cases, but deployment occurs in the following cases:
changes to EAR files are checked
If you change the EAR file, OC4J automatically detects the change. OC4J detects the timestamp change and redeploys the application.
change in timestamp of certain XML files in the exploded directory format (The
appname directory) that is discussed in "Building and Deploying Within a Directory". For automatic deployment of exploded directory applications, you must do the following:
Modify the classes in the
<module> and touch its J2EE deployment descriptor to change the timestamp on the XML file. For example, if you modify servlet classes, you must touch its
web.xml file. This notifies OC4J that changes occurred in this
<module>.
Touch the
application.xml of this application. Changing the timestamp of the
application.xml starts the automatic deployment. Once started, OC4J checks which modules to redeploy by noticing which module deployment descriptors have timestamp changes.
When OC4J does not check for updates, redeploy by either using the
admin.jar command-line tool or restarting the OC4J server manually. See "Options for the OC4J Administration Management JAR" for a description of the
-deploy option. 2-5.
Figure 2-5 Development Application Directory Structure
A child application can see the namespace of its parent application. Thus, setting up an application as a parent is used to share services among children. The default parent is the global application.
To set up an application as a parent of another, you can do one of the following:
Use the
-parent option of the
admin.jar command when deploying the originating application. This option allows you to designate what application will be the parent of the deploying application.
Specify the parent in the application definition line in the
server.xml file. Each application is defined by an
<application> element in the
server.xml file. In this element, a
parent attribute designates the parent application.
<application ... parent="applicationWithCommonClasses" .../> 2 "Standalone OC4J Command-Line Options and. Idle threads In the pool are used first before a new thread is spawned. 2 2 element has a set of log files, as shown in Table 2-3. If there are multiple processes running for an OC4J instance, there is a multiple set of log files.
There are two types of log files:
Text Log Files: The messages logged in these files are text-based and not in XML format. You can read these messages with any editor. This is the default. Normally, those who use OC4J standalone would benefit from viewing their log messages in a text format.
Oracle Diagnostic Logging (ODL) Log Files: The messages logged in these files use an XML format that can be read by a GUI tool, such as the Oracle Enterprise Manager 10g GUI. We recommend that you use this format for your logging when you are using OC4J within Oracle Application Server. XML files in Table 2-3. Text messaging is enabled in the
<file> subelement the
<log> element of the XML files, except the
http-web-site.xml file. For the
http 2-4, but you can specify the location and filename within the path attribute of the
<log> or
<access-log> elements.
Table 2-4 shows the default location for the log files for a standalone OC4J. You can modify the location and names of these files by modifying the configuration files described in Table 2-3.>
The ODL log entries are each written out in XML format in its respective log file. Each XML message can be read 2-3, you enable ODL logging by uncommenting the ODL configuration line, as follows:
Uncomment the
<odl> element within the
<log> element in all XML files listed in Table 2-3, except for the
http-web-site.xml file.
Add the
<odl-access-log> element in the
http
<install-dir>/j2ee/home/log/server directory in the
server.xml file, configure the following:
<log> <odl path="../log/server/" max- </log>
When OC4J is executing, all log messages that are server oriented are logged in the
<install-dir>/j2ee
http-web-site.xml file. For the
http-web-site.xml file, turn off the text logging by commenting out the
<access-log> element.
Many developers use the
System.out.println() and
System.err.println() methods in their applications to generate debug information. Normally, the output from these method calls are printed to the console where the OC4J process is started. However, you can specify command-line options when starting OC4J to direct the
STDOUT and
STDERR output directly to files. The
-out and
-err parameters inform OC4J where to direct the error messages. The following startup command includes and example of the -out and -err parameters:
$ java -jar oc4j.jar -out d:\log-files\oc4j.out -err d:\log-files\oc4j.err
In this case, all information written to
STDOUT and
STDERR is printed to the files
d:\log-files\oc4j.out and
d:\log-files\oc4j.err respectively..
OC4J provides several debug properties for generating additional information on the operations performed by the various sub-systems of OC4J. These debug properties can be set for a particular sub-system while starting up OC4J.
The following table provides useful debug options available with OC4J. These debug options have two states either true or false. By default these are set to false. For a complete list of debug properties, see "OC4J System Properties".
For example, if you want to generate debug information on HTTP session events then you start OC4J, as follows:
java -Dhttp.session.debug=true -jar oc4j.jar
After OC4J is started with a specific debug option, debug information is generated and routed to standard output. In the above example, you would see HTTP session information on your OC4J console, as follows:
Oracle Application Server Containers for J2EE initialized Created session with id '36c04d8a1cd64ef2b6a9ba6e2ac6637e' at Mon Apr 15 12:24:20 PDT 2002, secure-only: false Created session with id '36c04d8a1cd64ef2b6a9ba6e2ac6637e' at Mon APR 15 12:36:06 PDT 2002, secure-only: false Invalidating session with id '36c04d8a1cd64ef2b6a9ba6e2ac6637e' at Mon APR 15 12:44:32 PDT 2002 (created at Mon APR 15 12:24:23 PDT 2002) due to timeout
If you want to save this debug information, then you can redirect your standard output to a file using the
-out or
-err command-line options, as follows:
java -Dhttp.session.debug=true -jar oc4j.jar -out oc4j.out -err oc4j.err OC4J command-line options. The following examples show the output with and without verbosity:
Example 2-3 Error Messages Displayed Without Veribosity
D:\oc4j903\j2ee\home>java -jar oc4j.jar Oracle Application Server Containers for J2EE initialized
Example 2.
java -Dhttp.session.debug=true -Ddatasource.verbose=true -jar oc4j.jar
Then, re-execute your servlet and see the following type of debug information in the standard output for the OC4J process:
DataSource logwriter activated... jdbc:oracle:thin:@localhost:1521/MYSERVICE: Started jdbc:oracle:thin:@localhost:1521/MYSERVICE: OC4J Pooled jdbc:oracle:thin:@localhost:1521/MYSERVICE null: Connection XA XA OC4J Pooled jdbc:oracle:thin:@localhost:1521/MYSERVICE allocated (Pool size: 0) jdbc:oracle:thin:@localhost:1521/MYSERVICE: Opened connection Created new physical connection: Pooled oracle.jdbc.driver.OracleConnection@5f18 Pooled jdbc:oracle:thin:@localhost:1521/MYSERVICE: Connection Pooled oracle.jdbc.driver.OracleConnection@5f1832 allocated (Pool size: 0) Pooled jdbc:oracle:thin:@localhost:1521/MYSERVICE: Releasing connection Pooled oracle.jdbc.driver.OracleConnection@5f1832 to pool (Pool size: 1) null: Releasing connection XA XA OC4J Pooled jdbc:oracle:thin:@localhost:1521/MYSERVICE to pool (Pool size: 1) OC4J Pooled jdbc:oracle:thin:@localhost:1521/MYSERVICE: Cache timeout, closing connection (Pool size: 0) com.evermind.sql.OrionCMTDataSource/default/jdbc/OracleDS: Cache timeout, closing connection (Pool size: 0) | https://docs.oracle.com/cd/B14099_19/web.1012/b14361/advanced.htm | CC-MAIN-2018-09 | en | refinedweb |
#include <sys/scsi/scsi.h> void scsi_vu_errmsg(struct scsi_pkt *pktp, char *drv_name, int severity, int err_blkno, struct scsi_key_strings *cmdlist, struct scsi_extended_sense *sensep, struct scsi_asq_key_strings *asc_list, char **decode_fru struct scsi_device*, char *, int, char);
Solaris DDI specific (Solaris DDI).
The following parameters are supported:.
A pointer to a array of asc and ascq message list.The list must be terminated with -1 asc value.
This is a function pointer that will be called after the entire sense information has been decoded. The parameters will be the scsi_device structure to identify the device. Second argument will be a pointer to a buffer of length specified by third argument. The fourth argument will be the FRU byte. decode_fru might be NULL if no special decoding is required. decode_fru is expected to return pointer to a char string if decoding possible and NULL if no decoding is possible.
This function is very similar to scsi_errmsg(9F) but allows decoding of vendor-unique ASC/ASCQ and FRU information.
The scsi_vu_errmsg() function interprets the request sense information in the sensep pointer and generates a standard message that is displayed using scsi_log(9F). It first searches the list array for a matching vendor unique code if supplied. If it does not find one in the list then the standard list is searched. table below:
Severity Value: String: SCSI_ERR_ALL All SCSI_ERR_UNKNOWN Unknown SCSI_ERR_INFO Information SCSI_ERR_RECOVERED Recovered SCSI_ERR_RETRYABLE Retryable SCSI_ERR_FATAL Fatal_vu_errmsg() function may be called from user, interrupt, or kernel context.
struct scsi_asq_key_strings cd_slist[] = { 0x81, 0, "Logical Unit is inaccessable", -1, 0, NULL, }; scsi_vu_errmsg(devp, pkt, "sd", SCSI_ERR_INFO, bp->b_blkno, err_blkno, sd_cmds, rqsense, cd_list, my_decode_fru);
This generates the following console warning:
WARNING: /sbus@1,f8000000/esp@0,800000/sd@1,0 (sd1): Error for Command: read Error Level: Informational Requested Block: 23936 Error Block: 23936 Vendor: XYZ Serial Number: 123456 Sense Key: Unit Attention ASC: 0x81 (Logical Unit is inaccessable), ASCQ: 0x0 FRU: 0x11 (replace LUN 1, located in slot 1)
cmn_err(9F), scsi_errmsg(9F), scsi_log(9F), scsi_errmsg(9F), scsi_asc_key_strings(9S), scsi_device (9S), scsi_extended_sense(9S), scsi_pkt(9S)
Writing Device Drivers for Oracle Solaris 11.2
STREAMS Programming Guide | https://docs.oracle.com/cd/E36784_01/html/E36886/scsi-vu-errmsg-9f.html | CC-MAIN-2018-09 | en | refinedweb |
Opened 5 years ago
Closed 5 years ago
#517 closed enhancement (fixed)
remove dependency on structmember.h
Description
We could get rid of structmember.h by using the a code transform to manage cdef public/readonly members via properties.
Pros:
1) structmember.h has no namespace protection using a Py_ prefix for the many 'T_XXX' definitions. This can conflict with user code (BTW, and this issue was reported in cython-users ML).
2) when using PyMemberDef? tables from structmember.h, C <-> Python conversions are outside Cython's control of to_py/from_py converters. Using a transform+property mechanism is more consistent, and actually extend (for free, with no special casing at all) the types that can be made cdef public/readonly members (e.g. complex numbers, C struct's)
Cons:
1) Generated code is less readable and larger
Attachments (1)
Change History (3)
comment:1 Changed 5 years ago by dalcinl
- Status changed from new to assigned
- Summary changed from remove dependency on CPython's structmember.h to remove dependency on structmember.h
Changed 5 years ago by dalcinl
comment:2 Changed 5 years ago by dalcinl
- Resolution set to fixed
- Status changed from assigned to closed
Fixed: | http://trac.cython.org/ticket/517 | CC-MAIN-2015-27 | en | refinedweb |
It's time to write some Scala code. Before we start on the in-depth Scala tutorial, we put in two chapters that will give you the big picture of Scala, and most importantly, get you writing code. We encourage you to actually try out all the code examples presented in this chapter and the next as you go. The best way to start learning Scala is to program in it.
To run the examples in this chapter, you should have a standard Scala installation. To get one, go to and follow the directions for your platform. You can also use a Scala plug-in for Eclipse, IntelliJ, or NetBeans, but for the steps in this chapter, we'll assume you're using the Scala distribution from scala-lang.org.[1]
If you are a veteran programmer new to Scala, the next two chapters should give you enough understanding to enable you to start writing useful programs in Scala. If you are less experienced, some of the material may seem a bit mysterious to you. But don't worry. To get you up to speed quickly, we had to leave out some details. Everything will be explained in a less "fire hose" fashion in later chapters. In addition, we inserted quite a few footnotes in these next two chapters to point you to later sections of the book where you'll find more detailed explanations.
The easiest way to get started with Scala is by using the Scala interpreter, an interactive "shell" for writing Scala expressions and programs. Simply type an expression into the interpreter and it will evaluate the expression and print the resulting value. The interactive shell for Scala is simply called scala. You use it by typing scala at a command prompt:[2]
$ scala Welcome to Scala version 2.7.2. Type in expressions to have them evaluated. Type :help for more information.
scala>
After you type an expression, such as 1 + 2, and hit enter:
scala> 1 + 2
The interpreter will print:
res0: Int = 3
This line includes:
The type Int names the class Int in the package scala. Packages in Scala are similar to packages in Java: they partition the global namespace and provide a mechanism for information hiding.[3] Values of class Int correspond to Java's int values. More generally, all of Java's primitive types have corresponding classes in the scala package. For example, scala.Boolean corresponds to Java's boolean. scala.Float corresponds to Java's float. And when you compile your Scala code to Java bytecodes, the Scala compiler will use Java's primitive types where possible to give you the performance benefits of the primitive types.
The resX identifier may be used in later lines. For instance, since res0 was set to 3 previously, res0 * 3 will be 9:
scala> res0 * 3 res1: Int = 9
To print the necessary, but not sufficient, Hello, world! greeting, type:
scala> println("Hello, world!") Hello, world!The println function prints the passed string to the standard output, similar to System.out.println in Java.
Scala has two kinds of variables, vals and vars. A val is similar to a final variable in Java. Once initialized, a val can never be reassigned. A var, by contrast, is similar to a non-final variable in Java. A var can be reassigned throughout its lifetime. Here's a val definition:
scala> val msg = "Hello, world!" msg: java.lang.String = Hello, world!This statement introduces msg as a name for the string "Hello, world!". The type of msg is java.lang.String, because Scala strings are implemented by Java's String class.
If you're used to declaring variables in Java, you'll notice one striking difference here: neither java.lang.String nor String appear anywhere in the val definition. This example illustrates type inference, Scala's ability to figure out types you leave off. In this case, because you initialized msg with a string literal, Scala inferred the type of msg to be String. contrast to Java, where you specify a variable's type before its name, in Scala you specify a variable's type after its name, separated by a colon. For example:
scala> val msg2: java.lang.String = "Hello again, world!" msg2: java.lang.String = Hello again, world!
Or, since java.lang types are visible with their simple names[4] in Scala programs, simply:
scala> val msg3: String = "Hello yet again, world!" msg3: String = Hello yet again, world!
Going back to the original msg, now that it is defined, you can use it as you'd expect, for example:
scala> println(msg) Hello, world!
What you can't do with msg, given that it is a val, not a var, is reassign it.[5] For example, see how the interpreter complains when you attempt the following:
scala> msg = "Goodbye cruel world!" <console>:5: error: reassignment to val msg = "Goodbye cruel world!" ^
If reassignment is what you want, you'll need to use a var, as in:
scala> var greeting = "Hello, world!" greeting: java.lang.String = Hello, world!
Since greeting is a var not a val, you can reassign it later. If you are feeling grouchy later, for example, you could change your greeting to:
scala> greeting = "Leave me alone, world!" greeting: java.lang.String = Leave me alone, world!
To enter something into the interpreter that spans multiple lines, just keep typing after the first line. If the code you typed so far is not complete, the interpreter will respond with a vertical bar on the next line.
scala> val multiLine = | "This is the next line." multiLine: java.lang.String = This is the next line.
If you realize you have typed something wrong, but the interpreter is still waiting for more input, you can escape by pressing enter twice:
scala> val oops = | | You typed two blank lines. Starting a new command.
scala>
In the rest of the book, we'll leave out the vertical bars to make the code easier to read (and easier to copy and paste from the PDF eBook into the interpreter).
Now that you've worked with Scala variables, you'll probably want to write some functions. Here's how you do that in Scala:
scala> def max(x: Int, y: Int): Int = { if (x > y) x else y } max: (Int,Int)IntFunction definitions start with def. The function's name, in this case max, is followed by a comma-separated list of parameters in parentheses. A type annotation must follow every function parameter, preceded by a colon, because the Scala compiler (and interpreter, but from now on we'll just say compiler) does not infer function parameter types. In this example, the function named max takes two parameters, x and y, both of type Int. After the close parenthesis of max's parameter list you'll find another ": Int" type annotation. This one defines the result type of the max function itself.[6] Following the function's result type is an equals sign and pair of curly braces that contain the body of the function. In this case, the body contains a single if expression, which selects either x or y, whichever is greater, as the result of the max function. As demonstrated here, Scala's if expression can result in a value, similar to Java's ternary operator. For example, the Scala expression "if (x > y) x else y" behaves similarly to "(x > y) ? x : y" in Java. The equals sign that precedes the body of a function hints that in the functional world view, a function defines an expression that results in a value. The basic structure of a function is illustrated in Figure 2.1.
Sometimes the Scala compiler will require you to specify the result type of a function. If the function is recursive,[7] for example, you must explicitly specify the function's result type. In the case of max however, you may leave the result type off and the compiler will infer it.[8] Also, if a function consists of just one statement, you can optionally leave off the curly braces. Thus, you could alternatively write the max function like this:
scala> def max2(x: Int, y: Int) = if (x > y) x else y max2: (Int,Int)Int
Once you have defined a function, you can call it by name, as in:
scala> max(3, 5) res6: Int = 5
Here's the definition of a function that takes no parameters and returns no interesting result:
scala> def greet() = println("Hello, world!") greet: ()UnitWhen you define the greet() function, the interpreter will respond with greet: ()Unit. "greet" is, of course, the name of the function. The empty parentheses indicate the function takes no parameters. And Unit is greet's result type. A result type of Unit indicates the function returns no interesting value. Scala's Unit type is similar to Java's void type, and in fact every void-returning method in Java is mapped to a Unit-returning method in Scala. Methods with the result type of Unit, therefore, are only executed for their side effects. In the case of greet(), the side effect is a friendly greeting printed to the standard output.
In the next step, you'll place Scala code in a file and run it as a script. If you wish to exit the interpreter, you can do so by entering :quit or :q.
scala> :quit $
Although Scala is designed to help programmers build very large-scale systems, it also scales down nicely to scripting. A script is just a sequence of statements in a file that will be executed sequentially. Put this into a file named hello.scala:
println("Hello, world, from a script!")
$ scala hello.scala
And you should get yet another greeting:
Hello, world, from a script!
Command line arguments to a Scala script are available via a Scala array named args. In Scala, arrays are zero based, and you access an element by specifying an index in parentheses. So the first element in a Scala array named steps is steps(0), not steps[0], as in Java. To try this out, type the following into a new file named helloarg.scala:
// Say hello to the first argument println("Hello, "+ args(0) +"!")
$ scala helloarg.scala planet
In this command, "planet" is passed as a command line argument, which is accessed in the script as args(0). Thus, you should see:
Hello, planet!
Note that this script included a comment. The Scala compiler will ignore characters between // and the next end of line and any characters between /* and */. This example also shows Strings being concatenated with the + operator. This works as you'd expect. The expression "Hello, "+"world!" will result in the string "Hello, world!".[10]
To try out a while, type the following into a file named printargs.scala:
var i = 0 while (i < args.length) { println(args(i)) i += 1 }
Although the examples in this section help explain while loops, they do not demonstrate the best Scala style. In the next section, you'll see better approaches that avoid iterating through arrays with indexes.:$sn1234$
Scala is fun
For even more fun, type the following code into a new file with the name echoargs.scala:
var i = 0 while any of them, Scala does use semicolons to separate statements as in Java, except that in Scala the semicolons are very often optional, giving some welcome relief to your right little finger. If you had been in a more verbose mood, therefore, you could have written the echoargs.scala script as follows:
var i = 0; while (i < args.length) { if (i != 0) { print(" "); } print(args(i)); i += 1; } println();
Although you may not have realized it, when you wrote the while loops in the previous step, you were programming in an imperative style. In the imperative style, which is the style you normally use with languages like Java, C++, and C, you give one imperative command at a time, iterate with loops, and often mutate state shared between different functions. Scala enables you to program imperatively, but as you get to know Scala better, you'll likely often find yourself programming in a more functional style. In fact, one of the main aims of this book is to help you become as comfortable with the functional style as you are with imperative a function literal that takes one parameter named arg. The body of the function is println(arg). If you type the above code into a new file named pa.scala, and execute with the command:
$ scala pa.scala Concise is nice
You should see:
Concise is nice
In the previous example, the Scala interpreter infers the type of arg to be String, since String is the element type of the array on which you're calling foreach. If you'd prefer to be more explicit, you can mention the type name, but when you do you'll need to wrap the argument portion in parentheses (which is the normal form of the syntax anyway):
args.foreach((arg: String) => println(arg))
Running this script has the same behavior as the previous one.
If you're in the mood for more conciseness instead of more explicitness, you can take advantage of a special shorthand in Scala. If a function literal consists of one statement that takes a single argument, you need not explicitly name and specify the argument.[11] Thus, the following code also works:
args.foreach(println)
To summarize, the syntax for a function literal is a list of named parameters, in parentheses, a right arrow, and then the body of the function. This syntax is illustrated in Figure 2.2.
Now, by this point you may be wondering what happened to those trusty for loops you have been accustomed to using in imperative languages such as Java or C. In an effort to guide you in a functional direction, only a functional relative of the imperative for (called a for expression) is available in Scala. While you won't see their full power and expressiveness until you reach (or peek ahead to) Section 7.3, we'll give you a glimpse here. In a new file named forargs.scala, type the following:
for (arg <- args) println(arg)
The parentheses after the "for" contain arg <- args.[12] To the right of the <- symbol is the familiar args array. To the left of <- is "arg", the name of a val, not a var. (Because it is always a val, you just write "arg" by itself, not "val arg".) Although arg may seem to be a var, because it will get a new value on each iteration, it really is a val: arg can't be reassigned inside the body of the for expression. Instead, for each element of the args array, a new arg val will be created and initialized to the element value, and the body of the for will be executed.
If you run the forargs.scala script with the command:
$ scala forargs.scala for arg in args
You'll see:
for arg in argsScala's for expression can do much more than this, but this example is enough to get you started. We'll show you more about for in Section 7.3 and Chapter 23.
In this chapter, you learned some Scala basics and, hopefully, took advantage of the opportunity to write a bit of Scala code. In the next chapter, we'll continue this introductory overview and get into more advanced topics.
[1] We tested the examples in this book with Scala version 2.7.2.
[2] If you're using Windows, you'll need to type the scala command into the "Command Prompt" DOS box.
[3] If you're not familiar with Java packages, you can think of them as providing a full name for classes. Because Int is a member of package scala, "Int" is the class's simple name, and "scala.Int" is its full name. The details of packages are explained in Chapter 13.
[4] The simple name of java.lang.String is String.
[5] In the interpreter, however, you can define a new val with a name that was already used before. This mechanism is explained in Section 7.7.
[6] In Java, the type of the value returned from a method is its return type. In Scala, that same concept is called result type.
[7] A function is recursive if it calls itself.
[8] Nevertheless, it is often a good idea to indicate function result types explicitly, even when the compiler doesn't require it. Such type annotations can make the code easier to read, because the reader need not study the function body to figure out the inferred result type.
[9] You can run scripts without typing "scala" on Unix and Windows using a "pound-bang" syntax, which is shown in Appendix A.
[10] You can also put spaces around the plus operator, as in "Hello, " + "world!". In this book, however, we'll leave the space off between `+' and string literals.
[11] This shorthand, called a partially applied function, is described in Section 8.6.
[12] You can say "in" for the <- symbol. You'd read for (arg <- args), therefore, as "for arg in args." | http://www.artima.com/pins1ed/first-steps-in-scalaP.html | CC-MAIN-2015-27 | en | refinedweb |
So im a begineer in java and im trying to write a code to tell me how many days there are in a specific month but febuary is giving 29 no matter if the year is a leap year or not and i cant seem to find the reason I would really apreciate some help as i have been staring and revising this for at least an hour to no avail.Thanks to anyone who helps.
Code :
import java.util.Scanner; import static java.lang.System.out; import static java.lang.System.in; public class Calander { public static void main(String[] args) { Scanner sc = new Scanner(in); int year; int leapyear; int month; boolean leapyearTF = false; out.print("Enter the year: "); year = sc.nextInt(); out.print("Enter the month: "); month = sc.nextInt(); leapyear = year % 4; if(leapyear == 0){ leapyearTF = true; }else leapyearTF = false; leapyearTF = false; switch(month){ case 1: case 3: case 5: case 7: case 8: case 10: case 12: out.println("There are 31 days in this month"); break; case 4: case 6: case 9: case 11: out.println("There are 30 days in this month"); case 2: if(leapyearTF = true){ out.print("There are 29 days in this month"); }else out.print("There are 28 days in this month"); break; } } } | http://www.javaprogrammingforums.com/%20whats-wrong-my-code/29395-code-not-giving-desired-output-printingthethread.html | CC-MAIN-2015-27 | en | refinedweb |
Page 1
MicroReview
The transferrin–iron import system from pathogenic
Neisseria species
Nicholas Noinaj,1Susan K. Buchanan1* and
Cynthia Nau Cornelissen2*
1National Institute of Diabetes and Digestive and Kidney
Diseases, National Institutes of Health, Bethesda, MD
20892, USA.
2Department of Microbiology and Immunology, Virginia
Commonwealth University Medical Center, PO Box
980678, Richmond, VA 23298, USA.
Summary
Two pathogenic species within the genus Neisseria
cause the diseases gonorrhoea and meningitis. While
vaccines are available to protect against four N. men-
ingitidis serogroups, there is currently no commercial
vaccine to protect against serogroup B or against
N. gonorrhoeae. Moreover, the available vaccines
have significant limitations and with antibiotic resist-
ance becoming an alarming issue, the search for
effective vaccine targets to elicit long-lasting protec-
tion respon-
sible.
Pathogenic Neisseria species
While at least 10 Neisseria species are associated with
humans, only N. gonorrhoeae and N. meningitidis are
pathogenic to humans (Marri et al., 2010). N. gonor-
rhoeae causes the sexually transmitted infection gonor-
rhoea. By contrast N. meningitidis is a frequent colonizer
of the human oropharynx, but can also cause invasive
disease manifested as meningitis or septicemia. The
reported incidence of gonorrhoea in USA is over 300 000
cases per year, in contrast to the incidence of invasive
meningococcal disease, which has been decreasing and
is currently below 1000 cases per year (CDC, 2012).
N. meningitidis may be carried asymptomatically by up to
10% of healthy humans, but in rare cases, the pathogen
can disseminate to cause rapidly progressing septicemia
as well as meningitis, both of which are potentially lethal
infections (Stephens et al., 2007). In contrast, gonococcal
infections are rarely life-threatening. Nonetheless, signifi-
cant morbidity is associated with gonococcal infections as
many are asymptomatic, particularly in women, which
facilitates ascension into the upper reproductive tract,
leading to salpingitis, pelvic inflammatory disease, infer-
tility, and ectopic pregnancy (Sparling, 1990). Ascending
gonococcal infections in men are uncommon but can lead
to prostatitis, epididymitis and infertility (Sparling, 1990).
Despite the distinct diseases caused by the pathogenic
Neisseria species, there are very few differences between
the pathogens at the genomic level. The primary virulence
factor employed by N. meningitidis, which is lacking in
N. gonorrhoeae, is the polysaccharide capsule. This
surface structure protects the meningococcus from des-
iccation, enhances serum resistance, and elicits a protec-
tive immune response (for review see Virji, 2009). The
gonococcus, which lacks a polysaccharide capsule, is
exquisitely sensitive to drying, leading to the necessity for
intimate contact for transmission. While occasional dis-
semination to the bloodstream occurs as a consequence
of gonococcal infections, serum resistance is mediated by
Accepted 15 August, 2012. *For correspondence. E-mail skbuchan@
helix.nih.gov; Tel. (+1) 301 594 9222; Fax (+1) 301 480 0597 or
828 9946.
Molecular Microbiology (2012) 86(2), 246–257 ?
doi:10.1111/mmi.12002
First published online 7 September 2012
© 2012 Blackwell Publishing Ltd
Page 2
factors other than encapsulation, including sialylation of
the outer membrane-localized lipooligosaccharide (LOS)
(Gulati et al., 2005). The polysaccharide capsule of
N. meningitidis is a protective antigen; the efficacious
vaccine that protects against meningococcal disease con-
tains capsular material from four of the 13 serogroups of
N. meningitidis. The capsule from serogroup B N. menin-
gitidis is a self-antigen, and thus not a component of the
current vaccine. However, a vaccine against serogroup B
N. meningitidis, employing sub-capsular protein antigens,
is in development (Gossger et al., 2012). In stark contrast,
N. gonorrhoeae lacks a capsule; therefore, this structure
cannot be utilized for vaccine development. Moreover,
many surface antigens, including LOS, the proteinaceous
pilus, and surface-deployed invasins called Opa proteins,
are subject to high-frequency phase and antigenic varia-
tion, making these targets unacceptable vaccine antigens
(Virji, 2009; Zhu et al., 2011). Even with many years of
effort, no successful vaccine has yet been developed to
prevent gonococcal infections.
Treatment of invasive meningococcal disease requires
rapidparenteral administration
N. meningitidis has yet to develop high-level resistance to
this front line, but still effective, antibiotic (Stephens et al.,
2007). In contrast, N. gonorrhoeae has evolved resist-
ance to every antimicrobial agent used to treat these
infections. In 2007, ciprofloxacin was removed from the
list of approved drugs for treatment of gonococcal infec-
tions (CDC, 2007), leaving only extended-spectrum
cephalosporins as the treatment of choice. By 2011,
however, resistance to the last line of defence, ceftriax-
one, had emerged (CDC, 2011). N. gonorrhoeae is now
recognized as a ‘superbug’ with an enormous capacity for
antigenic variation, against which there is no means of
immunoprophylaxis.
A primary focus of current therapeutic design has been
towards vaccine development to protect against infections
by the pathogenic Neisseria species. Given the reported
limitations of the existing vaccines, lack of a gonococcal
vaccine, and the emergence of antibiotic resistant strains,
there is an immediate need for rapid development of
protective vaccines to protect against neisserial infec-
tions. Since Neisseria species cannot survive without iron,
recent studies have targeted the iron import systems,
which tend to be relatively well conserved and are prom-
ising vaccine targets, having the potential to offer broad
protection against both species.
of benzylpenicillin.
Iron import systems in pathogenic Neisseria
Most bacterial pathogens must compete with their hosts
for iron, an essential nutrient for survival. For many patho-
gens, this process involves secretion of low-molecular
weight chelators called siderophores, which sequester
and solublize otherwise inaccessible ferric iron from the
environment within the host (for recent review see Braun
and Hantke, 2011). The ability to secrete siderophores
and subsequently to internalize ferric–siderophore com-
plexes is critical for the virulence of many bacterial patho-
gens (reviewed recently in Saha et al., 2012). In Gram-
negative bacteria, ferric–siderophores are internalized in
a conserved fashion utilizing a family of outer membrane
transporters,which share
similarity, called TonB-dependent transporters (TBDTs)
(Noinaj et al., 2011). The crystal structures of several of
these transporters have been reported (reviewed in
Noinaj et al., 2011), all sharing a TBDT fold characterized
by an N-terminal plug domain of ~ 160 residues (plug
domain) folded inside a C-terminal 22-stranded beta-
barrel domain (beta-domain). The plug domain prevents
entry of noxious substances into the periplasm until the
appropriate ligand is bound; subsequently, the transporter
is energized by TonB and the rest of the Ton system,
which includes ExbB and ExbD (for a recent review, see
Krewulak and Vogel, 2011). Although the precise details
are not known, the plug is proposed to undergo a confor-
mational change that leads to either partial or full ejection
of the plug domain into the periplasm, thereby forming an
entry pathway for the iron cargo directly through the outer
membrane transporter.
The pathogenic Neisseria species are somewhat
unusual in that they do not have the capacity to secrete
siderophores. Despite this, they do express TBDTs of
unknown function (TdfF, TdfG, TdfH and TdfJ; Turner
et al., 2001; Hagen and Cornelissen, 2006; Cornelissen
and Hollander, 2011) in addition to transporters such as
FetA that enable the bacteria to utilize siderophores pro-
duced by neighbouring bacteria (Carson et al., 1999;
Hollander et al., 2011); however, the contribution of these
transporters to neisserial pathogenesis has not been
tested (Fig. 1A). The pathogenic Neisseria species addi-
tionally express surface receptors that mediate direct
extraction and import of iron from the human host iron
binding proteins haemoglobin, lactoferrin and transferrin
(Cornelissen and Hollander, 2011). Haemoglobin is pre-
dominantly sequestered within red blood cells and is a
tetrameric protein with each subunit capable of binding
one molecule of haem. Lactoferrin can be found in secre-
tions, in milk, and in polymorphonuclear leucocytes and
is a glycoprotein composed of two structurally similar
domains (also called lobes), each of which has the capac-
ity to bind a single iron atom. Transferrin can be found
predominantly in serum and on inflamed mucosal sur-
faces and is structurally very similar to lactoferrin, binding
one iron atom per lobe. All strains of N. meningitidis have
the capacity to utilize haemoglobin, lactoferrin and trans-
ferrin (Marri et al., 2010). In contrast, approximately half of
gonococcal isolates have undergone a large deletion in
sequenceand structural
Iron import in pathogenic Neisseria
247
© 2012 Blackwell Publishing Ltd, Molecular Microbiology, 86, 246–257
Page 3
the locus encoding the lactoferrin–iron internalization
system, rendering this system inactive (Biswas et al.,
1999). Further, engineered gonococcal mutants unable to
utilize lactoferrin and transferrin as iron sources were
found to be avirulent in a human male infection model of
gonococcal disease (Cornelissen et al., 1998), attesting
to the importance of these iron transport systems in initi-
ating infection and proliferating in humans.
Unlike the siderophore transport system that contains
only an outer membrane transporter, iron transport
systems for the acquisition of iron from haemoglobin,
lactoferrin and transferrin, are comprised of a unique
system containing two types of surface-exposed recep-
tors having very different properties and roles in the iron
acquisition process (Cornelissen and Hollander, 2011). In
each case, the first receptor is a TBDT that serves as the
pore through which the iron or haem is directly trans-
ported. The second protein is a co-receptor that is lipid-
modified and entirely surface exposed (see Fig. 1B). The
combined activities of these two proteins allow for
species-specific binding of human iron binding proteins to
the neisserial cell surface, followed by iron extraction and
subsequent internalization of the iron cargo.
The neisserial iron import systems that utilize haemo-
globin, lactoferrin and transferrin are believed to share
many properties; however, the lack of structural informa-
tion has hindered efforts to determine the exact mecha-
nism for iron extraction and import. Recent studies
(Moraes et al., 2009; Calmettes et al., 2012; Noinaj et al.,
2012) have significantly advanced our understanding of
the transferrin–iron import system, which gives clues to
how other import systems may function. In this review, we
will examine the functional aspects of the neisserial
transferrin–iron acquisition system within the context of
the newly elucidated structural details of the system.
Conservation of and immunity to the components of the
transferrin–iron acquisition system
The transferrin–iron import system consists of two trans-
ferrin binding proteins: a TBDT (TbpA), and a lipoprotein
co-receptor (TbpB). Both proteins work in concert to bind
transferrin and then extract and import the iron across the
outer membrane. The two proteins are co-ordinately
expressed from a bicistronic operon, with the tbpB gene
preceding the tbpA gene (Ronpirin et al., 2001). The tbpB
transcript is approximately twice as prevalent as the tbpA
transcript (Ronpirin et al., 2001). The promoter that drives
expression of tbpBA operon is iron repressed by the regu-
latory protein, Fur, which transcriptionally silences the
genes in the presence of iron. The sequence of TbpA is
highly conserved among strains, and even between the
two pathogenic species (Cornelissen et al., 2000). Anti-
genic and sequence variability of TbpB proteins is more
extensive (Cornelissen et al., 1997a), but neither protein
is subject to high-frequency phase or antigenic variation,
as is the case with many other neisserial surface anti-
gens. Both transferrin binding proteins are antigenic when
animals are immunized with the purified proteins (Price
et al., 2005; 2007); however, natural gonococcal infec-
tions, which do not elicit protective immunity, also do not
generate high titre anti-Tbp antibodies (Price et al., 2004).
Meningococcal transferrin binding proteins are immuno-
genic in both animals (Rokbi et al., 1997) and humans
(Gorringe et al., 1995), which is consistent with the
hypothesis that gonococci are capable of immune sup-
pression during infection (Liu et al., 2011). These obser-
vations suggest that vaccination with gonococcal Tbps,
with an appropriate adjuvant, might be protective whereas
natural infections are not. Given the sequence conserva-
tion between the species, it is also possible that immuni-
zation with gonococcal Tbps could additionally be
protective against meningococcal infections.
The structure and function of the iron transporter TbpA
The structure–function relationships in the neisserial
transferrin–iron acquisition system have been most
OM
IM
TBDT
ABC
transporter
TonB
complex
TonB
TBDT
lipoprotein
carrier
protein
AB
Fig. 1. Iron import systems in Neisseria species.
A. Single component transporter systems contain only one surface
protein, a TBDT, which mediates iron-loaded siderophore transport
across the outer membrane. Examples include FetA, HmbR, and
TdfF.
B. Two component transporter systems contain both a TBDT, which
mediates iron transport, as well as a lipoprotein co-receptor, which
is anchored to the outer leaflet of the outer membrane (OM) and
participates in capturing iron-containing substrates. Examples
include HpuA/B (haemoglobin), LbpA/B (lactoferrin), and TbpA/B
(transferrin). In both systems, energy for transport is supplied by
the Ton system (TonB, ExbB, and ExbD) and substrates are then
shuttled across the periplasm by periplasmic carrier proteins to an
ATP binding cassette (ABC) transporter to be transported across
the inner membrane (IM) into the cytoplasm.
248
N. Noinaj, S. K. Buchanan and C. N. Cornelissen
?
© 2012 Blackwell Publishing Ltd, Molecular Microbiology, 86, 246–257
Page 4
thoroughly described, with less data available for those
systems utilizing haemoglobin and lactoferrin. Similar
studies on homologous transferrin–iron uptake systems
from porcine pathogens have also significantly contrib-
uted to elucidating the mechanism of transferrin–iron
import (Moraes et al., 2009). Neisserial mutants lacking
TbpA are incapable of iron uptake from transferrin (Cor-
nelissen et al., 1992; Irwin et al., 1993); however, isogenic
mutants lacking TbpB are still able to utilize transferrin as
a source of iron, albeit less efficiently (Anderson et al.,
1994; Renauld-Mongenie et al., 2004b).
Recently, the crystal structures of components of the
meningococcaltransferrin–iron
reported (Calmettes et al., 2012; Noinaj et al., 2012). The
report of the crystal structure of neisserial TbpA in
complex with apo-transferrin (Fig. 2A) represents a sig-
nificant advance in our understanding of TbpA structure/
function relationships (Noinaj et al., 2012). This structure
showed that, as predicted by a number of groups (Boulton
et al., 2000; Oakhill et al., 2005), TbpA was indeed a
TBDT, the largest with a defined structure, and had many
unique features including very long extracellular loops
uptakesystemwere
hTF C-lobe
hTF N-lobe
TbpA
L3 helix
finger
plug loop
E
EY
Fe
L2HA
L5HA
no effect
pLHA
L10HA
no effect
D722
L7HA
no effect
L9HA
periL2/L4HA
no effect
L11HA
L3HA
no hTf binding
EIEYE motif
iron utilization
βL9HA
no hTf binding
βL16HA
binds hTf, reduced
iron binding capacity
ΔL8
ΔL5/ΔL4
no hTf binding
reduced hTf binding,
can’t grow on hTf
reduced hTf binding,
can’t grow on hTf
binds hTf, requires
functional TbpB for
transport
binds hTf, requires
functional TbpB for
transport
binds hTf, requires
functional TbpB for
transport
periplasmic loop 8
putative docking site for FbpA
E
I
D251
K467
S253
AB
C
Fig. 2. Molecular details of the interactions between neisserial TbpA and human transferrin.
A. The complex crystal structure of neisserial TbpA and human transferrin (apo form) (PDB code 3V8X) is shown in ribbon representation with
the beta-domain of TbpA in green, the plug domain in red, the helix finger of loop 3 (L3) in purple, and transferrin in gold (C-lobe) and light
blue (N-lobe). The location of iron (red sphere) was modelled based on the diferric transferrin crystal structure (PDB code 3V83) and the
putative docking site for FbpA along disordered periplasmic loop 8 (dashed green line) is indicated.
B. 2D-topology diagram of TbpA highlighting selected mutations, regions of sequence conservation and sequence diversity (adapted from
Boulton et al., 2000). Plug domain residues are highlighted in yellow, beta-domain residues are in green, L3 helix finger residues are in cyan,
residues that affect hTF binding when mutated are in orange, sites of HA tag insertions are in blue, and the iron binding motif EIEYE is shown
as red squares. Deleted loops are shown in dark purple (loop 4 and loop 8) and in light blue (loop 5). The solid red bars indicate the start and
end-points for the loop 4 + 5 construct which retained hTF binding when individually expressed and purified, while the dashed red bar
indicates the start point for the loop 5 only construct. Boldface circled residues are aromatics and boldface squared residues are those that
are conserved among all TbpA proteins (even in other species).
C. The structure of TbpA (green ribbon) depicting the locations of HA-insertions (blue spheres), and deletions (purple and light blue) and the
resulting effect on transferrin binding and iron import. The locations of mutations that affected transferrin binding are indicated by orange
spheres and the putative iron binding motif EIEYE is shown by red spheres.
Iron import in pathogenic Neisseria
249
© 2012 Blackwell Publishing Ltd, Molecular Microbiology, 86, 246–257
Page 5
that interact with transferrin, a helix finger at the apex of
loop 3 that appears to be involved in catalysing iron
release, and an unusually long plug domain loop that may
act as a sensor for ligand binding. TbpA was shown to
bind transferrin at the very top of the beta-domain and
exclusively along the C-lobe of transferrin, producing an
extensive binding surface involving 81 residues of TbpA
(~ 2500 Å2of buried surface area). This surface is largely
electropositive, which complements the electronegative
surface of transferrin. Despite TbpA having ligand bound,
there were no obvious conformational changes within
the plug domain compared with other TBDTs. The struc-
ture did, however, provide the precise locations and
sequences of the long extracellular loops which could be
important antigens for vaccine development.
In 2000, before the crystal structure of TbpA was
reported, a hypothetical, yet remarkably accurate 2D
topology model of gonococcal TbpAwas generated based
upon similarity with other TBDTs of known structure
(Boulton et al., 2000). This 2D model (Fig. 2B, updated to
reflect the true structure of TbpA) was used to character-
ize the roles of the putative surface-exposed loops, trans-
membrane b-strands, and the N-terminal plug domain
(Fig. 2C). To determine the function of the putative
surface-exposed loops, loops 4 + 5 and loop 5 alone were
deleted leading to a loss of transferrin binding and iron
uptake (Boulton et al., 2000). Deletion of loop 8 of TbpA
resulted in reduced transferrin binding but no iron inter-
nalization from transferrin, consistent with this region
serving a necessary docking region for human transferrin
to enable iron extraction (Boulton et al., 2000). Further,
individual loops of TbpAwere expressed alone and tested
to see whether any retained ligand binding function (Masri
and Cornelissen, 2002). Surprisingly, loop 5 and the com-
bination of loops 4 + 5 retained the ability to bind trans-
ferrin, despite the rest of the beta-barrel domain being
absent. These observations are consistent with the crystal
structure of TbpA in complex with transferrin, which dem-
onstrates that loops 4 and 5 are in direct contact with
transferrin.
Using the 2D topological model of TbpA, surface expo-
sure was accurately determined for loops 2, 3, 5, 7 and
10, and even for the extended plug domain loop (Yost-
Daljev and Cornelissen, 2004). The function of the
HA-epitope insertion mutants were further investigated by
measuring transferrin binding and iron uptake. Surpris-
ingly, most of the insertion mutants retained function
(Yost-Daljev and Cornelissen, 2004), indicating that the
insertions did not significantly affect the overall structure
of TbpA. Exceptions were insertions into loop 3 and into
beta-strand 9, which resulted in the loss of transferrin
binding and iron import. Insertions into the plug domain
and into b-strand 16 resulted in decreased iron uptake
capacity, consistent with these regions playing important
roles in iron internalization. Interestingly, insertions into
loops 2, 9 and 11 resulted in a loss of iron uptake that
could be rescued by coexpression of TbpB, indicating that
these regions are required for some aspect of iron inter-
nalization that can be duplicated by TbpB (Yost-Daljev
and Cornelissen, 2004). A summary of these studies is
depicted in Fig. 2C.
Antibodies were designed to target specific surface-
exposed loops of TbpAto determine if they could block the
interactions with human transferrin (Masri and Cornelis-
sen, 2002). Early attempts using the 2D model for TbpAto
target gonococcal loops 2, 5 and 4 + 5 for their ability to
produce antibodies that could interfere with ligand binding
were unsuccessful (Masri, 2003). However, with the
benefit of the TbpA–transferrin complex structure, it was
demonstrated that antibodies against meningococcal
loops 3, 7 and 11 and the long extended plug loop (Fig. 2)
were able to block interactions with transferrin using in
vitro ELISA assays (Noinaj et al., 2012). However, more
studies are needed to determine the usefulness of the
TbpA structure for designing vaccines to protect against
neisserial infections.
The structure and function of the co-receptor TbpB
Unlike TbpA, which is required to mediate the transport of
iron across the outer membrane, TbpB is not required;
however, it does significantly increase the efficiency of the
import system in a number of ways. First, while TbpA
binds both apo- and holo-transferrin with similar affinity,
TbpB preferentially binds holo-transferrin (Cornelissen
and Sparling, 1996; Retzer et al., 1998), thereby saturat-
ing iron-loaded substrate on the neisserial cell surface. It
was found that an engineered gonococcal mutant lacking
TbpB internalizes approximately half of the wild-type
amount of iron from transferrin (Anderson et al., 1994).
Second, while the exact mechanism remains unknown, it
has been proposed that TbpB may participate in the
extraction of iron from transferrin (Siburt et al., 2009),
either by direct removal or in concert with TbpA. Third, the
presence of TbpB on the cell surface facilitates release of
transferrin (DeRocco et al., 2008), presumably after iron
has been extracted and transported. As for association
with the neisserial cell surface, enhanced dissociation
from the cell when TbpB is expressed is likely the result of
TbpB’s strict specificity for holo-transferrin (DeRocco
et al., 2008).
A number of studies have used various techniques to
probe the interactions of TbpB with transferrin and have
found that the N-lobe of TbpB is the primary domain
responsible for the interaction (Cornelissen et al., 1997a;
Sims and Schryvers, 2003; Renauld-Mongenie et al.,
2004a; Moraes et al., 2009; Calmettes et al., 2011). The
first glimpse of the fold in this co-receptor was reported in
250
N. Noinaj, S. K. Buchanan and C. N. Cornelissen
?
© 2012 Blackwell Publishing Ltd, Molecular Microbiology, 86, 246–257 | http://www.researchgate.net/publication/230835196_The_transferrin-iron_import_system_from_pathogenic_Neisseria_species | CC-MAIN-2015-27 | en | refinedweb |
Permission.Add method
Creates a new set of permissions on the current form for the specified user with the specified permissions and an expiration date.
Namespace: Microsoft.Office.Interop.InfoPath.SemiTrustNamespace: Microsoft.Office.Interop.InfoPath.SemiTrust
Assembly: Microsoft.Office.Interop.InfoPath.SemiTrust (in Microsoft.Office.Interop.InfoPath.SemiTrust.dll)
Parameters
- bstrUserId
- Type: System.String
The e-mail address in the format user@domain.com of the user to whom permissions on the current form are being granted. Required.
- varPermission
- Type: System.Object
The permissions on the current form that are being granted to the specified user as a combination of one or more MsoPermission values. Optional.
- varExpirationDate
- Type: System.Object
The expiration date for the permissions that are being granted as a System.DateTime value. Optional.
Return valueType: Microsoft.Office.Interop.InfoPath.SemiTrust.UserPermissionObject
A UserPermissionObject that represents the specified user.
To access the MsoPermission enumeration values for setting the varPermission parameter, you must set a reference to the Microsoft Office 14.0 Object Library using COM tab of the Add Reference dialog box in Visual Studio 2012 or Visual Studio. This will establish a reference to the members of the Microsoft.Office.Core namespace.
Because the Permission object and its members are new to Microsoft InfoPath, you must cast the object returned by the thisXDocument variable to the _XDocument3 type to access this object and its members. For more information, see How to: Use Microsoft.Office.Interop.InfoPath.SemiTrust Members That Are Not Compatible with InfoPath 2003.
This member can be accessed only by forms opened from a form template that has been configured to run with full trust using the Security and Trust category of the Form Options dialog box. This member requires full trust for the immediate caller and cannot be used by partially trusted code. For more information, see "Using Libraries from Partially Trusted Code" on MSDN.
In the following example, the Add method is used to add a new user to the form, assign that user to the Full Control access level, and set an expiration date of two days from the current date.
This example requires a using or Imports directive for the Microsoft.Office.Core namespace in the declarations section of the form module. | https://msdn.microsoft.com/en-us/library/microsoft.office.interop.infopath.semitrust.permission.add.aspx | CC-MAIN-2015-27 | en | refinedweb |
Search Type: Posts; User: vanderbill
Search: Search took 0.01 seconds.
- 27 Oct 2011 7:04 AM
Jump to post Thread: Menu Event using EXT MVC by vanderbill
- Replies
- 1
- Views
- 1,410
Good afternoon.
I have a Viewport with accordion menu, How do I get the click event of each menu item?
My Controller
Ext.define('aplicacao.controller.Usuarios', {
extend :...
- 11 Feb 2009 3:46 PM
- Replies
- 3
- Views
- 2,570
tks for answer, but its not a checkbox, is a CheckBoxSelectionModel in a grid...:)
- 11 Feb 2009 11:52 AM
- Replies
- 3
- Views
- 2,570
Public Events
Event Defined By beforerowselect : (...
- 11 Feb 2009 10:53 AM
- Replies
- 3
- Views
- 2,570
wich event o need implment to know???
tks :D:D
- 11 Feb 2009 10:51 AM
sorry i haved tested, im wrong here.. i have implemented in a wrong event! sry :">:">
- 11 Feb 2009 4:33 AM
ok, my list is = List<EmpresaData> mylist;
but, if i use
public void loaderLoad(LoadEvent le) {
getGridEmpresa().getSelectionModel().select(
- 11 Feb 2009 3:24 AM
this is a bug???
because i used the same code but iwth select(index) method
getGridEmpresa().getSelectionModel().select(1);
and work, but i need select my list!!!!
ty for all guys!:D
- 11 Feb 2009 2:23 AM
Hello guys, im tryng select my list, when a load my page(pagingtoolbar)
my grid
private Grid<EmpresaData> getGridEmpresa() {
if (gridEmpresa == null) {
gridEmpresa = new...
- 24 Jan 2009 10:47 AM
- Replies
- 2
- Views
- 1,400
hello guys, im trying implements basic login but....
the responseText is returning all code look.
/**
* @author Vander
*/
Ext.onReady(function(){
- 19 Jan 2009 9:03 AM
- Replies
- 3
- Views
- 1,750
ok i will look on net for gzipping, but have any tutorial how i can active this..
ty for answer :D
- 19 Jan 2009 8:57 AM
- Replies
- 3
- Views
- 1,750
Hello guys, i have an aplication on gxt, then i wll acess first time later deployed in tom cat it load a lot, between 15 and 30 seconds, any have the same issue???
tks guys sorry my bad english!
- 16 Jan 2009 9:20 AM
- Replies
- 7
- Views
- 2,779
look the code in examples, there are server side code paging implementation
:D:D:D:D:D
- 14 Jan 2009 4:36 PM
- Replies
- 41
- Views
- 97,830
I have same problem can any1 help???
- 12 Jan 2009 7:21 AM
- Replies
- 0
- Views
- 956
Hello guys im trying migrating, but not all project. I have any problems with datefield...
The trigger(Data picker i think ) dont show!!
[CODE]public...
- 8 Jan 2009 6:51 AM
- Replies
- 4
- Views
- 2,108
i get it :D:D:D:D:D
private void refreshChk(final List<ModelData> list) {
if (!getGrid().isRendered())
return;
if (list == null) {
if...
- 8 Jan 2009 6:49 AM
- Replies
- 3
- Views
- 3,157
Hello :))
I checkboxSelectionModel how i can take select/deselect checkbox???
sm = new CheckBoxSelectionModel<RamoData>();
sm
...
- 8 Jan 2009 3:40 AM
- Replies
- 4
- Views
- 2,108
Hello guys.
I making a test here and to select rows onload...but nothing happens.
public BasePagingLoader getLoader() {
if (loader == null) {
loader = new...
- 5 Jan 2009 2:52 AM
sorry i dont understand, can gimme a example???
tks for help... but i think dont have a solution for Ext gwt yet...but tks for all!!! im looking yet!!:D:D:D
- 30 Dec 2008 10:51 AM
Im trying override getEditor, but dont work :((:((
ColumnConfig colResposta = new ColumnConfig("resposta", "Resposta",
150) {
@Override
...
- 30 Dec 2008 5:26 AM
Hello guys.
How i can have more than one widget in the same column in a editorGrid???
example:
row Column A
1 TextField
2 ComboBox(options 1, 2)
3 CheckBox
4 ...
- 22 Dec 2008 4:14 AM
- Replies
- 3
- Views
- 1,143
Ty so much its Work =D
- 22 Dec 2008 3:52 AM
- Replies
- 3
- Views
- 1,143
Hello guys.
I have a query where have 12.000 rows, but i iwant paging it. Im tryng it
In Client side:
My model:
public class ConhecimentoModel extends BaseModelData {
private String...
- 17 Sep 2008 3:19 AM
hello guys.....the problem happen only in linux....in Windows its works :D:D:D:D
ty so much for the great work....cya!!!!
- 12 Sep 2008 8:52 AM
hello this key dont work too
- 12 Sep 2008 5:48 AM
when i hold delete or backspace the mask disconfigure...i need making any stuff???
the key '~' is not validate i think
the key '
Results 1 to 25 of 30 | https://www.sencha.com/forum/search.php?s=2860eb8fe30a05960da378d48eb10043&searchid=11964109 | CC-MAIN-2015-27 | en | refinedweb |
Issues
ZF-12173: Zend_Form and Zend_Form_Element prefix paths are not prefix agnostic (namespaces)
Description
I've migrated my personal library to namespaces and I've detected that the namespaced prefix paths are not properly handled by the plugin loader because both Zend_Form and Zend_Form_Element add "_$type" as suffix to the plugin path when they should detect if the namespace separator is _ or \
Posted by Antonio J García Lagar (ajgarlag) on 2012-04-26T07:50:46.000+0000
I've submitted two diff files, one for tests and one for the fix itself. Note that this issue depends on ZF-11330, so the fix for ZF-11330 should be applied in order to make this one work.
Posted by Frank Brückner (frosch) on 2012-04-26T08:18:51.000+0000
Hi Antonio, your patch does not include:
Posted by Antonio J García Lagar (ajgarlag) on 2012-04-26T08:51:48.000+0000
I've fixed the Zend_Form_Element_File and Zend_Form_Element_Captcha addPrefixPath methods too.
Posted by Rob Allen (rob) on 2012-05-31T19:29:06.000+0000
Fixed in SVN r24848. | http://framework.zend.com/issues/browse/ZF-12173?focusedCommentId=50374&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel | CC-MAIN-2015-27 | en | refinedweb |
This forum is now read-only. Please use our new forums at discuss.codecademy.com.
cant view xml file
I cannot view the xml output file on this exercise. Anyone else having this problem? Using chrome on windows 7 pro x64
here is my code, is something wrong? The exercise passed me and i dont see anything wrong myself.
from urllib2 import urlopen from urllib import quote key = "API_KEY" url = '' url += key url += '&requiredAssets=audio' url += '&format=Podcast' url += '&numResults=3' url += '&action=Or' npr_id= raw_input("Enter comma-separated NPR IDs or leave blank.") search_string = raw_input("Enter your search string or leave blank.") feed_title = raw_input("What's your feed title?") if npr_id or search_string == True: raw_input("Hit Enter to download your podcast.") if npr_id: url += '&id=' + npr_id if search_string: url += '&searchTerm=' + quote(search_string) if feed_title: url += '&title=' + feed_title response = urlopen(url) output = response.read() my_feed = open('my_feed.xml', 'w') output = my_feed.write(output) my_feed.close() else: print "You must enter an NPR ID, a search term, or both."
if you go to npr's website and just register for an API key you can run these outside of code academy to see the files. | https://www.codecademy.com/en/forum_questions/518e8d8daa6908f29e00137f | CC-MAIN-2018-30 | en | refinedweb |
The Routing Information Protocol (RIP) is a classic distance vector Interior Gateway Protocol (IGP) designed to exchange information within an autonomous system (AS) of a small network.
This module describes the concepts and tasks to implement basic RIP routing. Cisco IOS XR software supports a standard implementation of RIP Version 2 (RIPv2) that supports backward compatibility with RIP Version 1 (RIPv1) as specified by RFC 2453.
For RIP.
RIP Version 1 (RIP v1) is a classful, distance-vector protocol that is considered the easiest routing protocol to implement. Unlike OSPF, RIP broadcasts User Datagram Protocol (UDP) data packets to exchange routing information in internetworks that are flat rather than hierarchical. Network complexity and network management time is reduced. However, as a classful routing protocol, RIP v1 allows only contiguous blocks of hosts, subnets or networks to be represented by a single route, severely limiting its usefulness.
RIP v2 allows more information carried in RIP update packets, such as support for:.
Routing information updates are advertised every 30 seconds by default, and new updates discovered from neighbor routers are stored in a routing table.
Only RIP Version 2 (RIP v2), as specified in RFC 2453, is supported on Cisco IOS XR software and, by default, the software only sends and receives RIP v2 packets. However, you can configure the software to send, or receive, or both, only Version 1 packets or only Version 2 packets or both version type packets per interface.
Here are some good reasons to use RIP:
Because of RIP’s ease of use, it is implemented in networks worldwide.
Normally, routers that are connected to broadcast-type IP networks and that use distance-vector routing protocols employ the split horizon mechanism to reduce the possibility of routing loops. Split horizon blocks information about routes from being advertised by a router out of any interface from which that information originated. This behavior usually optimizes communications among multiple routers, particularly when links are broken.
If an interface is configured with secondary IP addresses and split horizon is enabled, updates might not be sourced by every secondary address. One routing update is sourced per network number unless split horizon is disabled.
RIP uses several timers that determine such variables as the frequency of routing updates, the length of time before a route becomes invalid, and other parameters. You can adjust these timers to tune routing protocol performance to better suit your internetwork needs, by making the following timer adjustments to:
The first four timer adjustments are configurable by the timers basic command. The output-delay command changes the amount of time delay between RIP update packets. See Customizing RIP for configuration details.
It also is possible to tune the IP routing support in the software to enable faster convergence of the various IP routing algorithms and quickly drop back to redundant routers, if necessary. The total result is to minimize disruptions to end users of the network in situations in which quick recovery is essential.
Redistribution is a feature that allows different routing domains, to exchange routing information. Networking devices that route between different routing domains are called boundary routers, and it is these devices that inject the routes from one routing protocol into another. Routers within a routing domain only have knowledge of routes internal to the domain unless route redistribution is implemented on the boundary routers.
When running RIP in your routing domain, you might find it necessary to use multiple routing protocols within your internetwork and redistribute routes between them. Some common reasons are:
Further, route redistribution gives a company the ability to run different routing protocols in work groups or areas in which each is particularly effective. By not restricting customers to using only a single routing protocol, Cisco IOS XR route redistribution is a powerful feature that minimizes cost, while maximizing technical advantage through diversity.
When it comes to implementing route redistribution in your internetwork, it can be very simple or very complex. An example of a simple one-way redistribution is to log into a router on which RIP is enabled and use the redistribute static command to advertise only the static connections to the backbone network to pass through the RIP network. For complex cases in which you must consider routing loops, incompatible routing information, and inconsistent convergence time, you must determine why these problems occur by examining how Cisco routers select the best path when more than one routing protocol is running administrative cost.
Administrative distance is used as a measure of the trustworthiness of the source of the IP routing information. When a dynamic routing protocol such as RIP is configured, and you want to use the redistribution feature to exchange routing information, it is important to know the default administrative distances for other route sources so that you can set the appropriate distance weight.
An administrative distance is an integer from 0 to 255. In general, the higher the value, the lower the trust rating. An administrative distance of 255 means the routing information source cannot be trusted at all and should be ignored. Administrative distance values are subjective; there is no quantitative method for choosing them..
This section contains instructions for the following tasks:
This task enables RIP routing and establishes a RIP routing process.
Although you can configure RIP before you configure an IP address, no RIP routing occurs until at least one IP address is configured.
1. configure
2. router rip
3. neighbor ip-address
4. broadcast-for-v2
5. interface type interface-path-id
6. receive version { 1 | 2 | 1 2 }
7. send version { 1 | 2 | 1 2 }
8. Do one of the following:
This task describes how to customize RIP for network timing and the acceptance of route entries.
1. configure
2. router rip
3. auto-summary
4. timers basic update invalid holddown flush
5. output-delay delay
6. nsf
7. interface type interface-path-id
8. metric-zero-accept
9. split-horizon disable
10. poison-reverse
11. Do one of the following:
This task describes how to control or prevent routing update exchange and propagation.
Some reasons to control or prevent routing updates are:
1. configure
2. router rip
3. neighbor ip-address
4. interface type interface-path-id
5. passive-interface
6. exit
7. interface type interface-path-id
8. route-policy { in | out }
9. Do one of the following:
This task defines a route policy and shows how to attach it to an instance of a RIP process. Route policies can be used to:
A route policy definition consists of the route-policy command and name argument followed by a sequence of optional policy statements, and then closes with the end-policy command.
A route policy is not useful until it is applied to routes of a routing protocol.
1. configure
2. route-policy name
3. set rip-metric number
4. end-policy
5. Do one of the following:
6. configure
7. router rip
8. route-policy route-policy-name { in | out }
9. Do one of the following:
This section provides the following configuration examples:
The following example shows two Gigabit Ethernet interfaces configured with RIP.
interface GigabitEthernet0/6/0/0 ipv4 address 172.16.0.1 255.255.255.0 ! interface GigabitEthernet0/6/0/2 ipv4 address 172.16.2.12 255.255.255.0 ! router rip interface GigabitEthernet0/6/0/0 ! interface GigabitEthernet0/6/0/2 ! !
The following example shows how to configure basic RIP on the PE with two VPN routing and forwarding (VRF) instances.
router rip interface GigabitEthernet0/6/0/0 ! vrf vpn0 interface GigabitEthernet0/6/0/2 ! ! vrf vpn1 interface GigabitEthernet0/6/0/3 ! ! !
The following example shows how to adjust RIP timers for each VPN routing and forwarding (VRF) instance.
For VRF instance vpn0, the timers basic command sets updates to be broadcast every 10 seconds. If a router is not heard from in 30 seconds, the route is declared unusable. Further information is suppressed for an additional 30 seconds. At the end of the flush period (45 seconds), the route is flushed from the routing table.
For VRF instance vpn1, timers are adjusted differently: 20, 60, 60, and 70 seconds.
The output-delay command changes the interpacket delay for RIP updates to 10 milliseconds on vpn1. The default is that interpacket delay is turned off.
router rip interface GigabitEthernet0/6/0/0 ! vrf vpn0 interface GigabitEthernet0/6/0/2 ! timers basic 10 30 30 45 ! vrf vpn1 interface GigabitEthernet0/6/0/3 ! timers basic 20 60 60 70 output-delay 10 ! !
The following example shows how to redistribute Border Gateway Protocol (BGP) and static routes into RIP.
The RIP metric used for redistributed routes is determined by the route policy. If a route policy is not configured or the route policy does not set RIP metric, the metric is determined based on the redistributed protocol. For VPNv4 routes redistributed by BGP, the RIP metric set at the remote PE router is used, if valid.
In all other cases (BGP, IS-IS, OSPF, EIGRP, connected, static), the metric set by the default-metric command is used. If a valid metric cannot be determined, then redistribution does not happen.
route-policy ripred set rip-metric 5 end-policy ! router rip vrf vpn0 interface GigabitEthernet0/6/0/2 ! redistribute connected default-metric 3 ! vrf vpn1 interface GigabitEthernet0/6/0/3 ! redistribute bgp 100 route-policy ripred redistribute static default-metric 3 ! !
The following example shows how to configure inbound and outbound route policies that are used to control which route updates are received by a RIP interface or sent out from a RIP interface.
prefix-set pf1 10.1.0.0/24 end-set ! prefix-set pf2 150.10.1.0/24 end-set ! route-policy policy_in if destination in pf1 then pass endif end-policy ! route-policy pass-all pass end-policy ! route-policy infil if destination in pf2 then add rip-metric 2 pass endif end-policy ! router rip interface GigabitEthernet0/6/0/0 route-policy policy_in in ! interface GigabitEthernet0/6/0/2 ! route-policy infil in route-policy pass-all out
The following example shows how to configure passive interfaces and explicit neighbors. When an interface is passive, it only accepts routing updates. In other words, no updates are sent out of an interface except to neighbors configured explicitly.
router rip interface GigabitEthernet0/6/0/0 passive-interface ! interface GigabitEthernet0/6/0/2 ! neighbor 172.17.0.1 neighbor 172.18.0.5 !
The following example shows how to use the distance command to install RIP routes in the Routing Information Base (RIB). The maximum-paths command controls the number of maximum paths allowed per RIP route.
router rip interface GigabitEthernet0/6/0/0 route-policy polin in ! distance 110 maximum-paths 8 !
The following sections provide references related to implementing RIP. | http://www.cisco.com/c/en/us/td/docs/routers/xr12000/software/xr12k_r3-9/routing/configuration/guide/b_xr12krc39/b_xr12krc39_chapter_0110.html | CC-MAIN-2016-18 | en | refinedweb |
in reply to Re^2: What is the best way to compare variables so that different types are non-equal?in thread What is the best way to compare variables so that different types are non-equal?
My question about overriding the type-blind behavior of eq (#2) was primarily a question about Perl idiom. Before I settled on a solution, I wanted a better understanding of the Perl syntax for handling various definitions of equality.
I was also looking ahead to future coding scenarios. Part of good design is anticipating the environment around the design. Part of good testing is understanding exactly what one's test for equality is doing. Once I saw my mistake I was worried about what other magic and 'action at a distance' effects I need to consider when writing tests and developing algorithms that involve testing for equality.
So wouldn't it be natural to construct a class "Node", with an overloaded "eq" operator?
The code I'm testing is pretty well factored so the actual fix involves exactly two comparisons within a single subroutine. There isn't really a need for a global solution that will be "carried" with a "node object". Also the "node-i-ness" comes from the fact that the datum is part of larger structure, e.g. an array or a hash. It doesn't need an object wrapper to get that trait.
If there is no ready-made Perl idiom I will probably have my subroutine call the subroutine below for its two comparisons. The subroutine mentioned above needs a definition of equality that duplicates unoverloaded eq, except for the added constraint that like must be compared to like:
sub my_eq {
# make sure we are comparing like to like
my $xRef = ref($_[0]);
return '' unless ($xRef eq ref($_[1]));
# compare pure scalars and regex's using 'eq'
# compare reference addresses for the rest
return ($xRef and ($xRef ne 'Regexp'))
? (Scalar::Util::refaddr($_[0])
== Scalar::Util::refaddr($_[1]))
: ($_[0] eq $_[1]);
}
[download]
Best,. | http://www.perlmonks.org/?node_id=781494 | CC-MAIN-2016-18 | en | refinedweb |
Ajax CRUD with Struts2 and Tibco GI
By Brian Walsh
01 Apr 2007 | TheServerSide.com
Summary
’ll.
What follows is a step by step approach to developing the General Interface application, Struts2 components and configuring Struts to support the resulting Ajax view.
You will see how the JSP view is bypassed to expose an XML service and the relative ease with which you can use GI to create an Ajax RIA that communicates with Struts to provide a feature rich, high productivity, graphical user interface.
Getting Started
We won’t go into incredible hand holding detail on installing prerequisite products here. In order to try this live, you will need to be familiar with Java, Tomcat etc. If not, no worries, what follows is self contained
If you’re not already using Struts2, you will first need to download and configure Struts2 (see Resources section immediately below). Ensure that you can bring up the “Struts Showcase” sample applications. The CRUD case is one of these.
You will also need the latest GI distribution. If you are not familiar with GI, download it under the open source BSD license at the TIBCO Developer Network and run through the video tutorials provided.Resources
- Struts2 information
- Struts2 CRUD example:
- General Interface download and info @ TIBCO Developer Network
- Incremental source code for the solution in this article:
-
Struts 2
If you have worked with classic Struts, you will notice many changes. In fact, Struts2 looks like the “after” images from reality makeover shows—If Struts has been your significant other, you’ll find it looks a lot better after the makeover. The old work horse now has all the bells and whistles, so many that the latest features may be overwhelming at first. The two new features key to our use case are the Value Stack and Interceptors. The Value Stack comes from Strut2’s integration of XWork and OGNL. All values from the client are contained in the value stack and are accessible to the action and results via the expression language (EL). Our particular strategy to integrate GI and Struts will implement a custom interceptor to read values from the CDF document and populate the value stack. We will leverage these features to re-implement the struts showcase CRUD example using GI.
CRUD
The CRUD pattern (Create, Retrieve, Update and Delete) lies at the heart of many business applications. No matter how you slice it, your app will need to create new entities, validate, store and manage them. The smart folks behind Struts2 knew this and provided a CRUD application within the Struts Showcase examples. While their motivation was to illustrate proper layering for data access and dependency injection, we will extend this sample to show how an Ajax RIA and XML service interfaces can be added into the existing solution.
Take a second to familiarize yourself by using the sample CRUD application that came with your Sturs2 installation. The default location on your local system is and learn more about this app at.
Now look at the configuration file “struts.xml” at line 68 and 90. You are looking for the two packages named “skills” and “employees”.
Here we can see the urls, controller classes and views are related to each other.
Let’s translate the following excerpt:
<action name="list" class="org.apache.struts2.showcase.action.SkillAction" method="list"> <result>/empmanager/listSkills.jsp</result> <interceptor-ref </action>
The controller for the list action is the list method in the SkillAction class. Before the controller method is executed, the client’s request is passed into an interceptor chain defined by “basicStack” Any result will be sent to the listSkills.jsp for rendering.
Now, the meat of this application lies in the SkillAction class’s list method. Lots of good work happens in here: data access, mapping into POJOs, perhaps some business rules. The controller acts as a conductor, orchestrating the work or Model and Service classes that ideally should do the heavy lifting. This is the area of the code we want to leave untouched. We will use these controller classes in our Ajax application by working with interceptors and results.
Value Stack and Interceptors
Interceptors are key to understanding Struts2. Simply stated each request coming from the client triggers a defined set of interceptors that each independently interrogates the request either before or after the controller Action class executes. Struts2 comes with many pre-built interceptors including file upload, chaining, xslt, logging, exception handling etc. One of the interceptors, ParametersInterceptor, sets all parameters on the value stack.
The Value Stack is at the heart of Struts2. By leveraging the ognl value stack, the framework provides sophisticated, automatic type conversion. The Action Forms paradigm and laborious transfer of data from strings to domain objects so common to Struts1 application is effectively deprecated.
We will take advantage of the Interceptor and Value Stack later on when we introduce our own GI friendly interceptor.
It is worth pointing out that GI can handle data streams in a number of message formats including XML, SOAP, JSON, etc. GI includes powerful visual design time tools and a runtime environment for mapping messages to instances of its data and GUI objects.CDF – the Common Data Format of GI’s GUI controls
By GI friendly, we mean that we will deliver the data to the Ajax client in the xml format used by the Ajax framework.
The Common Data Format (CDF) format is a schema that defines the client side data used by GI’s components. All data aware GI elements use the CDF format. Briefly XML adhering to the CDF schema is in the form:
<data jsxid="X" > <record jsxid="A" .... /> <record jsxid="B" .... /> <record jsxid="C" .... /> ... </data>
In addition <record> nodes can be nested to represent hierarchical relationships. Therefore a <record> can represent a table row, a node in a tree, a menu item, and so on.
In our work here, we’ll create an XML service in Struts2 that returns CDF. In cases where a service returns something other than CDF, GI’s XML mapping utilities provide visual tools for client-side transformations to and from non-CDF formats, to and from CDF. In this way you can also use GI with existing XML, SOAP or other HTTP services.Overview of the extended server side processes
Figure 2 shows the “CDF aware” Interceptor and Result types we’ll add to the system. The addition of these two elements enables us to bypass the view Template and instead expose an XML service that conforms to the CDF schema used by GI.
Figure 2: Struts2 application modifications to expose XML service conforming to GI’s CDF schema.Create a CDF Interceptor type
We create a CDFInterceptor, a ServletRequestAware interceptor that captures the POX (plain old xml) message in CDF format and inserts the values into the ActionContext. This interceptor is responsible for mapping the attribute names on the CDF record to the parameter names used by the action.
<!--definition of the interceptor --> <interceptor name="fromCDF" class="org.apache.struts2.showcase.gi.CDFInterceptor"/> <!--use of the interceptor in a stack --> <interceptor-stack <interceptor-ref <interceptor-ref <interceptor-ref <interceptor-ref </interceptor-stack> <!--reference of the stack by an action, specifying parameter request name mappings --> <action name="delete" class="org.apache.struts2.showcase.action.EmployeeAction" method="delete"> <interceptor-ref <param name="fromCDF.alternateNames">#{ "empId" : "toDelete" }</param> </interceptor-ref> </action>Create a CDF Result type
We create a CDFResult, a POJO that implements the Result interface. It replaces the JSP and renders element(s) from the invocation stack as CDF compliant XML. It also sets HTTP status codes and renders validation errors.
<result-types> <result-type </result-types> ... <action name="save" class="org.apache.struts2.showcase.action.SkillAction" method="save"> <result name="success" type="cdf" > <param name="inputName">currentSkill </result> <interceptor-ref </action>Final Struts2 Configuration
We’ve included the GI version of the skill package below. See the struts.xml for the complete version. You can see how the original actions are wrapped in CDF specific interceptors and result.
<package name="skillGI" extends="default" namespace="/skillGI"> <action name="list" class="org.apache.struts2.showcase.action.SkillAction" method="list"> <result type="cdf" > <param name="inputName">availableItems <param name="propertyNameJSXId">id <param name="propertyNameJSXText">description </result> <interceptor-ref </action> <action name="save" class="org.apache.struts2.showcase.action.SkillAction" method="save"> <result name="input" type="cdf" > <param name="inputName">currentSkill <param name="propertyNameJSXId">id <param name="propertyNameJSXText">description </result> <result name="success" type="cdf" > <param name="inputName">currentSkill <param name="propertyNameJSXId">id <param name="propertyNameJSXText">description </result> <interceptor-ref </action> <action name="delete" class="org.apache.struts2.showcase.action.SkillAction" method="delete"> <result name="success" type="cdf" > <param name="inputName">currentSkill <param name="propertyNameJSXId">id <param name="propertyNameJSXText">description </result> <result name="input" type="cdf" > <param name="inputName">currentSkill <param name="propertyNameJSXId">id <param name="propertyNameJSXText">description </result> <interceptor-ref </action> </package>
General Interface
Ajax requirements
Well we are getting a little ahead of ourselves here. Let’s take a step back and look at the use case requirements and elements in the client-side environment that will provide the Ajax GUI.
Figure 3: CLIENT-SIDE SYSTEM DIAGRAM showing matrix, cache, mappings, communications and messages to/from CDF service in struts environment.Ajax Editable data grid
Our design calls for an editable data grid UI very similar to the MS Access where the last row in the grid will operate as an input row. The user can also click, arrow or tab their way from cell to cell. When the focus leaves a cell, validation will be run. If validation is successful, the row will be marked as dirty. When focus leaves a row, a dirty row will be saved to the server. A failure in local validation or a server error message will result in the focus returning to the cell in error.
The GI Ajax toolkit has a component class called Matrix that in addition to being configurable into multi-column lists, trees, and tree-tables, also supports the editable data grid use case. A look in the source code for this project will show the logic used to handle the validation, dirty row, create and update record processes.GI XML Mapping rules
Even though we have configured the server to return CDF records we’ll still use GI’s XML Mapping utilities to perform a few tasks for us including:
- Mapping of model names to request parameter names i.e. GI’s empId attribute for a CDF record representing an employee becomes Struts’ expected currentEmployee.id parameter.
- Resolving URLs since the mapping files allow us to store URLs independent of code
- Formatting data as its flows in and out of messages between the Ajax RIA client and the server.
Messages
Figure 4: Struts message constants re-implemented as GI resources.Validation
Figure 5: Struts validations and their GI counterpart.
To a greater or lesser degree most business Ajax implementations will require detailed information about the server model. In order to perform local, client side validation for a given property, the framework needs to know the description of the property; it’s type, parent, label and validation messages. In the package org.apache.struts2.showcase.gi.util we provide two “rough and ready” utilities to generate the GI xml components for matrixes, columns, validation and messages from the struts model objects and validation configuration.
Testing
Tibco GI provides a logging class that provides the client side JavaScript equivalent to loj4j/log4net. Within your JavaScript code, you can instantiate the logger and call myLog.debug(), myLog.error(), myLog.fatal() the logger will dispatch the message to its handler based on its effectiveLevel. The mapping tool uses logging to manage messages generated from the mapping rules test tool.
Figure 6: Error logging in the mapping utility.Deployment
From the web server’s point of view, our Ajax application is deployed as static files in the following directory structure:
WebContent/ /gi /JSX /JSXAPPS /struts2
We provide a single index.html file with the following div:
<div style="width:100%;height:400px;"> <script type"text/javascript" src="JSX/js/JSX30.js" jsxapppath="JSXAPPS/struts2" jsxlt="true"> </script> </div>
Final Product
More To Explore:
- Struts 2
- TIBCO General Interface
- CRUD | http://www.theserverside.com/news/1364136/Ajax-CRUD-with-Struts2-and-Tibco-GI | CC-MAIN-2016-18 | en | refinedweb |
Hey, I'm trying to make a simple net send application with no GUI. The program basically asks for the hostname, the message, then proceeds with filling out the information which is needed for the system command net, command option send. (net send host message).
I have this so far, but it just types out the whole string into the system command, not including the values within the variables. I am quite new to C so I don't have much of an idea of what I should be doing. Here's how it looks:
Code:
#include <stdio.h>
#include <dos.h>
int main()
{
char host, msg;
printf("\nEnter the hostname: ");
scanf("%d", &host);
printf("\nEnter your message: ");
scanf("%d", &msg);
system("net send %d %d", &host, &msg);
system("pause");
}
Tell me what you think, any help is greatly appretiated.
Gordon. | http://cboard.cprogramming.com/c-programming/69018-help-using-variable-system-command-printable-thread.html | CC-MAIN-2016-18 | en | refinedweb |
Name | Synopsis | Description | Usage | Files | See Also
#include <utmpx.h> /var/adm/utmpx /var/adm/wtmpx
The utmpx and wtmpx files are extended database files that have superseded the obsolete utmp and wtmp database files.
The utmpx database contains user access and accounting information for commands such as who(1), write(1), and login(1). The wtmpx database contains the history of user access and accounting information for the utmpx database.)
Name | Synopsis | Description | Usage | Files | See Also | http://docs.oracle.com/cd/E19253-01/816-5174/6mbb98ukt/index.html | CC-MAIN-2016-18 | en | refinedweb |
The Berkeley DB environment is created and described by the db_env_create() and DB_ENV->open() interfaces. In situations where customization is desired, such as storing log files on a separate disk drive or selection of a particular cache size, applications must describe the customization by either creating an environment configuration file in the environment home directory or by arguments passed to other DB_ENV handle methods.
Once an environment has been created, database files specified using relative pathnames will be named relative to the home directory. Using pathnames relative to the home directory allows the entire environment to be easily moved, simplifying restoration and recovery of a database in a different directory or on a different system.
Applications first obtain an environment handle using the db_env_create() method, then call the DB_ENV->open() method which creates or joins the database environment. There are a number of options you can set to customize DB_ENV->open() for your environment. These options fall into four broad categories:
Most applications either specify only the DB_INIT_MPOOL flag or they specify all four subsystem initialization flags (DB_INIT_MPOOL, DB_INIT_LOCK, DB_INIT_LOG, and DB_INIT_TXN). The former configuration is for applications that simply want to use the basic Access Method interfaces with a shared underlying buffer pool, but don't care about recoverability after application or system failure. The latter is for applications that need recoverability. There are situations in which other combinations of the initialization flags make sense, but they are rare.
The DB_RECOVER flag is specified by applications that want to perform any necessary database recovery when they start running. That is, if there was a system or application failure the last time they ran, they want the databases to be made consistent before they start running again. It is not an error to specify this flag when no recovery needs to be done.
The DB_RECOVER_FATAL flag is more special-purpose. It performs catastrophic database recovery, and normally requires that some initial arrangements be made; that is, archived log files be brought back into the filesystem. Applications should not normally specify this flag. Instead, under these rare conditions, the db_recover utility should be used.
The following is a simple example of a function that opens a database environment for a transactional program.
DB_ENV * db_setup(char *home, char *data_dir, FILE *errfp, char *progname) { DB_ENV *dbenv; int ret; /* * Create an environment and initialize it for additional error * reporting. */ if ((ret = db_env_create(&dbenv, 0)) != 0) { fprintf(errfp, "%s: %s\n", progname, db_strerror(ret)); return (NULL); } dbenv->set_errfile(dbenv, errfp); dbenv->set_errpfx(dbenv, progname); /* * Specify the shared memory buffer pool cachesize: 5MB. * Databases are in a subdirectory of the environment home. */ if ((ret = dbenv->set_cachesize(dbenv, 0, 5 * 1024 * 1024, 0)) != 0) { dbenv->err(dbenv, ret, "set_cachesize"); goto err; } if ((ret = dbenv->add_data_dir(dbenv, data_dir)) != 0) { dbenv->err(dbenv, ret, "add_data_dir: %s", data_dir); goto err; } /* Open the environment with full transactional support. */ if ((ret = dbenv->open(dbenv, home, DB_CREATE | DB_INIT_LOG | DB_INIT_LOCK | DB_INIT_MPOOL | DB_INIT_TXN, 0)) != 0) { dbenv->err(dbenv, ret, "environment open: %s", home); goto err; } return (dbenv); err: (void)dbenv->close(dbenv, 0); return (NULL); } | http://docs.oracle.com/cd/E17276_01/html/programmer_reference/env_create.html | CC-MAIN-2016-18 | en | refinedweb |
I got two objects:
RemoteDBConnections@da4b71
and
RemoteDBConnections@1f4cbee
Thanks
Elwin
> Date: Thu, 5 Jun 2008 17:06:00 +0200> From: miki@ceti.pl> To: users@tomcat.apache.org>
Subject: Re: Singleton in Tomcat 6.0 not working> > ktou Ho wrote:> > The problem
I am facing is the Singleton is not working at all in the servlet. I tried to synchnozed the
constructor or make it static. I still get two instances of the objects. How can I solve the
problem? > > > Is anyone able to reproduce this?> > > > public class
RemoteDBConnections> > {> > private static RemoteDBConnections sInstance = new
RemoteDBConnections();> > private static int counter=0;> > > > private RemoteDBConnections()>
> {> > System.out.println("counter =" + (counter++));> > > Add following
line (assuming you do not override default toString method):> > System.out.println(this.toString());>
> and compare results. > > > > -- >>
_________________________________________________________________
It’s easy to add contacts from Facebook and other social sites through Windows Live™ Messenger.
Learn how. | http://mail-archives.apache.org/mod_mbox/tomcat-users/200806.mbox/%3CBAY118-W5691C38452630394BC7701A2B40@phx.gbl%3E | CC-MAIN-2016-18 | en | refinedweb |
#include <proton/import_export.h>
#include <proton/io.h>
Go to the source code of this file.
Additional API for the Driver Layer.
These additional driver functions allow the application to supply a separately created socket to the driver library.
Create a connector using the existing file descriptor.
Create a listener using the existing file descriptor. | http://qpid.apache.org/releases/qpid-proton-0.7/protocol-engine/c/api/driver__extras_8h.html | CC-MAIN-2016-18 | en | refinedweb |
Is Java a pure object oriented language?
Is Java a pure object oriented language? Hi,
Is Java a pure object oriented language?
thanks
Hi
No, Java is an object oriented programming language but not purely a object oriented language. In OOPs programming
An Overview of Java Java is a programming
language Java is Object Oriented Programming
;
Java as a programming language
Java is an Object oriented...;
Java as an Object Oriented Language
In this section, we will discuss the OOPs...
Master java in a week
Where to learn java programming language
and want to learn Java and become master of the Java programming language? Where to learn java programming language?
Thanks
Hi,
Java is Object oriented programming language. It's easy to start learning Java. You can learn how
fully object oriented language - Java Beginners
fully object oriented language Is java is a fully object oriented language?if no,why? Hi Friend,
Java is not a fully object oriented language because of the following reasons:
1)It uses primitive data type like
Master Java In A Week
Java is an Object oriented application programming language developed...
language.
Java as an Object Oriented Language...Master Java In A Week
Master Java Programming Language in a week
java is pure object oriented
java is pure object oriented java is pure object oriented or not.? reason.?
Java is not pure object oriented language because... are not object
2)It does not support operator overloading multiple inheritance.
3
Object-Oriented Language: Java / APIs, Java OOPs
Java OOPs
In this section we will learn Object Oriented (OOPs) Concepts ....
Java is one of the useful Object Oriented programming language. Other Object...,
Lasso, Perl 5,PHP5, VBScript, VBA etc. Java is popular object oriented
programming
Learn Java - Learn Java Quickly
Learn Java - Learn Java Quickly
Java is an object oriented
programming language... useful than other object oriented languages. It is now
most demanded
Core Java - Java Beginners
;Core JavaAn object oriented programming language that was introduced by Sun... talk about core java, we means very basic or from the scratch where you learn about...Core Java What is Java? I am looking for Core Java Training
Easiest way to learn Java
of the language like what is Java, about object oriented language,
methods of Java...There are various ways to learn Java language but the easiest way to learn... the language, the course is divided into two parts: core java and advanced
java. First - Java Beginners
to : Java How can we explain about an object to an interviewer Hi friend,
Object :
Object is the basic entity of object oriented Training and Tutorials, Core Java Training
Java Training and Tutorials, Core Java Training
Core Java Training
Java is a powerful object-oriented programming language with simple
code
Core Java tutorial for beginners
to learn the
basic of Java language. In order to make it simpler to understand... the concept of core java
completely then he/she finds it easy to learn... that programmer will
find useful in learning the language. Each Core Java tutorials
object oriented programming - Java Beginners
object oriented programming sir,
i read in the book tat object oriented program exhibits recurring structures.
i want to know "what is meant by recurring structures?" Hi Friend,
Any structure to be occurred over
Learn Java Programming
you proficient in the language.
Java is an Object Oriented Programming(OOP... of the programmer wants to learn this language quickly.
The basic of Java tutorials... grasp the fundamentals and coding basics quickly, they can learn Java language
Which language should you learn Java or .NET?
If you are confused about which language should you learn Java or .NET, well... is a high-level object-oriented programming language developed by the Sun... players, etc. have Java technology in their core. It is believed that around 1.1
Object-Oriented programming - Java Beginners
Object-Oriented programming Write a program to display the names and salaries of 5 employees.Make use of a class and an array.The salary of each employee should increase by 5% and displayed back. Hi friend,
Code
core java - Java Beginners
core java When we will use marker interface in our application? Hi friend,
Marker Interface :
In java language programming...://
Thanks
Core Java Tutorial for Beginners
Core Java tutorials for beginners help you to learn the language in an easy way giving you simple examples with explanation. Core Java is the basic of Java language. Java is an Object Oriented Programming language that is made open
core java - Java Beginners
-features.shtml
******************
2)Differences:
1)Java is pure object oriented programming language but c/c++ is not.
2)Java is platform independent but C...core java 1. What are the Advantages of Java?
2. What
Learn Java in a day
Learn Java in a day
Java is the most exciting object oriented programming language in
computer programming landscape. Java provides
core java - Development process
to : java what is an Instanciation? Hi friend,
When we create object from a class, it is created on stack or on heap.
The existance
Learn Java in 24 hours
and
how it is different from other object oriented language like C, C++. Java... to execute a particular task.
Oops Concept
Java is an Object Oriented...Learning Java programming language is very simple especially for developers
Master java in a week
to
instantiate an object of the class. This is used by the Java interpreter...;
Class Declaration:
Class is the building block in Java, each
and every methods & variable exists within the class or object
core java - Java Beginners
/java/language/java-keywords.shtml
Thanks...core java how many keywords are in java? give with category?
Learn Java
Technology development.
Java an Object Oriented based Programming language is being...There is a need to learn Java programming in today?s world as it has become... other language.
Java has become so famous that wherever a computer is Java
core java - Java Interview Questions
core java - Use of polymorphism in object oriented programming Hi... URL. ... for the same method. Polymorphism can be implemented in Java language in form
Learn Java online
Video tutorials, Examples, etc. that guides a beginner to learn the language and
eventually master it.
The other fact is learning Java online is free... can learn at his/her pace.
Learning Java online is not difficult because every
How to learn Java easily?
, Java fields and methods, Java as
Object Oriented language, Encapsulation... a simple language and
easy way to understand and learn the language. Java on its...If you are wondering how to learn Java easily, well its simple, learn
Java Reference Books
;
The Java Language Specification
The Java programming language is a general-purpose, concurrent, class-based, object-oriented language. It is designed to be simple enough that many programmers
Java as a general purpose language
Java as a general purpose language
Java is an Object oriented application programming
language... language, be it a hardware
platform or any operating system.
Java programs run
Core Java Training Topics
Programming Language
Object-Oriented concepts...
Core Java Training Topics
Core Java Training Course
Objective
To teach
java object - Java Beginners
information. object i want to where in the memory the java objects,local... are stayed Hi friend,
Instance Variables (Non-static fields): In object
Core Java Doubts - Java Beginners
Core Java Doubts 1)How to swap two numbers suppose a=5 b=10; without... and interface, visit the following links:
5
Java Courses
in Java anywhere in the world. They can use this course to master Java language... Programming Language
Object-Oriented concepts
Syntax (Data Type, Variables... and every
topic. To start with the course, very basic concepts of Object Oriented
java object - Java Beginners
of objects. The primitive data type and keyword void is work as a class object.
Object: Object is the basic entity of object oriented programming language...java object i want a complete memory description of objects,methods
Core Java Topics
of every core java topic and simple example of core java that will help you learn...
A First Java Program
Object-Oriented Programming
Classes and Objects
Fields...Core Java Topics
Following are the topics, which are covered under Core Java
to learn java
to learn java I am b.com graduate. Can l able to learn java platform without knowing any basics software language.
Learn Java from the following link:
Java Tutorials
Here you will get several java tutorials
Learn Java in 21 days
of the simplest object oriented language, where a
programmer does not need to code... on to more complex programming.
What is Java?
Java is an object oriented... and data. Every object has some properties. A language like Java follow
OOPs
Learn java
Learn java Hi,
I am absolute beginner in Java programming Language. Can anyone tell me how I can learn:
a) Basics of Java
b) Advance Java
c) Java frameworks
and anything which is important.
Thanks
New to Java programming
should learn the core concepts of Java programming
language and learn how...I am new to Java, how I can learn Java programming Language?
Java... Application.
So, first thing in learning the Java programming language is to learn
How to learn Java with no programming experience
is the most exciting object oriented programming language in computer...Tutorial, how to learn Java provides you the opportunity to you about....
Apart from these, topic How to learn Java with no programming
experience
java language
java language Define a class named Doctor whose objects are records...) and identification number (use the type String). A Billing object will contain a Patient object and a Doctor object. Give your classes a reasonable complement
Marvellous chance to learn java from the Java Experts
Marvellous chance to learn java from the Java Experts...:
Makes you an expert on Core Java technology... for Software Development on Java Platform.
Learn to implement
JAVA(core) - Java Beginners
JAVA(core) Core Java In java 'null' is keyword which means object have nothing to store. even not allocated memory
Java Programming Language
Introduction to Java programming Language
Java is one of the Object-Oriented... A. Gosling developed the Java
programming language in the year 1995 at Sun...
JME - For developing Mobile applications
Java programming language
In Java
java is java purely object oriented language
core java - Java Beginners
core java how to write a simple java program? Hi friend...");
}
}
-------------------------------------------
Read for more information.
Thanks
core java
core java what is difference between specifier and modifier?
what is difference between code and data?
what is difference between instance and object
core Java - Java Beginners
core Java how is it possible to create object before calling main() in Java
Core Java
Core Java Is Java supports Multiple Inheritance? Then How ?
There is typo the question is ,
What is Marker Interface and where it can... information, visit the following link:
Learn Interface
Thanks
core java
core java java program using transient variable
Hi... of the object. Variables that are part of the persistent state of an object must be saved when the object is archived.
Here is an example:
public class
Core java interview question, object creation.
Core java interview question, object creation. How can we restrict to create objects for more than five? That means i want only 5 objects, how to restrict to create 6th objects
Core java - Java Beginners
://
Thanks Hi...Core java Hello sir/madam,
Can you please tell me why multiple inheritance from java is removed.. with any example..
Thank you
Java
Java Whether Java is pure object oriented Language
core java - Java Beginners
core java what is object serialization ?
with an example
Hi Friend,
Please visit the following link:
Thanks
CORE JAVA
CORE JAVA Q)How to sort a Hash Table, if you pass an Employee Class Object in that First Name, Last Name, Middle Name based on last name how you sorting the Employee Class?
Q)How to display the list variables in reverse
Important Interview Questions in Core Java
Important Interview Questions in Core Java
Core Java refers to the fundamentals of Java, necessary to learn all essential components for being a Java programmer. Core java is not only essential for beginners but also for professionals
core java
core java 1)How to short this {5, 4, 7, 8, 2, 0, 9} give me logic... max connection pool is 10 and all 10 connection object are being used, if same time a request is come that want connection object what will happened
core java
core java 1.Given:
voidwaitForSignal() {
Object obj = new Object();
synchronized (Thread.currentThread()) {
obj.wait();
obj.notify();
}
}
Which statement is true?
A. This code can throw an InterruptedException.
B. This code
Java Programming Language
the languages of Java. Java is a portable programming language which was
developed... Microsystems. The nucleus of the java platform is the Java
programming language... applications and of course for aiming core java J2SE is designed.
It is noted that Sun
core java
core java Hello,
can any one please expain me.I have doubt in Object class. Why it is necessary to override hashCode method with equals method...;
}
public boolean equals(Object obj)
{
B b=(B)obj;
boolean falg=(i==b.i&
Java beginners Course
Java course helps novices to learn the language in a very easy and quick manner...?
Object oriented language
Downloading and Installing JDK
Installing...Java beginners course at RoseIndia is one of the best way to learn Java
Java Training - Corporate Java Training
them expert training and hands on exercise to really master the Core
Java...;
Learn through Java Training:
Java is one of the most popular programming language to
develop applications based on open source technology. Java.... There are several ways by
which a young developer can learn Java programming, one
Roseindia Java Tutorials
as an Object Oriented Language
JEE 5 Tutorial
JDK 6 Tutorial
Java IO... at Roseindia in a single click and learn the core concepts of Java programming... programming language that enabled the learners compile and run their own Java
Introduction to Java
Java is an open source object oriented programming language... features it provide to them over
other languages.
Java is a simple language..., Mike Sheridan under James Goslings decided to develop a programming
language
Learning the Java Language
Learning the Java Language Hi,
I am beginner in Java and I want to learn Java very quickly. Tell me which url will help me in Learning the Java Language?
Thanks
Hi,
Please see Java Tutorials section.
Thanks
core java - Java Beginners
core java hi,
we know that String object is immutable in nature so like string object can we create our own object and that object should be immutable i mean we cant modify our own object plzzzzzzzzzzzzz tell me the answer
core java
core java how to display characters stored in array in core java
Learn Java for beginner
Java is the most used programming language in the world. Java provides... and
available to anyone. This is the advantage Java has on other language and is the
main reason behind its popularity.
More and more students are opting to learn Java
Core Java Jobs at Rose India
Core Java Jobs at Rose India
This Core Java Job is unique among... with your job. You can work in Core Java and we will provide you
training
core java
core java basic java interview question
Java Video Tutorial: How to learn Java?
technology. A beginner in the Java programming language first learn
the how... programming language concept.
Finally you can learn the advanced concepts of the Java...Java Video tutorial for beginners - Provides learn first information
about
How to learn Java?
to
learn Java language than there are two ways you can do that. One is the
classroom... training.
A young developer who is seeking a career in Java can learn the language... it from home but also because its gives you the
freedom to learn the language
CORE JAVA
CORE JAVA CORE JAVA PPT NEED WITH SOURCE CODE EXPLANATION CAN U ??
Core Java Tutorials
core java
core java i need core java material
Hello Friend,
Please visit the following link:
Core Java
Thanks
Java Example Codes and Tutorials
Wide Web.
Java is an object-oriented language, and this is very
similar to C...?
Java is a high-level object-oriented programming language...
as an Internet Language
Java is an object oriented
java
programming languages.
* Java is object-oriented, that is used to build modular...java what is java
Java is a programming language... is secure. The Java language, compiler, interpreter and runtime environment
Core Java - Java Beginners
Core Java How can I take input? hai....
u can take input... InputStreamReader(System.in));//creating object
System.out.println('\7... information :
Thanks
Core java - Java Interview Questions
Core java Dear Sir/Mam
why java support dynamic object creation?please discuss advatage and disadvantage
Core JAva
Core JAva how to swap 2 variables without temp in java
Advertisements
If you enjoyed this post then why not add us on Google+? Add us to your Circles | http://roseindia.net/tutorialhelp/comment/89274 | CC-MAIN-2016-18 | en | refinedweb |
Table of Contents
A Bloom filter is a set-like data structure that is highly efficient in its use of space. It only supports two operations: insertion and membership querying. Unlike a normal set data structure, a Bloom filter can give incorrect answers. If we query it to see whether an element that we have inserted is present, it will answer affirmatively. If we query for an element that we have not inserted, it might incorrectly claim that the element is present.
For many applications, a low rate of false positives is tolerable. For instance, the job of a network traffic shaper is to throttle bulk transfers (e.g. BitTorrent) so that interactive sessions (such as ssh sessions or games) see good response times. A traffic shaper might use a Bloom filter to determine whether a packet belonging to a particular session is bulk or interactive. If it misidentifies one in ten thousand bulk packets as interactive and fails to throttle it, nobody will notice.
The attraction of a Bloom filter is its space efficiency. If we want to build a spell checker, and have a dictionary of half a million words, a set data structure might consume 20 megabytes of space. A Bloom filter, in contrast, would consume about half a megabyte, at the cost of missing perhaps 1% of misspelled words.
Behind the scenes, a Bloom filter is remarkably simple. It consists of a bit array and a handful of hash functions. We'll use k for the number of hash functions. If we want to insert a value into the Bloom filter, we compute k hashes of the value, and turn on those bits in the bit array. If we want to see whether a value is present, we compute k hashes, and check all of those bits in the array to see if they are turned on.
To see how this works, let's say we want to insert the
strings
"foo" and
"bar" into a Bloom
filter that is 8 bits wide, and we have two hash
functions.
Compute the two hashes of
"foo", and get
the values
1 and
6.
Set bits
1 and
6 in the bit
array.
Compute the two hashes of
"bar", and get
the values
6 and
3.
Set bits
6 and
3 in the bit
array.
This example should make it clear why we cannot remove an
element from a Bloom filter: both
"foo" and
"bar" resulted in bit 6 being set.
Suppose we now want to query the Bloom filter, to see
whether the values
"quux" and
"baz"
are present.
Compute the two hashes of
"quux", and get
the values
4 and
0.
Check bit
4 in the bit array. It is not
set, so
"quux" cannot be present. We do not
need to check bit
0.
Compute the two hashes of
"baz", and get
the values
1 and
3.
Check bit
1 in the bit array. It is
set, as is bit
3, so we say that
"baz" is present even though it is not. We
have reported a false positive.
For a survey of some of the uses of Bloom filters in networking, see [Broder02].
Not all users of Bloom filters have the same needs. In some cases, it suffices to create a Bloom filter in one pass, and only query it afterwards. For other applications, we may need to continue to update the Bloom filter after we create it. To accommodate these needs, we will design our library with mutable and immutable APIs.
We will segregate the mutable and immutable APIs that we
publish by placing them in different modules:
BloomFilter for the immutable code, and
BloomFilter.Mutable for the mutable code.
In addition, we will create several “helper” modules that won't provide parts of the public API, but will keep the internal code cleaner.
Finally, we will ask the user of our API to provide a function that can generate a number of hashes of an element. This function will have the type a -> [Word32]. We will use all of the hashes that this function returns, so the list must not be infinite!
The data structure that we use for our Haskell Bloom filter is a direct translation of the simple description we gave earlier: a bit array and a function that computes hashes.
-- file: BloomFilter/Internal.hs module BloomFilter.Internal ( Bloom(..) , MutBloom(..) ) where import Data.Array.ST (STUArray) import Data.Array.Unboxed (UArray) import Data.Word (Word32) data Bloom a = B { blmHash :: (a -> [Word32]) , blmArray :: UArray Word32 Bool }
When we create our Cabal package, we will not be exporting
this
BloomFilter.Internal module. It exists purely
to let us control the visibility of names. We will import
BloomFilter.Internal into both the mutable and
immutable modules, but we will re-export from each module only
the type that is relevant to that module's API.
Unlike other Haskell arrays, a UArray contains unboxed values.
For a normal Haskell type, a value can be either fully
evaluated, an unevaluated thunk, or the special value
⊥, pronounced (and sometimes written)
“bottom”. The value ⊥ is a placeholder
for a computation that does not succeed. Such a computation
could take any of several forms. It could be an infinite
loop; an application of
error; or the
special value
undefined.
A type that can contain ⊥ is referred to as
lifted. All normal Haskell types are
lifted. In practice, this means that we can always write
error "eek!" or
undefined in place
of a normal expression.
This ability to store thunks or ⊥ comes with a performance cost: it adds an extra layer of indirection. To see why we need this indirection, consider the Word32 type. A value of this type is a full 32 bits wide, so on a 32-bit system, there is no way to directly encode the value ⊥ within 32 bits. The runtime system has to maintain, and check, some extra data to track whether the value is ⊥ or not.
An unboxed value does away with this indirection. In doing so, it gains performance, but sacrifices the ability to represent a thunk or ⊥. Since it can be denser than a normal Haskell array, an array of unboxed values is an excellent choice for numeric data and bits.
GHC implements a UArray of Bool values by packing eight array elements into each byte, so this type is perfect for our needs.
Back in the section called “Modifying array elements”, we mentioned that modifying an immutable array is prohibitively expensive, as it requires copying the entire array. Using a UArray does not change this, so what can we do to reduce the cost to bearable levels?
In an imperative language, we would simply modify the elements of the array in place; this will be our approach in Haskell, too.
Haskell provides a special monad, named ST[58], which lets us work safely with mutable state. Compared to the State monad, it has some powerful added capabilities.
We can thaw an immutable array to give a mutable array; modify the mutable array in place; and freeze a new immutable array when we are done.
We have the ability to use mutable references. This lets us implement data structures that we can modify after construction, as in an imperative language. This ability is vital for some imperative data structures and algorithms, for which similarly efficient purely functional alternatives have not yet been discovered.
The IO monad also provides these capabilities.
The major difference between the two is that the ST
monad is intentionally designed so that we can
escape from it back into pure Haskell code.
We enter the ST monad via the execution function
runST, in the same way as for most other
Haskell monads (except IO, of course), and we
escape by returning from
runST.
When we apply a monad's execution function, we expect it to
behave repeatably: given the same body and arguments, we must
get the same results every time. This also applies to
runST. To achieve this repeatability, the
ST monad is more restrictive than the
IO monad. We cannot read or write files, create
global variables, or fork threads. Indeed, although we can
create and work with mutable references and arrays, the type
system prevents them from escaping to the caller of
runST. A mutable array must be frozen into
an immutable array before we can return it, and a mutable
reference cannot escape at all.
The public interfaces that we provide for working with Bloom filters are worth a little discussion.
-- file: BloomFilter/Mutable.hs module BloomFilter.Mutable ( MutBloom , elem , notElem , insert , length , new ) where import Control.Monad (liftM) import Control.Monad.ST (ST) import Data.Array.MArray (getBounds, newArray, readArray, writeArray) import Data.Word (Word32) import Prelude hiding (elem, length, notElem) import BloomFilter.Internal (MutBloom(..))
We export several names that clash with names exported by
the Prelude. This is deliberate: we expect users of our modules
to import them with qualified names. This reduces the burden on
the memory of our users, as they should already be familiar with
the Prelude's
elem,
notElem, and
length
functions.
When we use a module written in this style, we might often
import it with a single-letter prefix, for instance as
import qualified BloomFilter.Mutable as M. This
would allow us to write
M.length, which
stays compact and readable.
Alternatively, we could import the module unqualified, and
import the Prelude while hiding the clashing names with
import Prelude hiding (length). This is much less
useful, as it gives a reader skimming the code no local cue that
they are not actually seeing the Prelude's
length.
Of course, we seem to be violating this precept in our own
module's header: we import the Prelude, and hide some of the
names it exports. There is a practical reason for this. We
define a function named
length. If we
export this from our module without first hiding the Prelude's
length, the compiler will complain that it
cannot tell whether to export our version of
length or the Prelude's.
While we could export the fully qualified name
BloomFilter.Mutable.length to eliminate the
ambiguity, that seems uglier in this case. This decision has no
consequences for someone using our module, just for ourselves as
the authors of what ought to be a “black box”, so
there is little chance of confusion here.
We put type declaration for our mutable Bloom filter in the
BloomFilter.Internal module, along with the
immutable Bloom type.
-- file: BloomFilter/Internal.hs data MutBloom s a = MB { mutHash :: (a -> [Word32]) , mutArray :: STUArray s Word32 Bool }
The STUArray type gives us a mutable unboxed
array that we can work with in the ST monad. To
create an STUArray, we use the
newArray function. The
new function belongs in the
BloomFilter.Mutable function.
-- file: BloomFilter/Mutable.hs new :: (a -> [Word32]) -> Word32 -> ST s (MutBloom s a) new hash numBits = MB hash `liftM` newArray (0,numBits-1) False
Most of the methods of STUArray are actually
implementations of the MArray typeclass, which is
defined in the
Data.Array.MArray module.
Our
length function is slightly
complicated by two factors. We are relying on our bit array's
record of its own bounds, and an MArray instance's
getBounds function has a monadic type. We
also have to add one to the answer, as the upper bound of the
array is one less than its actual length.
-- file: BloomFilter/Mutable.hs length :: MutBloom s a -> ST s Word32 length filt = (succ . snd) `liftM` getBounds (mutArray filt)
To add an element to the Bloom filter, we set all of the
bits indicated by the hash function. We use the
mod function to ensure that all of the
hashes stay within the bounds of our array, and isolate our code
that computes offsets into the bit array in one function.
-- file: BloomFilter/Mutable.hs insert :: MutBloom s a -> a -> ST s () insert filt elt = indices filt elt >>= mapM_ (\bit -> writeArray (mutArray filt) bit True) indices :: MutBloom s a -> a -> ST s [Word32] indices filt elt = do modulus <- length filt return $ map (`mod` modulus) (mutHash filt elt)
Testing for membership is no more difficult. If every bit indicated by the hash function is set, we consider an element to be present in the Bloom filter.
-- file: BloomFilter/Mutable.hs elem, notElem :: a -> MutBloom s a -> ST s Bool elem elt filt = indices filt elt >>= allM (readArray (mutArray filt)) notElem elt filt = not `liftM` elem elt filt
We need to write a small supporting function: a monadic
version of
all, which we will call
allM.
-- file: BloomFilter/Mutable.hs allM :: Monad m => (a -> m Bool) -> [a] -> m Bool allM p (x:xs) = do ok <- p x if ok then allM p xs else return False allM _ [] = return True
Our interface to the immutable Bloom filter has the same structure as the mutable API.
-- file: ch26/BloomFilter.hs module BloomFilter ( Bloom , length , elem , notElem , fromList ) where import BloomFilter.Internal import BloomFilter.Mutable (insert, new) import Data.Array.ST (runSTUArray) import Data.Array.IArray ((!), bounds) import Data.Word (Word32) import Prelude hiding (elem, length, notElem) length :: Bloom a -> Int length = fromIntegral . len len :: Bloom a -> Word32 len = succ . snd . bounds . blmArray elem :: a -> Bloom a -> Bool elt `elem` filt = all test (blmHash filt elt) where test hash = blmArray filt ! (hash `mod` len filt) notElem :: a -> Bloom a -> Bool elt `notElem` filt = not (elt `elem` filt)
We provide an easy-to-use means to create an immutable Bloom
filter, via a
fromList function. This
hides the ST monad from our users, so that they
only see the immutable type.
-- file: ch26/BloomFilter.hs fromList :: (a -> [Word32]) -- family of hash functions to use -> Word32 -- number of bits in filter -> [a] -- values to populate with -> Bloom a fromList hash numBits values = B hash . runSTUArray $ do mb <- new hash numBits mapM_ (insert mb) values return (mutArray mb)
The key to this function is
runSTUArray. We mentioned earlier that in
order to return an immutable array from the ST
monad, we must freeze a mutable array. The
runSTUArray function combines execution
with freezing. Given an action that returns an
STUArray, it executes the action using
runST; freezes the STUArray
that it returns; and returns that as a
UArray.
The
MArray typeclass provides a
freeze function that we could use instead,
but
runSTUArray is both more convenient and
more efficient. The efficiency lies in the fact that
freeze must copy the underlying data from
the STUArray to the new UArray, to
ensure that subsequent modifications of the
STUArray cannot affect the contents of the
UArray. Thanks to the type system,
runSTUArray can guarantee that an
STUArray is no longer accessible when it uses it to
create a UArray. It can thus share the underlying
contents between the two arrays, avoiding the copy.
Although provide our users with to need.
If we import both
BloomFilter.Easy and
BloomFilter, you might wonder what will happen if
we try to use a name exported by both. We already know that
if we import
BloomFilter unqualified and try to
use
length, GHC will issue an error
about ambiguity, because the Prelude also makes the name
length available.
The Haskell standard requires an implementation to be able
to tell when several names refer to the same
“thing”. For instance, the Bloom
type is exported by
BloomFilter and
BloomFilter.Easy. If we import both modules and
try to use Bloom, GHC will be able to see that
the Bloom re-exported from
BloomFilter.Easy is the same as the one exported
from
BloomFilter, and it will not report an
ambiguity.
A Bloom filter depends on fast, high-quality hashes for good performance and a low false positive rate. It is surprisingly difficult to write a general purpose hash function that has both of these properties.
Luckily for us, a fellow named Bob Jenkins developed some
hash functions that have exactly these properties, and he
placed the code in the public domain at[59] . He wrote his hash functions in C, so we can
easily use the FFI to create bindings to them. The specific
source file that we need from that site is named
lookup3.c.
We create a
cbits directory and download
it to there.
There remains one hitch: we will frequently need seven or
even ten hash functions. We really don't want to scrape
together that many different functions, and fortunately we do
not need to: in most cases, we can get away with just two. We
will see how shortly. The Jenkins hash library includes two
functions,
hashword2 and
hashlittle2, that compute two hash
values. Here is a C header file that describes the APIs of
these two functions. We save this to
cbits/lookup3.h.
/* save this file as lookup3.h */ #ifndef _lookup3_h #define _lookup3_h #include <stdint.h> #include <sys/types.h> /* only accepts uint32_t aligned arrays of uint32_t */ void hashword2(const uint32_t *key, /* array of uint32_t */ size_t length, /* number of uint32_t values */ uint32_t *pc, /* in: seed1, out: hash1 */ uint32_t *pb); /* in: seed2, out: hash2 */ /* handles arbitrarily aligned arrays of bytes */ void hashlittle2(const void *key, /* array of bytes */ size_t length, /* number of bytes */ uint32_t *pc, /* in: seed1, out: hash1 */ uint32_t *pb); /* in: seed2, out: hash2 */ #endif /* _lookup3_h */
A “salt” is a value that perturbs the hash value that the function computes. If we hash the same value with two different salts, we will get two different hashes. Since these functions compute two hashes, they accept two salts.
Here are our Haskell bindings to these functions.
-- file: BloomFilter/Hash.hs {-# LANGUAGE BangPatterns, ForeignFunctionInterface #-} module BloomFilter.Hash ( Hashable(..) , hash , doubleHash ) where import Data.Bits ((.&.), shiftR) import Foreign.Marshal.Array (withArrayLen) import Control.Monad (foldM) import Data.Word (Word32, Word64) import Foreign.C.Types (CSize) import Foreign.Marshal.Utils (with) import Foreign.Ptr (Ptr, castPtr, plusPtr) import Foreign.Storable (Storable, peek, sizeOf) import qualified Data.ByteString as Strict import qualified Data.ByteString.Lazy as Lazy import System.IO.Unsafe (unsafePerformIO) foreign import ccall unsafe "lookup3.h hashword2" hashWord2 :: Ptr Word32 -> CSize -> Ptr Word32 -> Ptr Word32 -> IO () foreign import ccall unsafe "lookup3.h hashlittle2" hashLittle2 :: Ptr a -> CSize -> Ptr Word32 -> Ptr Word32 -> IO ()
We have specified that the definitions of the functions
can be found in the
lookup3.h header file
that we just created.
For convenience and efficiency, we will combine the 32-bit salts consumed, and the hash values computed, by the Jenkins hash functions into a single 64-bit value.
-- file: BloomFilter/Hash.hs hashIO :: Ptr a -- value to hash -> CSize -- number of bytes -> Word64 -- salt -> IO Word64 hashIO ptr bytes salt = with (fromIntegral salt) $ \sp -> do let p1 = castPtr sp p2 = castPtr sp `plusPtr` 4 go p1 p2 peek sp where go p1 p2 | bytes .&. 3 == 0 = hashWord2 (castPtr ptr) words p1 p2 | otherwise = hashLittle2 ptr bytes p1 p2 words = bytes `div` 4
Without explicit types around to describe what is
happening, the above code is not completely obvious. The
with function allocates room for the salt
on the C stack, and stores the current salt value in there, so
sp is a Ptr Word64. The
pointers
p1 and
p2 are
Ptr Word32;
p1 points at the
low word of
sp, and
p2
at the high word. This is how we chop the single
Word64 salt into two Ptr Word32
parameters.
Because all of our data pointers are coming from the
Haskell heap, we know that they will be aligned on an address
that is safe to pass to either
hashWord2
(which only accepts 32-bit-aligned addresses) or
hashLittle2. Since
hashWord32 is the faster of the two
hashing functions, we call it if our data is a multiple of
4 bytes in size, otherwise
hashLittle2.
Since the C hash function will write the computed hashes
into
p1 and
p2, we only
need to
peek the pointer
sp to retrieve the computed hash.
We don't want clients of this module to be stuck fiddling with low-level details, so we use a typeclass to provide a clean, high-level interface.
-- file: BloomFilter/Hash.hs class Hashable a where hashSalt :: Word64 -- ^ salt -> a -- ^ value to hash -> Word64 hash :: Hashable a => a -> Word64 hash = hashSalt 0x106fc397cf62f64d3
We also provide a number of useful implementations of this typeclass. To hash basic types, we must write a little boilerplate code.
-- file: BloomFilter/Hash.hs hashStorable :: Storable a => Word64 -> a -> Word64 hashStorable salt k = unsafePerformIO . with k $ \ptr -> hashIO ptr (fromIntegral (sizeOf k)) salt instance Hashable Char where hashSalt = hashStorable instance Hashable Int where hashSalt = hashStorable instance Hashable Double where hashSalt = hashStorable
We might prefer to use the Storable typeclass to write just one declaration, as follows:
-- file: BloomFilter/Hash.hs instance Storable a => Hashable a where hashSalt = hashStorable
Unfortunately, Haskell does not permit us to write instances of this form, as allowing them would make the type system undecidable: they can cause the compiler's type checker to loop infinitely. This restriction on undecidable types forces us to write out individual declarations. It does not, however, pose a problem for a definition such as this one.
-- file: BloomFilter/Hash.hs hashList :: (Storable a) => Word64 -> [a] -> IO Word64 hashList salt xs = withArrayLen xs $ \len ptr -> hashIO ptr (fromIntegral (len * sizeOf x)) salt where x = head xs instance (Storable a) => Hashable [a] where hashSalt salt xs = unsafePerformIO $ hashList salt xs
The compiler will accept this instance, so we gain the ability to hash values of many list types[60]. Most importantly, since Char is an instance of Storable, we can now hash String values.
For tuple types, we take advantage of function composition. We take a salt in at one end of the composition pipeline, and use the result of hashing each tuple element as the salt for the next element.
-- file: BloomFilter/Hash.hs hash2 :: (Hashable a) => a -> Word64 -> Word64 hash2 k salt = hashSalt salt k instance (Hashable a, Hashable b) => Hashable (a,b) where hashSalt salt (a,b) = hash2 b . hash2 a $ salt instance (Hashable a, Hashable b, Hashable c) => Hashable (a,b,c) where hashSalt salt (a,b,c) = hash2 c . hash2 b . hash2 a $ salt
To hash ByteString types, we write special instances that plug straight into the internals of the ByteString types. This gives us excellent hashing performance.
-- file: BloomFilter/Hash.hs hashByteString :: Word64 -> Strict.ByteString -> IO Word64 hashByteString salt bs = Strict.useAsCStringLen bs $ \(ptr, len) -> hashIO ptr (fromIntegral len) salt instance Hashable Strict.ByteString where hashSalt salt bs = unsafePerformIO $ hashByteString salt bs rechunk :: Lazy.ByteString -> [Strict.ByteString] rechunk s | Lazy.null s = [] | otherwise = let (pre,suf) = Lazy.splitAt chunkSize s in repack pre : rechunk suf where repack = Strict.concat . Lazy.toChunks chunkSize = 64 * 1024 instance Hashable Lazy.ByteString where hashSalt salt bs = unsafePerformIO $ foldM hashByteString salt (rechunk bs)
Since a lazy ByteString is represented as a
series of chunks, we must be careful with the boundaries
between those chunks. The string
"foobar" can be
represented in five different ways, for example
["fo","obar"] or
["foob","ar"]. This
is invisible to most users of the type, but not to us since we
use the underlying chunks directly. Our
rechunk function ensures that the chunks
we pass to the C hashing code are a uniform 64KB in size, so
that we will give consistent hash values no matter where the
original chunk boundaries lie.
As we mentioned earlier, we need many more than two hashes to make effective use of a Bloom filter. We can use a technique called double hashing to combine the two values computed by the Jenkins hash functions, yielding many more hashes. The resulting hashes are of good enough quality for our needs, and far cheaper than computing many distinct hashes.
--
In the
BloomFilter.Easy module, we use our
new
doubleHash function to define the
easyList function whose type we defined
earlier.
-- file: BloomFilter/Easy.hs module BloomFilter.Easy ( suggestSizing , sizings , easyList -- re-export useful names from BloomFilter , B.Bloom , B.length , B.elem , B.notElem ) where import BloomFilter.Hash (Hashable, doubleHash) import Data.List (genericLength) import Data.Maybe (catMaybes) import Data.Word (Word32) import qualified BloomFilter as B easyList errRate values = case suggestSizing (genericLength values) errRate of Left err -> Left err Right (bits,hashes) -> Right filt where filt = B.fromList (doubleHash hashes) bits values
This depends on a
suggestSizing
function that estimates the best combination of filter size
and number of hashes to compute, based on our desired false
positive rate and the maximum number of elements that we
expect the filter to contain.
-- file: BloomFilter/Easy.hs suggestSizing :: Integer -- expected maximum capacity -> Double -- desired false positive rate -> Either String (Word32,Int) -- (filter size, number of hashes) suggestSizing capacity errRate | capacity <= 0 = Left "capacity too small" | errRate <= 0 || errRate >= 1 = Left "invalid error rate" | null saneSizes = Left "capacity too large" | otherwise = Right (minimum saneSizes) where saneSizes = catMaybes . map sanitize $ sizings capacity errRate sanitize (bits,hashes) | bits > maxWord32 - 1 = Nothing | otherwise = Just (ceiling bits, truncate hashes) where maxWord32 = fromIntegral (maxBound :: Word32) sizings :: Integer -> Double -> [(Double, Double)] sizings capacity errRate = [(((-k) * cap / log (1 - (errRate ** (1 / k)))), k) | k <- [1..50]] where cap = fromIntegral capacity
We perform some rather paranoid checking. For instance,
the
sizings function suggests pairs of
array size and hash count, but it does not validate its
suggestions. Since we use 32-bit hashes, we must filter out
suggested array sizes that are too large.
In our
suggestSizing function, we
attempt to minimise only the size of the bit array, without
regard for the number of hashes. To see why, let us
interactively explore the relationship between array size and
number of hashes.
Suppose we want to insert 10 million elements into a Bloom filter, with a false positive rate of 0.1%.
ghci>
let kbytes (bits,hashes) = (ceiling bits `div` 8192, hashes)
ghci>
:m +BloomFilter.Easy Data.ListCould not find module `BloomFilter.Easy': Use -v to see a list of the files searched for.
ghci>
mapM_ (print . kbytes) . take 10 . sort $ sizings 10000000 0.001<interactive>:1:35: Not in scope: `sort' <interactive>:1:42: Not in scope: `sizings'
We achieve the most compact table (just over 17KB) by computing 10 hashes. If we really were hashing the data repeatedly, we could reduce the number of hashes to 7 at a cost of 5% in space. Since we are using Jenkins's hash functions which compute two hashes in a single pass, and double hashing the results to produce additional hashes, the cost to us of computing extra those hashes is tiny, so we will choose the smallest table size.
If we increase our tolerance for false positives tenfold, to 1%, the amount of space and the number of hashes we need drop, though not by easily predictable amounts.
ghci>
mapM_ (print . kbytes) . take 10 . sort $ sizings 10000000 0.01<interactive>:1:35: Not in scope: `sort' <interactive>:1:42: Not in scope: `sizings'
We have created a moderately complicated library, with four
public modules and one internal module. To turn this into a
package that we can easily redistribute, we create a
rwh-bloomfilter.cabal file.
Cabal allows us to describe several libraries in a single
package. A
.cabal file begins with
information that is common to all of the libraries, which is
followed by a distinct section for each library.
Name: rwh-bloomfilter Version: 0.1 License: BSD3 License-File: License.txt Category: Data Stability: experimental Build-Type: Simple
As we are bundling some C code with our library, we tell Cabal about our C source files.
Extra-Source-Files: cbits/lookup3.c cbits/lookup3.h
The
extra-source-files directive has no effect
on a build: it directs Cabal to bundle some extra files if we
run runhaskell Setup sdist to create a source
tarball for redistribution.
Prior to 2007, the standard Haskell libraries were
organised in a handful of large packages, of which the biggest
was named
base. This organisation tied
many unrelated libraries together, so the Haskell community
split the
base package up into a number
of more modular libraries. For instance, the array types
migrated from
base into a package named
array.
A Cabal package needs to specify the other packages that
it needs to have present in order to build. This makes it
possible for Cabal's command line interface automatically
download and build a package's dependencies, if necessary. We
would like our code to work with as many versions of GHC as
possible, regardless of whether they have the modern layout of
base and numerous other packages. We
thus need to be able to specify that we depend on the
array package if it is present, and
base alone otherwise.
Cabal provides a generic
configurations feature, which we can use
to selectively enable parts of a
.cabal
file. A build configuration is controlled by a Boolean-valued
flag. If it is
True, the
text following an
if flag directive is used,
otherwise the text following the associated
else
is used.
Cabal-Version: >= 1.2 Flag split-base Description: Has the base package been split up? Default: True Flag bytestring-in-base Description: Is ByteString in the base or bytestring package? Default: False
The configurations feature was introduced in version 1.2 of Cabal, so we specify that our package cannot be built with an older version.
The meaning of the
split-base flag should
be self-explanatory.
The
bytestring-in-base flag deals with a
more torturous history. When the
bytestring package was first created,
it was bundled with GHC 6.4, and kept separate from the
base package. In GHC 6.6, it was
incorporated into the
base package,
but it became independent again when the
base package was split before the
release of GHC 6.8.1.
These flags are usually invisible to people building a
package, because Cabal handles them automatically. Before we
explain what happens, it will help to see the beginning of the
Library section of our
.cabal
file.
Library if flag(bytestring-in-base) -- bytestring was in base-2.0 and 2.1.1 Build-Depends: base >= 2.0 && < 2.2 else -- in base 1.0 and 3.0, bytestring is a separate package Build-Depends: base < 2.0 || >= 3, bytestring >= 0.9 if flag(split-base) Build-Depends: base >= 3.0, array else Build-Depends: base < 3.0
Cabal creates a package description with the default
values of the flags (a missing default is assumed to be
True). If that configuration can be built (e.g.
because all of the needed package versions are available), it
will be used. Otherwise, Cabal tries different combinations
of flags until it either finds a configuration that it can
build or exhausts the alternatives.
For example, if we were to begin with both
split-base and
bytestring-in-base
set to
True, Cabal would select the following
package dependencies.
Build-Depends: base >= 2.0 && < 2.2 Build-Depends: base >= 3.0, array
The
base package cannot
simultaneously be newer than
3.0 and older than
2.2, so Cabal would reject this configuration as
inconsistent. For a modern version of GHC, after a few
attempts it would discover this configuration that will indeed
build.
-- in base 1.0 and 3.0, bytestring is a separate package Build-Depends: base < 2.0 || >= 3, bytestring >= 0.9 Build-Depends: base >= 3.0, array
When we run runhaskell Setup configure,
we can manually specify the values of flags via the
--flag option, though we will rarely need to
do so in practice.
Continuing with our
.cabal
file, we fill out the remaining details of the Haskell side of
our library. If we enable profiling when we build, we want
all of our top-level functions to show up in any profiling
output.
GHC-Prof-Options: -auto-all
The
Other-Modules property lists Haskell
modules that are private to the library. Such modules will be
invisible to code that uses this package.
When we build this package with GHC, Cabal will pass the
options from the
GHC-Options property to the
compiler.
The
-O2 option makes GHC optimise our
code aggressively. Code compiled without optimisation is very
slow, so we should always use
-O2 for
production code.
To help ourselves to write cleaner code, we usually add
the
-Wall option, which enables all of
GHC's warnings. This will cause GHC to issue complaints
if it encounters potential problems, such as overlapping
patterns; function parameters that are not used; and a myriad
of other potential stumbling blocks. While it is often safe to
ignore these warnings, we generally prefer to fix up our code
to eliminate them. The small added effort usually yields code
that is easier to read and maintain.
When we compile with
-fvia-C, GHC will
generate C code and use the system's C compiler to compile it,
instead of going straight to assembly language as it usually
does. This slows compilation down, but sometimes the C
compiler can further improve GHC's optimised code, so it can
be worthwhile.
We include
-fvia-C here mainly to show
how to make compilation with it work.
C-Sources: cbits/lookup3.c CC-Options: -O3 Include-Dirs: cbits Includes: lookup3.h Install-Includes: lookup3.h
For the
C-Sources property, we only need to
list files that must be compiled into our library. The
CC-Options property contains options for the C
compiler (
-O3 specifies a high level of
optimisation). Because our FFI bindings for the Jenkins hash
functions refer to the
lookup3.h header
file, we need to tell Cabal where to find the header file. We
must also tell it to install the header
file (
Install-Includes), as otherwise client code
will fail to find the header file when we try to build
it.
Before we pay any attention to performance, we want to establish that our Bloom filter behaves correctly. We can easily use QuickCheck to test some basic properties.
-- file: examples/BloomCheck.hs {-# LANGUAGE GeneralizedNewtypeDeriving #-} module Main where import BloomFilter.Hash (Hashable) import Data.Word (Word8, Word32) import System.Random (Random(..), RandomGen) import Test.QuickCheck import qualified BloomFilter.Easy as B import qualified Data.ByteString as Strict import qualified Data.ByteString.Lazy as Lazy
We will not use the normal
quickCheck
function to test our properties, as the 100 test inputs that it
generates do not provide much coverage.
-- file: examples/BloomCheck.hs handyCheck :: Testable a => Int -> a -> IO () handyCheck limit = check defaultConfig { configMaxTest = limit , configEvery = \_ _ -> "" }
Our first task is to ensure that if we add a value to a Bloom filter, a subsequent membership test will always report it as present, no matter what the chosen false positive rate or input value is.
We will use the
easyList function to
create a Bloom filter. The Random instance for
Double generates numbers in the range zero to one,
so QuickCheck can nearly supply us with
arbitrary false positive rates.
However, we need to ensure that both zero and one are excluded from the false positives we test with. QuickCheck gives us two ways to do this.
By construction: we specify the
range of valid values to generate. QuickCheck provides a
forAll combinator for this
purpose.
By elimination: when QuickCheck
generates an arbitrary value for us, we filter out those
that do not fit our criteria, using the
(==>) operator. If we reject a
value in this way, a test will appear to succeed.
If we can choose either method, it is always preferable to take the constructive approach. To see why, suppose that QuickCheck generates 1,000 arbitrary values for us, and we filter out 800 as unsuitable for some reason. We will appear to run 1,000 tests, but only 200 will actually do anything useful.
Following this idea, when we generate desired false positive rates, we could eliminate zeroes and ones from whatever QuickCheck gives us, but instead we construct values in an interval that will always be valid.
-- file: examples/BloomCheck.hs falsePositive :: Gen Double falsePositive = choose (epsilon, 1 - epsilon) where epsilon = 1e-6 (=~>) :: Either a b -> (b -> Bool) -> Bool k =~> f = either (const True) f k prop_one_present _ elt = forAll falsePositive $ \errRate -> B.easyList errRate [elt] =~> \filt -> elt `B.elem` filt
Our small combinator,
(=~>), lets us
filter out failures of
easyList: if it
fails, the test automatically passes.
QuickCheck requires properties to be monomorphic. Since we have many different hashable types that we would like to test, we would very much like to avoid having to write the same test in many different ways.
Notice that although our
prop_one_present function is polymorphic,
it ignores its first argument. We use this to simulate
monomorphic properties, as follows.
ghci>
:load BloomCheckBloomCheck.hs:9:17: Could not find module `BloomFilter.Easy': Use -v to see a list of the files searched for. Failed, modules loaded: none.
ghci>
:t prop_one_present<interactive>:1:0: Not in scope: `prop_one_present'
ghci>
:t prop_one_present (undefined :: Int)<interactive>:1:0: Not in scope: `prop_one_present'
We can supply any value as the first argument to
prop_one_present. All that matters is
its type, as the same type will be used
for the first element of the second argument.
ghci>
handyCheck 5000 $ prop_one_present (undefined :: Int)<interactive>:1:0: Not in scope: `handyCheck' <interactive>:1:18: Not in scope: `prop_one_present'
ghci>
handyCheck 5000 $ prop_one_present (undefined :: Double)<interactive>:1:0: Not in scope: `handyCheck' <interactive>:1:18: Not in scope: `prop_one_present'
If we populate a Bloom filter with many elements, they should all be present afterwards.
-- file: examples/BloomCheck.hs prop_all_present _ xs = forAll falsePositive $ \errRate -> B.easyList errRate xs =~> \filt -> all (`B.elem` filt) xs
ghci>
handyCheck 2000 $ prop_all_present (undefined :: Int)<interactive>:1:0: Not in scope: `handyCheck' <interactive>:1:18: Not in scope: `prop_all_present'
The QuickCheck library does not provide
Arbitrary instances for ByteString
types, so we must write our own. Rather than create a
ByteString directly, we will use a
pack function to create one from a
[Word8].
-- file: examples/BloomCheck.hs instance Arbitrary Lazy.ByteString where arbitrary = Lazy.pack `fmap` arbitrary coarbitrary = coarbitrary . Lazy.unpack instance Arbitrary Strict.ByteString where arbitrary = Strict.pack `fmap` arbitrary coarbitrary = coarbitrary . Strict.unpack
Also missing from QuickCheck are Arbitrary
instances for the fixed-width types defined in
Data.Word and
Data.Int. We need to
at least create an Arbitrary instance for
Word8.
-- file: examples/BloomCheck.hs instance Random Word8 where randomR = integralRandomR random = randomR (minBound, maxBound) instance Arbitrary Word8 where arbitrary = choose (minBound, maxBound) coarbitrary = integralCoarbitrary
We support these instances with a few common functions so that we can reuse them when writing instances for other integral types.
-- file: examples/BloomCheck.hs integralCoarbitrary n = variant $ if m >= 0 then 2*m else 2*(-m) + 1 where m = fromIntegral n integralRandomR (a,b) g = case randomR (c,d) g of (x,h) -> (fromIntegral x, h) where (c,d) = (fromIntegral a :: Integer, fromIntegral b :: Integer) instance Random Word32 where randomR = integralRandomR random = randomR (minBound, maxBound) instance Arbitrary Word32 where arbitrary = choose (minBound, maxBound) coarbitrary = integralCoarbitrary
With these Arbitrary instances created, we can try our existing properties on the ByteString types.
ghci>
handyCheck 1000 $ prop_one_present (undefined :: Lazy.ByteString)<interactive>:1:0: Not in scope: `handyCheck' <interactive>:1:18: Not in scope: `prop_one_present' <interactive>:1:49: Failed to load interface for `Lazy': Use -v to see a list of the files searched for.
ghci>
handyCheck 1000 $ prop_all_present (undefined :: Strict.ByteString)<interactive>:1:0: Not in scope: `handyCheck' <interactive>:1:18: Not in scope: `prop_all_present' <interactive>:1:49: Failed to load interface for `Strict': Use -v to see a list of the files searched for.
The cost of testing properties of easyList
increases rapidly as we increase the number of tests to run.
We would still like to have some assurance that
easyList will behave well on huge inputs.
Since it is not practical to test this directly, we can use a
proxy: will
suggestSizing give a sensible
array size and number of hashes even with extreme
inputs?
This is a slightly tricky property to check. We need to
vary both the desired false positive rate and the expected
capacity. When we looked at some results from the
sizings function, we saw that the
relationship between these values is not easy to
predict.
We can try to ignore the complexity.
-- file: examples/BloomCheck.hs prop_suggest_try1 = forAll falsePositive $ \errRate -> forAll (choose (1,maxBound :: Word32)) $ \cap -> case B.suggestSizing (fromIntegral cap) errRate of Left err -> False Right (bits,hashes) -> bits > 0 && bits < maxBound && hashes > 0
Not surprisingly, this gives us a test that is not actually useful.
ghci>
handyCheck 1000 $ prop_suggest_try1<interactive>:1:0: Not in scope: `handyCheck' <interactive>:1:18: Not in scope: `prop_suggest_try1'
ghci>
handyCheck 1000 $ prop_suggest_try1<interactive>:1:0: Not in scope: `handyCheck' <interactive>:1:18: Not in scope: `prop_suggest_try1'
When we plug the counterexamples that QuickCheck prints
into
suggestSizings, we can see that
these inputs are rejected because they would result in a bit
array that would be too large.
ghci>
B.suggestSizing 1678125842 8.501133057303545e-3<interactive>:1:0: Failed to load interface for `B': Use -v to see a list of the files searched for.
Since we can't easily predict which combinations will cause this problem, we must resort to eliminating sizes and false positive rates before they bite us.
-- file: examples/BloomCheck.hs prop_suggest_try2 = forAll falsePositive $ \errRate -> forAll (choose (1,fromIntegral maxWord32)) $ \cap -> let bestSize = fst . minimum $ B.sizings cap errRate in bestSize < fromIntegral maxWord32 ==> either (const False) sane $ B.suggestSizing cap errRate where sane (bits,hashes) = bits > 0 && bits < maxBound && hashes > 0 maxWord32 = maxBound :: Word32
If we try this with a small number of tests, it seems to work well.
ghci>
handyCheck 1000 $ prop_suggest_try2<interactive>:1:0: Not in scope: `handyCheck' <interactive>:1:18: Not in scope: `prop_suggest_try2'
On a larger body of tests, we filter out too many combinations.
ghci>
handyCheck 10000 $ prop_suggest_try2<interactive>:1:0: Not in scope: `handyCheck' <interactive>:1:19: Not in scope: `prop_suggest_try2'
To deal with this, we try to reduce the likelihood of generating inputs that we will subsequently reject.
-- file: examples/BloomCheck.hs prop_suggestions_sane = forAll falsePositive $ \errRate -> forAll (choose (1,fromIntegral maxWord32 `div` 8)) $ \cap -> let size = fst . minimum $ B.sizings cap errRate in size < fromIntegral maxWord32 ==> either (const False) sane $ B.suggestSizing cap errRate where sane (bits,hashes) = bits > 0 && bits < maxBound && hashes > 0 maxWord32 = maxBound :: Word32
Finally, we have a robust looking property.
ghci>
handyCheck 40000 $ prop_suggestions_sane<interactive>:1:0: Not in scope: `handyCheck' <interactive>:1:19: Not in scope: `prop_suggestions_sane'
We now have a correctness base line: our QuickCheck tests pass. When we start tweaking performance, we can rerun the tests at any time to ensure that we haven't inadvertently broken anything.
Our first step is to write a small test application that we can use for timing.
-- file: examples/WordTest.hs module Main where import Control.Parallel.Strategies (NFData(..)) import Control.Monad (forM_, mapM_) import qualified BloomFilter.Easy as B import qualified Data.ByteString.Char8 as BS import Data.Time.Clock (diffUTCTime, getCurrentTime) import System.Environment (getArgs) import System.Exit (exitFailure) timed :: (NFData a) => String -> IO a -> IO a timed desc act = do start <- getCurrentTime ret <- act end <- rnf ret `seq` getCurrentTime putStrLn $ show (diffUTCTime end start) ++ " to " ++ desc return ret instance NFData BS.ByteString where rnf _ = () instance NFData (B.Bloom a) where rnf filt = B.length filt `seq` ()
We borrow the
rnf function that we
introduced in the section called “Separating algorithm from evaluation” to develop
a simple timing harness. Out
timed action
ensures that a value is evaluated to normal form in order to
accurately capture the cost of evaluating it.
The application creates a Bloom filter from the contents of a file, treating each line as an element to add to the filter.
-- file: examples/WordTest.hs main = do args <- getArgs let files | null args = ["/usr/share/dict/words"] | otherwise = args forM_ files $ \file -> do words <- timed "read words" $ BS.lines `fmap` BS.readFile file let len = length words errRate = 0.01 putStrLn $ show len ++ " words" putStrLn $ "suggested sizings: " ++ show (B.suggestSizing (fromIntegral len) errRate) filt <- timed "construct filter" $ case B.easyList errRate words of Left errmsg -> do putStrLn $ "Error: " ++ errmsg exitFailure Right filt -> return filt timed "query every element" $ mapM_ print $ filter (not . (`B.elem` filt)) words
We use
timed to account for the costs
of three distinct phases: reading and splitting the data into
lines; populating the Bloom filter; and querying every element
in it.
If we compile this and run it a few times, we can see that the execution time is just long enough to be interesting, while the timing variation from run to run is small. We have created a plausible-looking microbenchmark.
$
ghc -O2 --make WordTest[1 of 1] Compiling Main ( WordTest.hs, WordTest.o ) Linking WordTest ...
$
./WordTest0.196347s to read words 479829 words 1.063537s to construct filter 4602978 bits 0.766899s to query every element
$
./WordTest0.179284s to read words 479829 words 1.069363s to construct filter 4602978 bits 0.780079s to query every element
To understand where our program might benefit from some tuning, we rebuild it and run it with profiling enabled.
Since we already built
WordTest and
have not subsequently changed it, if we rerun ghc to enable
profiling support, it will quite reasonably decide to do
nothing. We must force it to rebuild, which we accomplish by
updating the filesystem's idea of when we last edited the
source file.
$
touch WordTest.hs
$
ghc -O2 -prof -auto-all --make WordTest[1 of 1] Compiling Main ( WordTest.hs, WordTest.o ) Linking WordTest ...
$
./WordTest +RTS -p0.322675s to read words 479829 words suggested sizings: Right (4602978,7) 2.475339s to construct filter 1.964404s to query every element
$
head -20 WordTest.proftotal time = 4.10 secs (205 ticks @ 20 ms) total alloc = 2,752,287,168 bytes (excludes profiling overheads) COST CENTRE MODULE %time %alloc doubleHash BloomFilter.Hash 48.8 66.4 indices BloomFilter.Mutable 13.7 15.8 elem BloomFilter 9.8 1.3 hashByteString BloomFilter.Hash 6.8 3.8 easyList BloomFilter.Easy 5.9 0.3 hashIO BloomFilter.Hash 4.4 5.3 main Main 4.4 3.8 insert BloomFilter.Mutable 2.9 0.0 len BloomFilter 2.0 2.4 length BloomFilter.Mutable 1.5 1.0
Our
doubleHash function immediately
leaps out as a huge time and memory sink.
Recall that the body of
doubleHash is
an innocuous list comprehension.
--
Since the function returns a list, it makes some sense that it allocates so much memory, but when code this simple performs so badly, we should be suspicious.
Faced with a performance mystery, the suspicious mind will naturally want to inspect the output of the compiler. We don't need to start scrabbling through assembly language dumps: it's best to start at a higher level.
GHC's
-ddump-simpl option prints out
the code that it produces after performing all of its
high-level optimisations.
$
ghc -O2 -c -ddump-simpl --make BloomFilter/Hash.hs > dump.txt[1 of 1] Compiling BloomFilter.Hash ( BloomFilter/Hash.hs )
The file thus produced is about a thousand lines long.
Most of the names in it are mangled somewhat from their
original Haskell representations. Even so, searching for
doubleHash will immediately drop us at
the definition of the function. For example, here is how we
might start exactly at the right spot from a Unix
shell.
$
less +/doubleHash dump.txt
It can be difficult to start reading the output of GHC's
simplifier. There are many automatically generated names, and
the code has many obscure annotations. We can make substantial
progress by ignoring things that we do not understand,
focusing on those that look familiar. The Core language shares
some features with regular Haskell, notably type signatures;
let for variable binding; and
case for pattern
matching.
If we skim through the definition of
doubleHash, we will arrive at a section
that looks something like this.
__letrec {
go_s1YC :: [GHC.Word.Word32] -> [GHC.Word.Word32]go_s1YC :: [GHC.Word.Word32] -> [GHC.Word.Word32]
[Arity 1 Str: DmdType S] go_s1YC = \ (ds_a1DR :: [GHC.Word.Word32]) -> case ds_a1DR of wild_a1DS { [] -> GHC.Base.[] @ GHC.Word.Word32;[Arity 1 Str: DmdType S] go_s1YC = \ (ds_a1DR :: [GHC.Word.Word32]) -> case ds_a1DR of wild_a1DS { [] -> GHC.Base.[] @ GHC.Word.Word32;
: y_a1DW ys_a1DX ->: y_a1DW ys_a1DX ->
GHC.Base.: @ GHC.Word.Word32GHC.Base.: @ GHC.Word.Word32
(case h1_s1YA of wild1_a1Mk { GHC.Word.W32# x#_a1Mm ->(case h1_s1YA of wild1_a1Mk { GHC.Word.W32# x#_a1Mm ->
case h2_s1Yy of wild2_a1Mu { GHC.Word.W32# x#1_a1Mw -> case y_a1DW of wild11_a1My { GHC.Word.W32# y#_a1MA -> GHC.Word.W32#case h2_s1Yy of wild2_a1Mu { GHC.Word.W32# x#1_a1Mw -> case y_a1DW of wild11_a1My { GHC.Word.W32# y#_a1MA -> GHC.Word.W32#
(GHC.Prim.narrow32Word# (GHC.Prim.plusWord#(GHC.Prim.narrow32Word# (GHC.Prim.plusWord#
x#_a1Mm (GHC.Prim.narrow32Word# (GHC.Prim.timesWord# x#1_a1Mw y#_a1MA)))) } } }) (go_s1YC ys_a1DX)x#_a1Mm (GHC.Prim.narrow32Word# (GHC.Prim.timesWord# x#1_a1Mw y#_a1MA)))) } } }) (go_s1YC ys_a1DX)
}; } in go_s1YC}; } in go_s1YC
(GHC.Word.$w$dmenumFromTo2 __word 0 (GHC.Prim.narrow32Word# (GHC.Prim.int2Word# ww_s1X3)))(GHC.Word.$w$dmenumFromTo2 __word 0 (GHC.Prim.narrow32Word# (GHC.Prim.int2Word# ww_s1X3)))
This is the body of the list comprehension. It may seem daunting, but we can look through it piece by piece and find that it is not, after all, so complicated.
From reading the Core for this code, we can see two interesting behaviours.
We are creating a list, then immediately
deconstructing it in the
go_s1YC
loop.
GHC can often spot this pattern of production followed immediately by consumption, and transform it into a loop in which no allocation occurs. This class of transformation is called fusion, because the producer and consumer become fused together. Unfortunately, it is not occurring here.
The repeated unboxing of
h1 and
h2 in the body of the loop is
wasteful.
To address these problems, we make a few tiny changes to
our
doubleHash function.
-- file: BloomFilter/Hash.hs doubleHash :: Hashable a => Int -> a -> [Word32] doubleHash numHashes value = go 0 where go n | n == num = [] | otherwise = h1 + h2 * n : go (n + 1) !h1 = fromIntegral (h `shiftR` 32) .&. maxBound !h2 = fromIntegral h h = hashSalt 0x9150a946c4a8966e value num = fromIntegral numHashes
We have manually fused the
[0..num]
expression and the code that consumes it into a single loop.
We have added strictness annotations to
h1
and
h2. And nothing more. This has turned
a 6-line function into an 8-line function. What effect does
our change have on Core output?
__letrec { $wgo_s1UH :: GHC.Prim.Word# -> [GHC.Word.Word32] [Arity 1 Str: DmdType L] $wgo_s1UH = \ (ww2_s1St :: GHC.Prim.Word#) -> case GHC.Prim.eqWord# ww2_s1St a_s1T1 of wild1_X2m { GHC.Base.False -> GHC.Base.: @ GHC.Word.Word32 (GHC.Word.W32# (GHC.Prim.narrow32Word# (GHC.Prim.plusWord# ipv_s1B2 (GHC.Prim.narrow32Word# (GHC.Prim.timesWord# ipv1_s1AZ ww2_s1St))))) ($wgo_s1UH (GHC.Prim.narrow32Word# (GHC.Prim.plusWord# ww2_s1St __word 1))); GHC.Base.True -> GHC.Base.[] @ GHC.Word.Word32 }; } in $wgo_s1UH __word 0
Our new function has compiled down to a simple counting loop. This is very encouraging, but how does it actually perform?
$
touch WordTest.hs
$
ghc -O2 -prof -auto-all --make WordTest[1 of 1] Compiling Main ( WordTest.hs, WordTest.o ) Linking WordTest ...
$
./WordTest +RTS -p0.304352s to read words 479829 words suggested sizings: Right (4602978,7) 1.516229s to construct filter 1.069305s to query every element ~/src/darcs/book/examples/ch27/examples $ head -20 WordTest.prof total time = 3.68 secs (184 ticks @ 20 ms) total alloc = 2,644,805,536 bytes (excludes profiling overheads) COST CENTRE MODULE %time %alloc doubleHash BloomFilter.Hash 45.1 65.0 indices BloomFilter.Mutable 19.0 16.4 elem BloomFilter 12.5 1.3 insert BloomFilter.Mutable 7.6 0.0 easyList BloomFilter.Easy 4.3 0.3 len BloomFilter 3.3 2.5 hashByteString BloomFilter.Hash 3.3 4.0 main Main 2.7 4.0 hashIO BloomFilter.Hash 2.2 5.5 length BloomFilter.Mutable 0.0 1.0
Our tweak has improved performance by about 11%. This is a good result for such a small change.
[59] Jenkins's hash functions have
much better mixing properties than
some other popular non-cryptographic hash functions that
you might be familiar with, such as FNV and
hashpjw, so we recommend avoiding
them. | http://book.realworldhaskell.org/read/advanced-library-design-building-a-bloom-filter.html | CC-MAIN-2016-18 | en | refinedweb |
j hey i m doing a project
can i create message reader using j2me in nokia phone??
pls give me source code or hint
calculator midlet
calculator midlet give me code calculator midlet in bluetooth application with i wann source code to find out the difference between two dates... (ParseException e) {
e.printStackTrace();
}
}
}
i wann j2me code not java code my dear sirs/friends
J2ME
| Different Size of Font
MIDlet J2ME |
Audio MIDlet J2ME |
Access...
Platform Micro Edition |
MIDlet Lifecycle J2ME
|
jad and properties file in J2ME
| J2ME Hello World Program
| Creating MIDlet Apps for Login in J2ME - MobileApplications
of an image in midlet
when i put this code in midlet it is showing errors.
can...j2me Hi Deepak,
Thank for u earlier suggestion.But i need its date and time when the file was created.But i got solution
Creating Midlet Application For Login in J2ME
Creating MIDlet Application For Login in J2ME
This example show to create the MIDlet application for user login . All
MIDlet applications for the MIDP ( Mobile Information application
j2me application code for mobile tracking system using j2me
Audio MIDlet Example
Audio MIDlet Example
This example illustrates how to play audio songs in your mobile application
by creating a MIDlet. In the application we have created
j2me
j2me how to compile and run j2me program at command prompt
j2me
j2me i need more points about j2me
J2me - MobileApplications
J2me Hi, I would like to know how to send orders linux to a servlet which renvoit answers to a midlet. thank you
J2ME Item State Listener Example
the
ItemStateListener interface in the j2me midlet. The ItemStateListener interface...
J2ME Item State Listener Example
...;class ItemStateListenerMIDlet extends MIDlet{
Tutorial
;
Creating MIDlet Application For Login in J2ME
This example show... variable DATE which is 0.
J2ME Button MIDlet
This example...;
J2ME CheckBox ChoiceGroup MIDlet
This example illustrates how to create - MobileApplications
j2me Hi,
I have developed a midlet application in j2me now i want...,
For more information on J2me visit to :
Thanks Icon MIDlet Example
J2ME Icon MIDlet Example
... element and an array of image element.
In the Icon MIDlet class we are creating...;javax.microedition.midlet.*;
public class SlideImage extends MIDlet{
how to run audio files in net beans using j2me
how to run audio files in net beans using j2me i am running the audioMidlet in net beans. Now where should i place the .wav files inorder to play them
J2ME Crlf Example
J2ME Crlf Example
The given J2ME Midlet, discuss about how to show the messages...;
Source Code of LineBreakExample.java
import Canvas Repaint
J2ME Canvas Repaint
In J2ME repaint is the method of the canvas class, and is used to repaint the
entire canvas class. To define the repaint method in you midlet follow Books
and trends of the J2ME platform, the book uses the source code of several award... of the new Cracking the Code Series, Wireless Programming with J2ME provides a look at the code behind wireless Java applications.
Think of J2ME as a tiny
Radio Button in J2ME
Radio Button in J2ME
In this tutorial you will see the MIDlet Example that is going to
demonstrate, how to create the radio button in J2ME using MIDlet. The radio button
j2me
j2me What is JAD file what is necesary of that
Hi Friend,
Please visit the following link:
JAD File
Thanks
Hi Friend,
Please visit the following link:
JAD File
Thanks Display Size Example
J2ME Display Size Example
In the given J2ME Midlet example, we are going to display the size of the
screen. Like a below given image 1, the midlet will print few items database question
j2me database question **Is there any possibility to install a database into the mobile.
If possible how can i connect it through midlet(j2me)**
pls help me
J2ME Timer MIDlet Example
J2ME Timer MIDlet Example
...;\n");
}
}
in the above source code we...;}
}
}
Download Source Code
Text Example in J2ME
Text Example in J2ME
In J2ME programming language canvas class is used to paint and draw...
in our show text MIDlet Example. We have created a class called CanvasBoxText
error in compiling j2me apllication - Applet
error in compiling j2me apllication hi,
in my j2me application for video,m getting only the audio.i ll send the code.
package src.video... MIDlet implements CommandListener,ItemCommandListener,Runnable,PlayerListener
J2ME - Date Calendar
J2ME sir
i need code of calculator
performing addition, multiplication, division, subtraction
in J2ME
J2ME Read File
of this file by the help of j2me midlet.
For Details you can download the source code and use it on j2me environment...
J2ME Read File
image application
j2me image application i can not get the image in my MIDlet .........please tell me the detailed process for creating immutable image without Canvas project - Java Beginners
j2me project HOW TO CREATE MIDLET WHICH IS A GPRS BASED SOLUTION... SALES DATA FROM THE SERVER.
THIS MIDLET IS FOR THE PROJECT MEDICAL CENTRAL...://
Thanks
Draw Line in J2me
Draw Line in J2me
In this example we are going to show you how to draw a line using J2ME.
Please go through the below given J2ME syntax that includes all the package,
methods
J2ME Record Store MIDlet Example
J2ME Record Store MIDlet Example
... it on the console. In this
example we are using the following code to open, close...;
And the Application is as follows:
Source
J2ME Current Date And Time
J2ME Current Date And Time
This is a simple J2ME form example, that is going to show the current date
and time on the screen. Like core Java J2ME too use the same
J2ME Record Store Example
J2ME Record Store Example
In this Midlet, we are going to read string data and write.... In J2ME a record store consists of a collection of records
and that records remain
j2me -
Rectangle Canvas MIDlet Example
Rectangle Canvas MIDlet Example
... of rectangle in J2ME.
We have created CanvasRectangle class in this example... is as follows:
Source Code of RectangleCanvas.java
Advertisements
If you enjoyed this post then why not add us on Google+? Add us to your Circles | http://roseindia.net/tutorialhelp/comment/86920 | CC-MAIN-2016-18 | en | refinedweb |
Hi.
A few weeks a go a post some code to calculate the day of the week of a particular date. It was just an idea. I had to sit down and code it. Here you have it:
Code:
#include <stdio.h>
char *day[] = {"Friday","Saturday","Sunday","Monday","Tuesday","Wednesday",
"Thursday"};
int day_of_month[] = {31,28,31,30,31,30,31,31,30,31,30,31};
int day_of_week(int, int, int);
int leap (int);
void instructions(void);
int d, m, a;
int main () {
instructions();
do {
printf("\nEnter date -> ");
scanf("%d%d%d",&d, &m, &a);
printf("%d %d %d\n", d, m, a);
printf ("\nThe day of the week is %s \n", day[day_of_week(d,m,a)]);
} while ( (d!=0) && (m!=0) && (a!=0) );
}
int day_of_week(int d, int m, int a) {
int xa, i;
int days_counter = 0;
// Compute days from begining of era to december 31 of year a-1
for (xa = 0; xa < a; xa++)
if (leap(xa))
days_counter += 366;
else days_counter += 365;
// Compute days from months before m
for (i=0; (i+1) < m; i++)
days_counter += day_of_month[i];
// compute the remaining days
days_counter+= d;
// See if a is leap year and if so, see if date is after february
// Add 1 if so.
if ( (m > 2) && (leap(a)))
days_counter += 1;
return ( (int) (days_counter % 7) ); // return the remainder
}
int leap(int x) {
if ( (x % 400) == 0 ) // is leap
return (1);
else
if ( (x % 100) == 0 ) // it's not leap
return (0);
else if ( (x % 4) == 0 )
return (1); // is leap
else return (0); // it's not leap
}
void instructions(void) {
printf("This program computes the day of the week of a particular\n");
printf("date. The date must be entered in the form dd mm yyyy.\n");
printf("It should be separated by spaces. Example October 12 1492,\n");
printf("should be entered as : 12 10 1492 .\n");
printf("1. the program will give an answer even if the date does not exist.\n");
printf("2. the program only works for dates after Christ.\n");
printf("3. try the day you were born!\n");
} | http://cboard.cprogramming.com/c-programming/68585-day-week-america-discovered-printable-thread.html | CC-MAIN-2016-18 | en | refinedweb |
The link-editors provide a number of..
By default, the Solaris OS support library libldstab.so.1 is used by the link-editor to process, and compact, compiler-generated debugging information supplied within input relocatable objects. This default processing is suppressed if you invoke the link-editor with any support libraries specified using the -S option. The default processing of libldstab.so.1 can be required in addition to your support library services. In this case, add libldstab.so.1 explicitly to the list of support libraries that are supplied to the link-editor.. a version of zero, or a value that is greater than the ld-support interface the link-editor supports, the support library is not used.. the audit library calls printf(3C), then the audit library must define a dependency on libc. See Generating a Shared Object Output File. Because the audit library has a unique namespace,.
An audit library can allocate memory using mapmalloc(3MALLOC), as this allocation method can exist with any allocation scheme normally employed by the application.
The rtld-audit interface is enabled by one of two means. Each method implies a scope to the objects that are audited.
Local auditing is enabled through dynamic entries recorded within an object at the time the object was built. The audit libraries that are made available by this method are provided with information in regards to those dynamic objects that are identified for auditing.
Global auditing is enabled using the environment variable LD_AUDIT. Global auditing can also be enabled for an application by combining a local auditing dynamic entry with the -z globalaudit option. The audit libraries that are made available by these methods are provided with information regarding all dynamic objects used by the process.
Either method of invocation consists of a string that contains a colon-separated list of shared objects that are loaded by dlopen(3C). Each object is loaded onto its own audit link-map list. Each object is searched for audit routines using dlsym(3C). Audit routines that are found are called at various stages during the applications execution.
The rtld-audit interface enables multiple audit libraries to be supplied. Audit libraries that expect to be employed in this fashion should not alter the bindings that would normally be returned by the runtime linker. Alteration of these bindings can produce unexpected results from audit libraries that follow..
Local auditing requirements can be established when an object is built using the link-editor options -p or -P. For example, to audit libfoo.so.1, with the audit library audit.so.1, record the requirement at link-edit time using the -p option..
The auditing enabled through this mechanism results in the audit library being passed information regarding all of the applications explicit dependencies. This dependency auditing can also be recorded directly when creating an object by using the link-editor's -P option.
Global auditing requirements can be established by setting the environment variable LD_AUDIT. For example, this environment variable can be used to audit the application main together with all the dependencies of the process, with the audit library audit.so.1..
The auditing enabled through zero indicates that this path should be ignored. An audit library that monitors search paths should return name.
This function modified by the audit library to better identify the object to other rtld-audit interface routines.
The la_objopen() function.
This function presently unused. la_objfilter() is called after la_objopen() for both the filter and filtee.
A return value of zero indicates that this filtee should be ignored. An audit library that monitors the use of filters should return a non-zero value.
This.
The la_pltexit() interface is experimental. See Audit Interface Limitations.
This function is called after any termination code for an object has been executed and prior to the object being unloaded.
uint_t la_objclose(uintptr_t * cookie);
cookie identifies the object, and was obtained from a previous la_objopen(). Any return value is presently ignored.
The following simple example creates an audit library that prints the name of each shared object dependency loaded by the dynamic executable date(1).
A.
The. | http://docs.oracle.com/cd/E19253-01/817-1984/chapter6-1238/index.html | CC-MAIN-2016-18 | en | refinedweb |
Part of twisted.web.script View Source View In Hierarchy
I am an extremely simple dynamic resource; an embedded python script.This will execute a file (usually of the extension '.epy') as Python code, internal to the webserver.
Render me to a web client.Load my file, execute it in a special namespace (with 'request' and '__file__' global vars) and finish the request. Output to the web-page will NOT be handled with print - standard output goes to the log - but with request.write. | http://twistedmatrix.com/documents/11.0.0/api/twisted.web.script.PythonScript.html | CC-MAIN-2016-18 | en | refinedweb |
Introduction
In a previous article, Java Help Files, we discussed how to create a class for displaying help files. Here we will take the same approach to handling exceptions. In some cases, exceptions can be handled quietly in the background without notifying the end user. However, we are concerned with the situations where it makes sense to display error messages.
The code to do this is not very complicated but gets a bit more interesting when we try to format the way our error message is displayed. We will show a few different approaches to doing this. We might have some fun along the way -– taking different methods out for a spin and seeing how they drive.
The Code
Briefly described, this class uses a JOptionPane to display a dialogue box showing the exception type in the title bar and the error message in the body. Error messages created by the Java API are fairly uniform, but there are many occasions when you will use classes created by other programmers –- database drivers for instance. Here you won’t be sure how or if the messages have been formatted. Error strings can end up being very long and may require the insertion of newline characters in order to display properly. We will develop three different methods of doing this, but for the moment have a quick look at the code below.
1:////////////////////////////////////////////////////////
2://ErrorDialogue class
3:////////////////////////////////////////////////////////
4://comments for javadoc below
5:/** class ErrorDialogue
6:* This class is for use in catch clauses. It will display
7:* a dialogue box showing the error message.
8:*/
9://put imports here
10:import javax.swing.*;
11:import java.awt.*;
12:import java.util.*;
13:
14:public class ErrorDialogue{
15://data members
16:private Component window;
17:private final int increment = 30;
18:////////////////////////////////////////////////////////
19://constructors
20:////////////////////////////////////////////////////////
21:public ErrorDialogue(Exception e, Component window) {
22: this.window = window;
23: doDialogue(e);
24:}
25://end constructors
26:////////////////////////////////////////////////////////
27://private functions
28:////////////////////////////////////////////////////////
29:private void doDialogue(Exception e) {
30: Class error = e.getClass();
31: String errname = error.getName();
32: String message=e.getMessage();
33: String messagetwo = e.getMessage();
34: //check if contains newline
35: if (e.getMessage().indexOf(‘n’) == -1) {
36: //find length
37: int length = message.length();
38: //break at intervals
39: if (length > increment){
40: message=quickAndDirty(message);
41: messagetwo=insertNewline(messagetwo);
42: }
43: }
44: JOptionPane.showMessageDialog(window, “Error – ” + message,
45: errname, JOptionPane.WARNING_MESSAGE);
46: JOptionPane.showMessageDialog(window, “Error – ” + messagetwo,
47: errname, JOptionPane.ERROR_MESSAGE);
48:
49:
50:}
51:////////////////////////////////////////////////////////
52:/**
53:* First method of parsing the string
54:*/
55:private String quickAndDirty(String message){
56: //find space closest to midpoint
57: StringBuffer sb= new StringBuffer(message);
58: Character space=new Character(‘ ‘);
59: int strlength = message.length();
60: int midpoint = strlength/2;
61: for(int x= midpoint; x < strlength; x++){
62: if(new Character(sb.charAt(x)).equals(space)){
63: sb.insert(x,”n”);
64: break;
65: }
66: }
67: String newstring = new String(sb);
68: return newstring;
69:}
70://///////////////////////////////////////////////////////
71:/**
72:* second method of parsing the string
73:*/
74:private String insertNewline(String message){
75: String tail = “”;
76: String head = “”;
77: int newstart=29;
78: int breakpoint=0;
79: tail = message.substring(newstart);
80: int length=message.length();
81: head = message.substring(0,newstart);
82: while(length>increment && tail.indexOf(” “)!=-1){
83: //find next space, insert break and concatenate
84: breakpoint = tail.indexOf(” “)+1;
85: head += tail.substring(0,breakpoint);
86: head += “n”;
87: tail=tail.substring(breakpoint);
88: length=tail.length();
89: if (length > increment && tail.indexOf(” “,newstart)!=-1){
90: head+=tail.substring(0,newstart);
91: tail=tail.substring(newstart);
92: }else{
93: head+=tail;
94: message = head;
95: break;
96: }
97: }
98: return message;
99:}
100:////////////////////////////////////////////////////////
101:/**
102:* Third method of parsing the string
103:*/
104:private String insertNewlineToken(String message){
105: StringTokenizer stk = new StringTokenizer(message, ” “, true);
106: String temp = “”;
107: String newstring = “”;
108: int maxlength = increment;
109: while(stk.hasMoreTokens()){
110: temp = stk.nextToken();
111: newstring += temp;
112: //add newline if longer and don’t start with a space
113: if (newstring.length() > maxlength && temp.equals(” “)){
114: newstring += “n”;
115: maxlength = newstring.length() + increment;
116: }
117: }
118: return newstring;
119:}
120:public static void main (String [] args){
121: String message = “Let’s have a really long message here”;
122: message+=” so that we can test our class. “;
123: message+=”Blah blah blah blah …”;
124: Exception e = new Exception(message);
125: new ErrorDialogue(e, null);
126: System.exit(0);
127:}
128:}//end class
{mospagebreak title=General Comments}
The constructor’s first argument (line 21) is an object of the Exception class so this class or any of its subclasses may be passed in. It won’t matter whether you pass in an IOException, an SQLException or any other subtype. The second argument is a Component that will act as the parent window for our dialogue box. Again we have chosen Component because it is the parent object of the various window subclasses.
On line 22 the constructor invokes a method, “doDialogue”, to extract the error message and display it using a JOptionPane. The interesting part of this method is the call to a function to insert a newline character but let’s first look at the output.
Output
Compiling and running the application results in the following output (Image 1):
followed by (Image 2):
This is the same message formatted in two different ways. Now that we’ve seen the output let’s have a look at how it’s done.
{mospagebreak title=Parsing the String}
First Method
The code to create our first dialogue box is found on lines 55-69. Because the String class does not have an “insertAt” method, the error description is converted to a StringBuffer. The space character closest to the midpoint is found and a newline is inserted at that point. Basically the string is cut in half. Have a look at Image 1 above to see what it looks like.
This method is easy to understand and easy to code, hence the method name. Calling it “quickAndDirty” doesn’t quite do justice to this approach; it might well be suitable in other circumstances. Here though, any string of a few hundred characters would create problems. Pretty soon we’d have a dialogue box that stretched across or beyond the screen width.
Second Method
The shortcomings of the first method spawned method number two. The pseudo code for this method could be briefly written as:
get first portion of the string
get tail end
while not end of string
find next most proximate space in tail end
create substring of text up to this point
concatenate this new string with original portion
add on a newline
end while
A reasonable enough starting point but turning this into suitable code didn’t prove easy. Using almost twice as many lines of code as the “quickAndDirty” method, we end up with the dialogue box shown as Image 2.
This method does exactly what is wanted, but the code doesn’t have the clarity of the first. This is important not only for writing the code but for maintaining it. This code works –- but like the driver who refuses to ask for directions, we came the long way round. We grimly stayed behind the wheel and finally got there. There must be a better way.
{mospagebreak title=The Better Way}
This time, before starting out, we pulled over to the side of the road, got out the Java API and checked StringTokenizer for directions.
The first thing to notice is the StringTokenizer constructor that we chose to use (line 105).; it takes three arguments, the String to be tokenized, the character to use as a delimiter and finally a boolean that determines whether or not to return the delimiter as a token. This is ideal because we are not discarding the spaces between words.
The “hasMoreTokens” method is used to control our “while” loop (line 109). Inside the loop we simply reconstruct our string including spaces and add a newline when it exceeds the recommended length.
What could be simpler? Using one more line than the “quickAndDirty” method we have created more functional code without sacrificing clarity.
Larger Issues
At this point it is fairly obvious what the final form of our code should be. Drop the first two methods of parsing text, make a slight change to the “doDialogue” method and call only one method and finally, remove “main”. However, there is a larger lesson to be learned here.
Java is an object-oriented language with a large API. This means that much of the work you need to do may already have been done. Look around and take advantage of existing classes. In this case we found a class ideally suited to our needs –- StringTokenizer –- that performed the job in many fewer lines and made for more intelligible code. Programming with a high level language such as Java is more than just formulating your algorithm and implementing it. While nobody can know all of the classes available and all of their methods, a general knowledge of the language’s capability can be enormously helpful and a real timesaver. Do this and you won’t re-invent classes that already exist.
While you can create functional code without asking for directions, don’t. Roll down the window and ask ‘cause you’ll get there faster and more easily. | http://www.devshed.com/c/a/java/exceptional-class/1/ | CC-MAIN-2016-18 | en | refinedweb |
- Identify the unmappable class
- Create an equivalent class that is mappable
- Create an XmlAdapter to convert between unmappable and mappable objects
- Specify the XmlAdapter
1. Identify the Unmappable Class
In this example the unmappable class is java.util.Map.
2. Create an Equivalent Class that is Mappable
Map could be represented by an object (MyMapType), that contained a list of objects with two properties: key and value (MyMapEntryType).
import java.util.ArrayList; import java.util.List; public class MyMapType { public List<MyMapEntryType> entry = new ArrayList<MyMapEntryType>(); }
and
import javax.xml.bind.annotation.XmlValue; import javax.xml.bind.annotation.XmlAttribute; public class MyMapEntryType { @XmlAttribute public Integer key; @XmlValue public String value; }
3. Create an XmlAdapter to Convert Between Unmappable and Mappable Objects
The XmlAdapter<ValueType, BoundType> class is responsible for converting between instances of the unmappable and mappable classes. Most people get confused between the value and bound types. The value type is the mappable class, and the bound type is the unmappable class.
import java.util.HashMap; import java.util.Map; import java.util.Map.Entry; import javax.xml.bind.annotation.adapters.XmlAdapter; public final class MyMapAdapter extends XmlAdapter<MyMapType,Map<Integer, String>> { @Override public MyMapType marshal(Map<Integer, String> arg0) throws Exception { MyMapType myMapType = new MyMapType(); for(Entry<Integer, String><Integer, String> unmarshal(MyMapType arg0) throws Exception { HashMap<Integer, String> hashMap = new HashMap<Integer, String>(); for(MyMapEntryType myEntryType : arg0.entry) { hashMap.put(myEntryType.key, myEntryType.value); } return hashMap; } }
4. Specify the XmlAdapter
The @XmlJavaTypeAdapter annotation is used to specify the use of the XmlAdapter. Below it is specified on the map field on the Foo class. Now during marshal/unmarshal operations the instance of Map is treated as an instance of MyHashMapType.
import java.util.HashMap; import java.util.Map; import javax.xml.bind.annotation.XmlAttribute; import javax.xml.bind.annotation.XmlAccessType; import javax.xml.bind.annotation.XmlAccessorType; import javax.xml.bind.annotation.XmlRootElement; import javax.xml.bind.annotation.adapters.XmlJavaTypeAdapter; @XmlRootElement @XmlAccessorType(XmlAccessType.FIELD) public class Foo { @XmlJavaTypeAdapter(MyMapAdapter.class) Map<Integer, String> map = new HashMap<Integer, String>(); public Map getMap() { return map; } public void setMap(Map map) { this.map = map; } }
In upcoming posts I'll describe extensions in the EclipseLink JAXB (MOXy) implementation that reduce the dependency on XmlAdapter.
Further Reading
If you enjoyed this post, then you may also be interested in:
- JAXB and Immutable Objects
In this post an XmlAdapter is used map a domain object with a mult-argument constructor and fields marked final. This example also demonstrates how the @XmlJavaTypeAdapter annotation can be used at the type level.
- @XmlTransformation - Going Beyond XmlAdapter
In this post an EclipseLink MOXy extension (@XmlTransformation) is used instead of @XmlAdapter to provide a little more flexibility for an interesting use case.
Nicely explained!
But I have even more subtle problem. I have complex data structures such as Vector > and HashMap> i.e. vector inside vector and vector inside hashmaps. Could you please help me on how I should marshall/unmarshall?
The process would be very similar.
Vector Inside a HashMap
If we consider you case with a Vector inside a HashMap. You would change the MyMapEntryType so that the value property was a Vector.
Vector Inside a Vector
For this use case your intermediate object would be a Vector of Value objects, each Value Object would have a value property representing the nested Vector.
Great post Blaise!
I'm using XMLAdapter with a List, but I'm having a problem. Marshalling the entities of the List, the < and > tags are replaced by < and >
A Customer as some attibutes (id, name, ...) and a DocumentSet (basically a List). This DocumentSet is extendind the XMLAdapter where I do this;
public class DocumentSetAdapter extends XmlAdapter {
public String marshal(DocumentSet val) throws Exception {
java.io.StringWriter sb = new java.io.StringWriter();
for (Document d : val.getDocument()) {
if (d instanceof IdentificationCard) {
sb.append(((IdentificationCard) d).toXml(false));
} else if (d instanceof DrivingLicense) {
sb.append(((DrivingLicense) d).toXml(false));
} else if (d instanceof Passport) {
sb.append(((Passport) d).toXml(false));
}
}
return sb.toString();
}
IdentificationCard and DrivingLicense extend Document.
The toXML method does the marshalling, and is the same in every class.
The one is called first is the Customer and is like this:
public String toXml(boolean header) throws javax.xml.bind.JAXBException {
java.io.StringWriter sw = new java.io.StringWriter();
javax.xml.bind.JAXBContext jc = javax.xml.bind.JAXBContext.newInstance(Customer.class);
javax.xml.bind.Marshaller m = jc.createMarshaller();
m.setProperty(javax.xml.bind.Marshaller.JAXB_FORMATTED_OUTPUT, true);
m.marshal(this, sw);
return sw.toString();
}
The output xml is:
customer1
Carlos
EMPLOYEE
premium
active
2010-11-12T16:20:48.752Z
<identificationCard>
<id>1</id>
<description>BI</description>
<issueDate>2010-11-12T16:20:48.759Z</issueDate>
<expireDate>2010-11-12T16:20:48.759Z</expireDate>
<birthPlace>Aveiro</birthPlace>
<identificationNumber>123131231</identificationNumber>
</identificationCard><drivingLicense>
<id>2</id>
<description>NIF</description>
<issueDate>2010-11-12T16:20:48.759Z</issueDate>
<expireDate>2010-11-12T16:20:48.759Z</expireDate>
<licenseNumber>9654477</licenseNumber>
<vehicleCat>A</vehicleCat>
</drivingLicense>
and the correct would be:
customer1
Carlos
EMPLOYEE
premium
active
2010-11-12T16:26:09.891Z
1
BI
2010-11-12T16:26:09.893Z
2010-11-12T16:26:09.893Z
Aveiro
123131231
2
NIF
2010-11-12T16:26:09.893Z
2010-11-12T16:26:09.893Z
9654477
A
Could you help me with this?
Best regards,
Carlos
Hi Carlos,
When using XmlAdapter you are converting from an object that JAXB cannot map to one that it can. In your example you are converting to a String that contains XML markup. This is causing JAXB to escape some of the characters. Instead you should convert to an object that would produce the desired XML.
Some of the details of your message were lost due to the XML escaping aspect of this blog. Feel free to message me from the blog with about this issue:
-
-Blaise
Very nice article!
I was wondering, if my JAXB annotated classes are generated based on a given XSD file, how can I specify my adapters, knowing that every time I changed the XSD file, all classes will also be regenerated?
Hi Bogdan,
The XmlAdapter mechanism is used when starting with Java classes and you have an unmappable class. When you start from an XML schema, the JAXB schema compiler tool (XJC) generates compatible classes, no XmlAdapter is necessary.
-Blaise
Hi Carlos,
Thank you for sending me the sample code. I have just emailed you an example that demonstrates how XmlAdapter could be used.
I do not recommend the approach that you are attempting where the adapted type is a String that contains XML markup. You may be able to make this work with streams, but it will not work when dealing with DOM/SAX/StAX inputs and outputs.
-Blaise
Hi Blaise,
I'm encoutering an issue, where the objects generated from JAXB (through Maven) having no no-arg constructors on Windows couldn't work when they're deployed in server on Linux during creation of JAXBContext - complaining of no no-arg public constructor. Is that something XmlAdapter could resolve?
Thanks,
- John
Hi John,
It is common to use an XmlAdapter for classes that do not have a no-arg constructor. However, JAXB should not be creating classes without a no-arg constructor. Feel free to send me more details about your setup through the "Contact Me" page on this blog.
-Blaise
The MyMapAdapter does not compile
imcompatible types
found : java.lang.Object
required: java.util.Map.Entry
for (Entry entry : arg0.entrySet()) {
The compilation issue should be fixed now. Looks like I forgot to escape the '<' characters in my code samples. Thanks for identifying the issue.
If you do *not* use an Adapter, this works:
@XmlRootElement
@XmlAccessorType(XmlAccessType.FIELD)
public class Simple {
private Map map = new HashMap();
}
This does not:
@XmlRootElement
public class Simple {
@XmlElement
private Map map = new HashMap();
}
Why is this?
rmuller,
That appears to be a bug in the JAXB reference implementation (Metro). This issue does not occur when you use EclipseLink JAXB (MOXy).
-Blaise
Thank you. These blogs are the best tutorial for Moxy.
Now what I'd really like to do is convert the map "directly", i.e., eliminate the key and value elements, by using the keys for the element names, and the values for the element contents. I've tried returning a DynamicEntity in the marshall methodd, but not successfully. And I don't know of an Annotation that lets me specify a field as providing the element name. Am I missing something obvious?
Class MyMapAdapter won't compile. Here is the corrected version:
import java.util.HashMap;
import java.util.Map;
import java.util.Map.Entry;
import javax.xml.bind.annotation.adapters.XmlAdapter;
public final class MyMapAdapter extends
XmlAdapter> {
@Override
public MyMapType marshal(Map arg0) throws Exception {
MyMapType myMapType = new MyMapType();
for (Entry unmarshal(MyMapType arg0) throws Exception {
HashMap hashMap = new HashMap();
for (MyMapEntryType myEntryType : arg0.entry) {
hashMap.put(myEntryType.key, myEntryType.value);
}
return hashMap;
}
}
runnable Demo:
import javax.xml.bind.JAXBContext;
import javax.xml.bind.JAXBException;
import javax.xml.bind.Marshaller;
public class Demo {
public static void main(String[] args) {
Foo foo = new Foo();
foo.getMap().put(1, "one");
foo.getMap().put(2, "two");
foo.getMap().put(3, "three");
try {
JAXBContext ctx = JAXBContext.newInstance(Foo.class);
Marshaller m = ctx.createMarshaller();
m.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, true);
m.marshal(foo, System.out);
} catch (JAXBException e) {
e.printStackTrace();
}
}
}
Thanks, your article was helpful.
Hi
Thank you for these wonderful tutorials.
I have an issue, that is
I'm trying to parse the XML response into POJO(may not be reversely).
For this I'm using Jaxb2Marshaller with MarshallingHttpMessageConverter through Spring RestTemplate [postForObject].
My setup is fine to map the child with its direct parent element using like @XmlElementWrapper.
But I wanna to map elements like by its name, That is I need only some of the elements from the whole xml. Is this possible with xmladapter or by somthing else. pls let me know ASAP. I hope you understand my case.
Thanks
Prakash.
Hi Prakash,
The following may be of help:
- XPath Based Mapping - Geocode Example
Feel free to contact me with additional details:
- Contact Me
-Blaise
Hi Blaise,
Its an interesting pattern, thanks for sharing. I would like to ask you if jaxb can be used to map a hashmap like
Element key=val key=val
where Element would be root element and key,vals are attributes. Can names of XmlAttribute be dynamic? Even if you can point me in the right direction it would be of great help. Cheers!
You can use @XmlAnyAttribute for this use case. For an example see:
-
-Blaise
Hi Carlo,
Thank you for sending the corrected code. I apologize in the delay for posting it.
-Blaise
Hi, Blaise,
Nice article.
I am facing an interesting problem. I need to unmarshall following into an ArrayList of Customer objects.
The problem is that the top level object is ArrayList which is not mappable.
I created a class extending ArrayList and annotate it like this.
@XmlRootElement(name="collection")
public class MyArrayList extends ArrayList {
}
But this does not help. Is there a way to use XmlAdapter to take care of this problem?
Hi, Blaise,
I think I didn't ask this clear in my first comment.
The problem is that the entities in the collection is of generic type. So, it could be Customer or it could be Vendor.
Is there a way to achieve this?
Thanks,
Xianguan
I have the following collection class. Note the XmlElement(name="yes"), how can I not specify the element name and let the type decide on the name of the element?
@XmlRootElement(name="collection")
public class MyCollections {
private List items;
@XmlElement(name="yes")
public List getItems() {
return items;
}
public void setItems(List items) {
this.items = items;
}
}
Hi Xianguan,
If you know the types in advance I would recommend using the @XmlElements annotation. This corresponds to the choice concept in XML schema:
- JAXB and XSD Choice: @XmlElements
However if you don't know the types if advance you can leverage @XmlAnyElement(lax=true) for this:
- Using @XmlAnyElement to Build a Generic Message
-Blaise
@XmlAnyElement(lax=true) worked beautifully for me. Thank you so much!
From what I can tell, java.util.Calendar loses timezone information when it is marshalled/unmarshalled to XML. In particular, it loses timezone information because the timezone is expressed as an offset from GMT like this: "-07:00" rather than the more proper ID form "America/Los_Angeles". I can envision a solution to this problem using XMLAdapter and the creation of a MyCalendar class, but wanted to know if this is the best way to tackle this issue.
Below is the code I used to properly serialize a java.util.Calendar along with its timezone. This whole problem brings up an interesting question: Does a calendar represent an instant in time (i.e. millis since epoch in GMT), or does it represent that instant in time at a particular location (a timezone).
@XmlAccessorType(XmlAccessType.NONE)
public class SerializableCalendar {
@XmlElement
private Calendar calendar = null;
@XmlElement
private String tzID = null;
public Calendar getCalendar() {
return calendar;
}
public void setCalendar(Calendar calendar) {
this.calendar = calendar;
}
public String getTzID() {
return tzID;
}
public void setTzID(String tzID) {
this.tzID = tzID;
}
}
public final class SerializableCalendarAdapter extends XmlAdapter {
@Override
public Calendar unmarshal(SerializableCalendar sc) throws Exception {
Calendar c = sc.getCalendar();
c.setTimeZone(TimeZone.getTimeZone(sc.getTzID()));
return c;
}
@Override
public SerializableCalendar marshal(Calendar c) throws Exception {
SerializableCalendar sc = new SerializableCalendar();
sc.setCalendar(c);
sc.setTzID(c.getTimeZone().getID());
return sc;
}
}
Hi Ken,
JAXB marshals time zone information based on the XML schema representation of 'Z' for UTC and '+hh:mm' or '-hh:mm' for offsets.
xsd:time formats:
hh:mm:ssZ
hh:mm:ss+hh:mm
hh:mm:ss-hh:mm
An XmlAdapter is a reasonable way to provide alternate representations.
-Blaise
Blaise and co: this was really helpful, thank you all (Blaise especially.)
One quick note: it's interesting to see how JAXB treats annotations in the MyMapType wrapper class. Maybe say a word or two about that?)
Gr8 article blaise,
I am having trouble to figure out how to tackle above issue?
I want to create a class for xml file containing element with a pipe seperated data field with new line character to seperate multiple record.
For eg:
1234 |HERNANDEZ MICHEAL E | 9195672232 | 01/10/2011 08:00:00|Y|Y222|1|ETR|Lightning \n
2345 |JONATHAN SILOH|9019876534| 01/10/2011 08:00:00|Y|Y297|2|ETR|Overload \n
How could I track this element information efficiently using JAXB API?
Thank You,
Rushikey
Hi Rushikey,
You could use an XmlAdapter for this use case. The bound type (unmappable object) will be whatever you will convert that value to, and the value type (mappable object) will be java.lang.String. Then in the adapter you will need to write the code to convert to/from the piped data.
-Blaise
Blaise,
Very good presentation and very easy to understand. Lot of thanks for spending your time to help us.
Thanks Blaise!!
I was trying to unmarshal Clob since yesterday. This really helped.
Thanks,
Sid
Thank you Blaise as well.
I am interested in the order of the items inside a list but I see that the marshalling/unmarshalling process does not respect the original order of the items.
For example if I construct an object list with items a, b, c and I change the order to c, b, a the exported XML will have the original order.
How I will make sure that the order of the items of my list in the exported XML will be the same with the order of the Java list object (and vice versa)?
Do I have to use adapters for this?
Thank you in advance.
===
Apartment for rent in Litochoro -->
===
Hi Klearchos,
Assuming that you are representing your collection using the java.util.List type that maintains order, the marshalled XML should match this order.
Could you send me more details via my "Contact Me" page?
-Blaise
thank you !
it'll be perfect if we could see the xsd file.
Because I'm doin' all this, but something must be wrong in my xsd and I thought I could look at yours in the example
Hi,
I'm confused on why are you looking for the XML schema. Are you looking to generate this model? This example is specifically aimed at starting from a Java model.
-Blaise
I thought as specified in javadoc
xmladapter only works with an xsd file, but I might have misunderstood..
It may be step 2 from that link () that is causing some confusion. What that step really means is you need to decide what you want the XML to look like, and then you need to create value objects that correspond to the desired XML format.
-Blaise
Hi,
Is there any way to declare required and nillable as if @XmlElementWrapper(required = true, nillable=true) Collection types;
Hi Jin,
Not sure I understand your question. @XmlElementWrapper does allow you to specify required and nillable for Collection types. The required property corresponds to schema generation. The nillable aspect in addition to affected schema generation will cause the wrapper element to be written with the xsi:nil="true" attribute for null collections. Without this setting the wrapper element is not marshalled.
-Blaise
I want to implement JAXB for Test class,i find that if variables are final,is it not possible to inject JAXB? Will XmlAdapter help in this case?
public class Test implements Serializable {
private final int limit;
private Map results;
private final Sub sub;
@XmlElementWrapper(name = "item") //tried with this also,but not successfull
@XmlAnyElement
private final List item;
}
public class Item implements Serializable {
private Base base;// Interface
public Item(final Base base) {
this.base = base;
}
}
The following may help:
- JAXB and Immutable Objects
I am also aware of this question on Stack Overflow. It is probably easiest to tackle in on that forum:
- final property to bind JAXB
-Blaise
Based off my (admittedly short) research into XMLAdapter, it seems the annotation used to identify the adapter cannot be parametrized. Is this truly the case? It seems onerous to define an adapter class for each type of parameter combinations used with the Map class. If this is the case, do you have any thoughts on why this limitation exists? I am an not well versed enough on the inner workings of Java to guess at the Apache folks' reasoning.
Hi Matthew,
One reason for the limitation is that the Java language does not allow you to specify a parameterized type as a value on an annotation.
You may find the following post helpful:
- Java Tip of the Day: Generic JAXB Map XmlAdapter
BTW - Apache did not participate in the development of the JAXB (JSR-222) specification. There were representatives from: Sun, Oracle (myself), IBM, BEA, SAP, and quite a few other companies.
-Blaise
Hi Blaise,
Above you created XMLAdapter using HashMap can we do samething using ArrayList.
Thanks in Advance
Hi,
You can use an XmlAdapter with an ArrayList property, but the XmlAdapter will be applied to each item in the collection instead of the collection as a whole.
-Blaise
Hello Blaise,
Can you please tell me what data type should I mention in my XSD file while using adapters.
Will it be the user defined java/util/Map equivalent class or something else.
Thanks
Hello,
XmlAdapters are mainly used when starting from Java objects. Below is an example that demonstrates how they can be applied when starting from an XML schema.
- XML Schema to Java - Generating XmlAdapters
-Blaise
Hi Blaise,
can we use an Adapter Class for sql Connection interface ? As we know JAXB can not handle Interface.
- Gopi
Hi Gopi,
Yes you can use this approach for handling interfaces. Below is another article you may find interesting about supporting interfaces in JAXB.
- JAXB and Interface Fronted Models
-Blaise
Hello Blaise,
Thank you for your valuable blog posts !
On the subject of XmlAdapter, I was wondering if you could have a look at my SO question ?
I'd like to have an elegant implementation that is along the lines of JAXB's philosophy.
Thank you in advance !
-C
I have posted an answer on your Stack Overflow question:
-
You can find more information in the following link:
- Mixing Nesting and References with JAXB's XmlAdapter
Hi Blaise,
Is it possible to use an external value in the XmlAdapter? I have the following pojo
class Task{
int id;
Timezone timezone;
@XmlJavaTypeAdapter(type = DateTime.class, value = DateTimeAdapter.class)
DateTime startDate;
....
}
And the XmlAdapter
public class DateTimeAdapter extends XmlAdapter {
private static final DateTimeFormatter dateFormat = DateTimeFormat.forPattern("yyyy-MM-dd'T'HH:mm:ss'Z'");
public DateTime unmarshal(String dateString) throws Exception {
return (StringUtils.isNotEmpty(dateString) ? dateFormat.withZoneUTC().parseDateTime(dateString) : null);
}
public String marshal(DateTime dateTime) throws Exception {
return (dateTime != null ? dateFormat.print(dateTime) : null);
}
}
I would like to use the timezone set in the same pojo when marshalling and unmarshalling the DateTime. Can I somehow pass the timezone to the marshaller?
Hi Banu,
By default a JAXB impl will create a new instance of the XmlAdapter each time it is used. If you want it to be stateful you can set a configured instance via the setAdapter methods on Marshaller/Unmarshaller.
-Blaise | http://blog.bdoughan.com/2010/07/xmladapter-jaxbs-secret-weapon.html | CC-MAIN-2016-18 | en | refinedweb |
Preprocessing data is an often overlooked key step in Machine Learning. In fact - it's as important as the shiny model you want to fit with it.
Garbage in - garbage out.
You can have the best model crafted for any sort of problem - if you feed it garbage, it'll spew out garbage. It's worth noting that "garbage" doesn't refer to random data. It's a harsh label we attach to any data that doesn't allow the model to do its best - some more so than other. That being said - the same data can be bad for one model, but great for another. Generally, various Machine Learning models don't generalize as well on data with high scale variance, so you'll typically want to iron it out before feeding it into a model.
Normalization and Standardization are two techniques commonly used during Data Preprocessing to adjust the features to a common scale.
In this guide, we'll dive into what Feature Scaling is and scale the features of a dataset to a more fitting scale. Then, we'll train a
SGDRegressor model on the original and scaled data to check whether it had much effect on this specific dataset.
Scaling or Feature Scaling is the process of changinng the scale of certain features to a common one. This is typically achieved through normalization and standardization (scaling techniques).
$$
x' = \frac{x-x{min}}{x{max} - x_{min}}
$$
$$
x' = \frac{x-\mu}{\sigma}
$$ A normal distribution with these values is called a standard normal distribution. It's worth noting that standardizing data doesn't guarantee that it'll be within the [0, 1] range. It most likely won't be - which can be a problem for certain algorithms that expect this range. To perform standardization, Scikit-Learn provides us with the
StandardScaler class.
Normalization is also known as Min-Max Scaling and Scikit-Learn provides the
MinMaxScaler for this purpose. On the other hand, it also provides a
Normalizer, which can make things a bit confusing.
Note: The
Normalizer class doesn't perform the same scaling as
MinMaxScaler.
Normalizer works on rows, not features, and it scales them independently.
Feature Scaling doesn't guarantee better model performance for all models.
For instance, Feature Scaling doesn't do much if the scale doesn't matter. For K-Means Clustering, the Euclidean distance is important, so Feature Scaling makes a huge impact. It also makes a huge impact for any algorithms that rely on gradients, such as linear models that are fitted by minimizing loss with Gradient Descent.
Principal Component Analysis (PCA) also suffers from data that isn't scaled properly.
In the case of Scikit-Learn - you won't see any tangible difference with a LinearRegression, but will see a substantial difference with a
SGDRegressor, because a
SGDRegressor, which is also a linear model, depends on Stochastic Gradient Descent to fit the parameters.
A tree-based model won't suffer from unscaled data, because scale doesn't affect them at all, but if you perform Gradient Boosting on Classifiers, the scale does affect learning.
We'll be working with the Ames Housing Dataset which contains 79 features regarding houses sold in Ames, Iowa, as well as their sale price. This is a great dataset for basic and advanced regression training, since there are a lot of features to tweak and fiddle with, which ultimately usually affect the sales price in some way or the other.
If you'd like to read our series of articles on Deep Learning with Keras, which produces a Deep Learning model to predict these prices more accurarely, read our Deep Learning in Python with Keras series.
Let's import the data and take a look at some of the features we'll be using:
import pandas as pd import matplotlib.pyplot as plt # Load the Dataset df = pd.read_csv('AmesHousing.csv') # Single out a couple of predictor variables and labels ('SalePrice' is our target label set) x = df[['Gr Liv Area', 'Overall Qual']].values y = df['SalePrice'].values fig, ax = plt.subplots(ncols=2, figsize=(12, 4)) ax[0].scatter(x[:,0], y) ax[1].scatter(x[:,1], y) plt.show()
There's a clear strong positive correlation between the "Gr Liv Area" feature and the "SalePrice" feature - with only a couple of outliers. There's also a strong positive correlation between the "Overall Qual" feature and the "SalePrice": Though these are on a much different scale - the "Gr Liv Area" spans up to ~5000 (measured in square feet), while the "Overall Qual" feature spans up to 10 (discrete categories of quality). If we were to plot these two on the same axes, we wouldn't be able to tell much about the "Overall Qual" feature:
fig, ax = plt.subplots(figsize=(12, 4)) ax.scatter(x[:,0], y) ax.scatter(x[:,1], y)
Additionally, if we were to plot their distributions, we wouldn't have much luck either:
fig, ax = plt.subplots(figsize=(12, 4)) ax.hist(x[:,0]) ax.hist(x[:,1])
The scale of these features is so different that we can't really make much out by plotting them together. This is where feature scaling kicks in.
The
StandardScaler class is used to transform the data by standardizing it. Let's import it and scale the data via its
fit_transform() method:
import pandas as pd import matplotlib.pyplot as plt # Import StandardScaler from sklearn.preprocessing import StandardScaler fig, ax = plt.subplots(figsize=(12, 4)) scaler = StandardScaler() x_std = scaler.fit_transform(x) ax.hist(x_std[:,0]) ax.hist(x_std[:,1])
Note: We're using
fit_transform() on the entirety of the dataset here to demonstrate the usage of the
StandardScaler class and visualize its effects. When building a model or pipeline, like we will shortly - you shouldn't
fit_transform() the entirety of the dataset, but rather, just
fit() the training data, and
transform() the testing data.
Running this piece of code will calculate the μ and σ parameters - this process is known as fitting the data, and then transform it so that these values correspond to 1 and 0 respectively.
When we plot the distributions of these features now, we'll be greeted with a much more manageable plot:
If we were to plot these through Scatter Plots yet again, we'd perhaps more clearly see the effects of the standarization:
fig, ax = plt.subplots(figsize=(12, 4)) scaler = StandardScaler() x_std = scaler.fit_transform(x) ax.scatter(x_std[:,0], y) ax.scatter(x_std[:,1], y)
To normalize features, we use the
MinMaxScaler class. It works in much the same way as
StandardScaler, but uses a fundementally different approach to scaling the data:
fig, ax = plt.subplots(figsize=(12, 4)) scaler = MinMaxScaler() x_minmax = scaler.fit_transform(x) ax.hist(x_minmax [:,0]) ax.hist(x_minmax [:,1])
They are normalized in the range of [0, 1]. If we were to plot the distributions again, we'd be greeted with:
The skewness of the distribution is preserved, unlike with standardization which makes them overlap much more. Though, if we were to plot the data through Scatter Plots again:
fig, ax = plt.subplots(figsize=(12, 4)) scaler = MinMaxScaler() x_minmax = scaler.fit_transform(x) ax.scatter(x_minmax [:,0], y) ax.scatter(x_minmax [:,1], y)
We'd be able to see the strong positive correlation between both of these with the "SalePrice" with the feature, but the "Overall Qual" feature awkwardly overextends to the right, because the outliers of the "Gr Liv Area" feature forced the majority of its distribution to trail on the left-hand side.
Both normalization and standardization are sensitive to outliers - it's enough for the dataset to have a single outlier that's way out there to make things look really weird. Let's add a synthetic entry to the "Gr Liv Area" feature to see how it affects the scaling process:
fig, ax = plt.subplots(figsize=(12, 4)) scaler = MinMaxScaler() x_minmax = scaler.fit_transform(x) ax.scatter(x_minmax [:,0], y)
The single outlier, on the far right of the plot has really affected the new distribution. All of the data, except for the outlier is located in the first two quartiles:
fig, ax = plt.subplots(figsize=(12, 4)) scaler = MinMaxScaler() x_minmax = scaler.fit_transform(x) ax.hist(x_minmax [:,0])
Finally, let's go ahead and train a model with and without scaling features beforehand. When working on Machine Learning projects - we typically have a pipeline for the data before it arrives at the model we're fitting.
We'll be using the
Pipeline class which lets us minimize and, to a degree, automate this process, even though we have just two steps - scaling the data, and fitting a model:
from sklearn.model_selection import train_test_split from sklearn.pipeline import Pipeline from sklearn.linear_model import SGDRegressor from sklearn.preprocessing import StandardScaler from sklearn.preprocessing import MinMaxScaler from sklearn.metrics import mean_absolute_error import sklearn.metrics as metrics import pandas as pd import numpy as np import matplotlib.pyplot as plt # Import Data df = pd.read_csv('AmesHousing.csv') x = df[['Gr Liv Area', 'Overall Qual']].values y = df['SalePrice'].values # Split into a training and testing set X_train, X_test, Y_train, Y_test = train_test_split(x, y) # Define the pipeline for scaling and model fitting pipeline = Pipeline([ ("MinMax Scaling", MinMaxScaler()), ("SGD Regression", SGDRegressor()) ]) # Scale the data and fit the model pipeline.fit(X_train, Y_train) # Evaluate the model Y_pred = pipeline.predict(X_test) print('Mean Absolute Error: ', mean_absolute_error(Y_pred, Y_test)) print('Score', pipeline.score(X_test, Y_test))
This results in:
Mean Absolute Error: 27614.031131858766 Score 0.7536086980531018
The mean absolute error is ~27000, and the accuracy score is ~75%. This means that on average, our model misses the price by $27000, which doesn't sound that bad, although, it could be improved beyond this. Most notably, the type of model we used is a bit too rigid and we haven't fed many features in so these two are most definitely the places that can be improved. Though - let's not lose focus of what we're interested in. How does this model perform without Feature Scaling? Let's modify the pipeline to skip the scaling step:
pipeline = Pipeline([ ("SGD Regression", SGDRegressor()) ])
What happens might surprise you:
Mean Absolute Error: 1260383513716205.8 Score -2.772781517117743e+20
We've gone from ~75% accuracy to ~-3% accuracy just by skipping to scale our features. Any learning algorithm that depends on the scale of features will typically see major benefits from Feature Scaling. Those that don't, won't see much of a difference.
For instance, if we train a
LinearRegression on this same data, with and without scaling, we'll see unremarkable results on the behalf of the scaling, and decent results on behalf of the model itself:
pipeline1 = Pipeline([ ("Linear Regression", LinearRegression()) ]) pipeline2 = Pipeline([ ("Scaling", StandardScaler()), ("Linear Regression", LinearRegression()) ]) pipeline1.fit(X_train, Y_train) pipeline2.fit(X_train, Y_train) Y_pred1 = pipeline1.predict(X_test) Y_pred2 = pipeline2.predict(X_test) print('Pipeline 1 Mean Absolute Error: ', mean_absolute_error(Y_pred1, Y_test)) print('Pipeline 1 Score', pipeline1.score(X_test, Y_test)) print('Pipeline 2 Mean Absolute Error: ', mean_absolute_error(Y_pred2, Y_test)) print('Pipeline 2 Score', pipeline2.score(X_test, Y_test))
Pipeline 1 Mean Absolute Error: 27706.61376199076 Pipeline 1 Score 0.7641840816646945 Pipeline 2 Mean Absolute Error: 27706.613761990764 Pipeline 2 Score 0.7641840816646945
Feature Scaling is the process of scaling the values of features to a more managable scale. You'll typically perform it before feeding these features into algorithms that are affected by scale, during the preprocessing phase.
In this guide, we've taken a look at what Feature Scaling is and how to perform it in Python with Scikit-Learn, using
StandardScaler to perform standardization and
MinMaxScaler to perform normalization. We've also taken a look at how outliers affect these processes and the difference between a scale-sensitive model being trained with and without Feature Scaling. | https://www.codevelop.art/feature-scaling-data-with-scikit-learn-for-machine-learning-in-python.html | CC-MAIN-2022-40 | en | refinedweb |
smullis has asked for the wisdom of the Perl Monks concerning the following question:
Hi All,
It's been years since I posted here... I've been away with the fairies in Python and Ruby land for so long I've forgotten almost everything I ever knew about Perl OO...
I have no doubt that the following question is extremely basic..
I want to extend a class for a Perl API with one additional method.
I have the method but I don't want to edit the core API files. I would like to simply add the method in my script.So, to test this: filename: SPEAK/ITALIAN.pm
package SPEAK::ITALIAN; use strict; use warnings; use Exporter; use vars qw(@ISA @EXPORT); @ISA = qw( Exporter ); @EXPORT = qw(hello); use constant HELLO => "Hello"; use constant GOODBYE => "Goodbye"; sub hello { print HELLO . " in Italiano\n"; } return 1;
Now, in my script (in this case "test.pl") I simply want to add a method to the SPEAK::ITALIAN class with all constants and other good stuff in the correct namespaces and so onfilename: test.pl
Running this gives:Running this gives:#!/usr/bin/perl use strict; use warnings; use SPEAK::ITALIAN; sub goodbye{ print GOODBYE . " in Italiano\n"; } hello(); goodbye();
Hello in Italiano! GOODBYE in Italiano!
As you've noticed, GOODBYE is incorrect. It's not picking up the constant from the SPEAK::ITALIAN module.
I know I'm missing something very basic. Do I need "use base qw(SPEAK::ITALIAN)"?
Any pointers?
Thanks in advance and apologies for the awfulness of the question...
S | https://www.perlmonks.org/index.pl/?node_id=821583;displaytype=print | CC-MAIN-2022-40 | en | refinedweb |
Starting a New Django Projectdjangopythonweb development
Introduction #
Occasionally I get to start over with a new Django project. Usually this is just some side project: very occasionally I get to build a greenfield project for someone else (protip for new developers: 99.99% of your career you'll be inheriting someone else's project, for better or worse).
If it's just a quick throwaway, for a tutorial or proof-of-concept, I just type
django-admin.py startproject and go with all the defaults. I won't need anything more than Sqlite and I definitely won't need Docker. Maybe a virtualenv and that's that.
But say it's a serious project that is likely to have legs. What does a "good" project skeleton look like?
Cookiecutters #
At this point some people will recommend using a cookiecutter, and the best supported and maintained right now is Daniel Roy Greenfeld's Django cookiecutter. If you have never built a large Django project before you could do far worse than use this. It comes with some good defaults. Personally I find it a little too large, and it has a lot of artefacts I don't need, but I still use it as a reference for current thinking about best practices in Django.
I also do not maintain my own cookiecutter. I've tried a couple times, but they're a pain to maintain. You want to keep adding new things to the cookiecutter to reflect your learning in your Django projects, and you also want to keep up with latest changes in the ecosystem - for example, a best-of-breed library falls out of favour in place of the next hot thing. Over time the cookiecutter drifts away from your latest projects and thinking.
I would probably keep a cookiecutter if a) there were other maintainers who could help keep it up to date or b) I was running an agency where I make new projects every other week and a cookiecutter saves perhaps days of work each time, and there is a need to maintain a base level of quality and consistency. At present though I just start with a plain Django project and make the changes I need manually.
Project configuration #
Typically you'll find:
- .dockerignore
- .editorconfig
- .flake8 (because Flake doesn't support pyproject.toml)
- .gitignore
- .npmrc
- .pre-commit-config.yaml
- .prettierrc
- package.json
- pyproject.toml
- requirements.in
- requirements.txt
My pre-commit file typically includes (in addition to standard whitespace checking etc):
- Bandit
- Black
- Flake8
- absolufy-imports
- djhtml
- isort
- mypy
- prettier
I have tried mypy with
django-stubs, but found it a massive pain to work with due to need to run it inside the Docker container (among other problems), so I just use mypy with these settings:
[mypy]
python_version = 3.10
check_untyped_defs = false
ignore_missing_imports = true
show_error_codes = true
warn_unused_ignores = false
warn_redundant_casts = false
warn_unused_configs = false
warn_unreachable = true
Perhaps not as comprehensive as
django-stubs but good enough to provide some benefit to typing.
Settings #
My typical settings structure will look something like this:
myproject - settings base.py local.py production.py test.py urls.py wsgi.py
Some people like to have an extra level or use a
config package or something like that. Personally I dislike the extra typing that involves.
To keep settings maintainable I use django-environ to use environment variables as much as possible, and django-split-settings to keep inter-settings imports nice and tidy. For example in
local.py instead of this:
from .base import *
INSTALLED_APPS = INSTALLED_APPS + ["debug_toolbar"]
we have:
from split_settings.tools import include
from myproject.settings.base import INSTALLED_APPS
include("base.py")
INSTALLED_APPS = INSTALLED_APPS + ["debug_toolbar"]
Templates #
Generally I avoid per-app templates, but keep the templates all in one place under the top-level directory. Keeping them in one easily-accessible place is nice and consistent, particularly if non-Django developers (say frontend people) want to work on them and don't particularly want to have to hunt around in different apps trying to find the right file (the same goes, of course, for static files).
I've gone back-and-forth on naming individual templates and subdirectories, particularly regarding partials. For a while I used the underscore convention for example
_article.html as opposed to a non-partial
article.html. Nowadays I prefer to move partials under an
includes subdirectory and avoid the underscore naming. This is just a personal preference thing, but it avoids a directory becoming too large with similarly named files. The top level templates directory will have a "junk drawer"
includes directory (in addition to specific includes for things like pagination templates) and individual apps will have their own
includes:
myproject + myproject - templates base.html - django - forms default.html - includes sidebar.html - pagination pagination_links.html ... - articles article.html - includes - article.html
Rule of thumb: if I have to
{% include %} a template (or access it using an inclusion template tag) it goes in the relevant
includes subdirectory, unless the include has a specific function like pagination, forms etc.
Static files #
Static files also live under the top directory:
myproject + myproject - static + css + dist + img + js
The
dist directory contains any files generated by whatever frontend build system (django-compress, esbuild, webpack etc) such as minified/processed CSS and Javascript/sourcemap files. These days I tend to start with a more lightweight frontend but if I'm building a full SPA the static files will probably live in their own frontend directory at the top level of the project (or an entirely different repo).
I prefer Tailwind over Bootstrap, Bulma and other CSS frameworks, at least as a starter default, so I'll probably have a
tailwind.config.js in the top directory as well.
Local apps #
Django apps are a bone of contention for even experienced developers. First of all the word "app" screams "I was a framework designed pre-2007" as the very word has changed in meaning not only in the tech world but in mainstream culture. Perhaps if Django were invented today we'd use something else; the Elixir framework Phoenix has "contexts" for example, but maybe "domains" would be more accurate (although we would then go into the weeds of Domain-Driven Development). Nevertheless, the hardest part about apps is not so much what we call them but deciding on their granularity. Some developers like to make Django more like Rails or Laravel and have a giant single app with separate packages for models, views and so on. I personally like the concept of apps though and prefer to keep them relatively small, with a few models per app.
In any case your apps will change during the lifetime of the project. However I know I'm probably going to need users and I'll probably need somewhere to put any code that's not going to fit anywhere else (or is not particularly business domain-specific): a "junk drawer". You can call your junk drawer app whatever you want, I like "common".
myproject - myproject + common + settings + static + users urls.py wsgi.py
Some projects I've seen have an
apps directory but personally I find this redundant, especially if using absolute imports. I also have a personal aversion to calling packages and modules
utils: if I have a couple of functions that do networking stuff, for example, I'll make a
networking module rather than keep them in a
utils module.
Preferred libraries #
I've already mentioned
django-environ and
django-split-settings. Other favourites include:
- dj-database-url
- django-allauth
- django-cachalot
- django-extensions
- django-model-utils
- django-redis
- django-widget-tweaks
- psycopg2-binary
- redis
- whitenoise
For production I'll probably throw in:
And for local development:
I'm a big fan of pytest (I'm aware some developers are less so). But as I am, I also include these libraries:
- pytest
- coverage
- pytest-django
- factory-boy
- pytest-cov
- pytest-forked
- pytest-mock
- pytest-randomly
- pytest-xdist
If I'm using HTMX (and for new projects that's increasingly the case) I'll also add django-htmx.
For queuing, as mentioned, I go with
rq over Celery unless I have bigger requirements, and starter projects tend to have pretty small requirements.
Regarding requirements: some people recommend splitting these up into local, production, test and so on, but I've found that more micromanagement that I like and it's easy to add a requirement in the wrong place and end up with a broken build. It's not the worst thing if your production has to install pytest libraries, for example, but you can easily miss or delete an important library from your production requirements and have everything work in your CI/CD pipeline only to have a nasty surprise at the end.
In addition I tend to use Heroku or Dokku for early-stage projects, and these work out of the box with a plain
requirements.txt file.
That's not so say a more complex requirements setup is better for a larger and more complex project, but for a starter project (the subject of this article) I want to keep it simple as possible.
I've tried Poetry a few times, but in general I've found it slow and error-prone, particularly inside Docker environments. I see the appeal, and I hope to make it my go-to some day, but right now I find it more trouble than it's worth. Instead I use pip-tools to generate and update my
requirements.txt file from a
requirements.in file.
Docker #
One other library I enjoy for local development - particularly as I tend to start with Heroku/Dokku for early-stage deployments - is Honcho. It makes it easier to add a development Procfile and wrap all my local environment into a single Docker image:
services:
honcho:
build:
context: .
environment:
DATABASE_URL: postgres://postgres:password@postgres:5432/postgres
REDIS_URL: redis://redis:6379/0
SECRET_KEY: seekrit!
restart: on-failure
privileged: true
tty: true
stop_grace_period: '3s'
logging:
options:
max-size: '100k'
max-file: '3'
ports:
- '8000:8000'
depends_on:
postgres:
condition: service_healthy
redis:
condition: service_healthy
volumes:
- ./:/app
- /app/node_modules
command: [
'honcho',
'start',
'-f',
'./Procfile.local'
]
The
Procfile.local file looks something like this:
web: python manage.py runserver 0.0.0.0:8000 worker: python manage.py rqworker mail default watcher: npm run watch
This also means I can keep my local image in a small, simple
Dockerfile. This one includes both Python and frontend (Node/npm):
FROM python:3.10.4-buster
ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONFAULTHANDLER=1
ENV PYTHONHASHSEED=random
ENV NODE_VERSION 17.9.0
RUN curl "" -O \
&& tar -xf "node-v$NODE_VERSION-linux-x64.tar.xz" \
&& ln -s "/node-v$NODE_VERSION-linux-x64/bin/node" /usr/local/bin/node \
&& ln -s "/node-v$NODE_VERSION-linux-x64/bin/npm" /usr/local/bin/npm \
&& ln -s "/node-v$NODE_VERSION-linux-x64/bin/npx" /usr/local/bin/npx \
&& rm -f "/node-v$NODE_VERSION-linux-x64.tar.xz"
WORKDIR /app
COPY ./requirements.txt ./requirements.txt
RUN pip install -r ./requirements.txt
COPY ./package.json ./package.json
COPY ./package-lock.json ./package-lock.json
RUN npm cache clean --force && npm ci
A more complex project might require using multi-stage builds, or multiple Docker images/containers, but again KISS is the principle until I know I need that complexity.
Note that this is a Docker set up for local development. Nowadays there are so many approaches to production deployments that it's very difficult to come up with generalized advice, especially around containerized deployments. You could go with anything from a simple PAAS deployment like Heroku, Dokku or Render up through various AWS or Google Cloud environments or a more complex multi-cloud Kubernetes-based set up (or even your own on-prem hardware), and a lot depends on requirements, experience and budget.
Scripts #
I typically have a top-level
bin directory with simple Bash scripts to access some common things inside Docker containers:
- bin manage npm pytest
So for example
bin/manage will look like:
#!/usr/bin/env bash
set -euo pipefail
docker-compose exec honcho ./manage.py $@
This saves on a lot of tedious typing, so for example I can do
./bin/manage shell instead of
docker-compose exec honcho ./manage.py shell.
Makefile #
I'll usually make a simple
Makefile at some point with common things e.g.
make build: to build containers
make test: run unit tests
make upor
make start
make downor
make stop
make shell: start Django shell
Again, I like a smooth local development environment with minimal typing, especially for things I have to do over and over.
Source code for this article here.
- Previous: Notes on Learning Finnish | https://danjacob.net/posts/startingnewdjangoproject/ | CC-MAIN-2022-40 | en | refinedweb |
The von Mangoldt Function #
In this file we define the von Mangoldt function: the function on natural numbers that returns
log p if the input can be expressed as
p^k for a prime
p.
Main Results #
The main definition for this file is
nat.arithmetic_function.von_mangoldt: The von Mangoldt function
Λ.
We then prove the classical summation property of the von Mangoldt function in
nat.arithmetic_function.von_mangoldt_sum, that
∑ i in n.divisors, Λ i = real.log n, and use this
to deduce alternative expressions for the von Mangoldt function via Möbius inversion, see
nat.arithmetic_function.sum_moebius_mul_log_eq.
Notation #
We use the standard notation
Λ to represent the von Mangoldt function.
log as an arithmetic function
ℕ → ℝ. Note this is in the
nat.arithmetic_function
namespace to indicate that it is bundled as an
arithmetic_function rather than being the usual
real logarithm.
The
von_mangoldt function is the function on natural numbers that returns
log p if the input can
be expressed as
p^k for a prime
p.
In the case when
n is a prime power,
min_fac will give the appropriate prime, as it is the
smallest prime factor.
In the
arithmetic_function locale, we have the notation
Λ for this function. | https://leanprover-community.github.io/mathlib_docs/number_theory/von_mangoldt.html | CC-MAIN-2022-40 | en | refinedweb |
Here, we will see how to solve Length of the Longest Alphabetical Continuous Substring Solution of leet code 2414 problem with code.
You are given an alphabetical continuous string. An alphabetical continuous string is a string consisting of consecutive letters in the alphabet. In other words, it is any substring of the string
"abcdefghijklmnopqrstuvwxyz".
Example:
1) str = "abacaba" Output: 2 Explanation: There are 4 distinct continuous substrings: "a", "b", "c" and "ab". "ab" is the longest continuous substring. 2) str = "abcde" Output: 5 Explanation: "abcde" is the longest continuous substring.
Length of the Longest Alphabetical Continuous Substring Solution code in C++
Code 1:
#include <iostream> using namespace std; int longestContinuousSubstring(string s) { int max = 1, count = 1; for(int i = 0; i < s.length() - 1; i++) { // Compare with ASCII value of character in string if((s[i] + 1) == s[i+1]) { count++; } else { if(max < count) { max = count; } count = 1; } } max = max < count ? count : max; return max; } int main() { cout<<longestContinuousSubstring("abacaba"); return 0; }
Output:
2
To check more leetcode problem’s solution. Pls click given below link: | https://www.techieindoor.com/length-of-the-longest-alphabetical-continuous-substring-solution/ | CC-MAIN-2022-40 | en | refinedweb |
std::tan, std::tanf, std::tanl
From cppreference.com
1-3) Computes the tangent of
arg(measured in radians).
4) A set of overloads or a function template accepting an argument of any integral type. Equivalent to 2) (the argument is cast to double).
Run this code
#include <iostream> #include <cmath> #include <cerrno> #include <cfenv> // #pragma STDC FENV_ACCESS ON const double pi = std::acos(-1); // or C++20's std::numbers::pi int main() { // typical usage std::cout << "tan(1*pi/4) = " << std::tan(1*pi/4) << '\n' // 45° << "tan(3*pi/4) = " << std::tan(3*pi/4) << '\n' // 135° << "tan(5*pi/4) = " << std::tan(5*pi/4) << '\n' // -135° << "tan(7*pi/4) = " << std::tan(7*pi/4) << '\n'; // -45° //(1*pi/4) = 1 tan(3*pi/4) = -1 tan(5*pi/4) = 1 tan(7*pi/4) = -1 tan(+0) = 0 tan(-0) = -0 tan(INFINITY) = -nan FE_INVALID raised | https://en.cppreference.com/w/cpp/numeric/math/tan | CC-MAIN-2022-40 | en | refinedweb |
The best answers to the question “PATH issue with pytest 'ImportError: No module named YadaYadaYada'” in the category Dev.
QUESTION:
I used easy_install to install pytest on a mac and started writing tests for a project with a file structure likes so:
repo/ |--app.py |--settings.py |--models.py |--tests/ |--test_app.py
run
py.test while in the repo directory, everything behaves as you would expect
but when I try that same thing on either linux or windows (both have pytest 2.2.3 on them) it barks whenever it hits its first import of something from my application path. Say for instance
from app import some_def_in_app
Do I need to be editing my PATH to run py.test on these systems? Has Anyone experienced this?
ANSWER:
conftest solution
The least invasive solution is adding an empty file named
conftest.py in the
repo/ directory:
$ touch repo/conftest.py
That’s it. No need to write custom code for mangling the
sys.path or remember to drag
PYTHONPATH along, or placing
__init__.py into dirs where it doesn’t belong (using
python -m pytest as suggested in Apteryx’s answer is a good solution though!).
The project directory afterwards:
repo ├── conftest.py ├── app.py ├── settings.py ├── models.py └── tests └── test_app.py
Explanation
pytest looks for the
conftest modules on test collection to gather custom hooks and fixtures, and in order to import the custom objects from them,
pytest adds the parent directory of the
conftest.py to the
sys.path (in this case the
repo directory).
Other project structures
If you have other project structure, place the
conftest.py in the package root dir (the one that contains packages but is not a package itself, so does not contain an
__init__.py), for example:
repo ├── conftest.py ├── spam │ ├── __init__.py │ ├── bacon.py │ └── egg.py ├── eggs │ ├── __init__.py │ └── sausage.py └── tests ├── test_bacon.py └── test_egg.py
src layout
Although this approach can be used with the
src layout (place
conftest.py in the
src dir):
repo ├── src │ ├── conftest.py │ ├── spam │ │ ├── __init__.py │ │ ├── bacon.py │ │ └── egg.py │ └── eggs │ ├── __init__.py │ └── sausage.py └── tests ├── test_bacon.py └── test_egg.py
beware that adding
src to
PYTHONPATH mitigates the meaning and benefits of the
src layout! You will end up with testing the code from repository and not the installed package. If you need to do it, maybe you don’t need the
src dir at all.
Where to go from here
Of course,
conftest modules are not just some files to help the source code discovery; it’s where all the project-specific enhancements of the
pytest framework and the customization of your test suite happen.
pytest has a lot of information on
conftest modules scattered throughout their docs; start with
conftest.py: local per-directory plugins
Also, SO has an excellent question on
conftest modules: In py.test, what is the use of conftest.py files?
ANSWER:
I’m not sure why py.test does not add the current directory in the PYTHONPATH itself, but here’s a workaround (to be executed from the root of your repository):
python -m pytest tests/
It works because Python adds the current directory in the PYTHONPATH for you.
ANSWER:
Yes, the source folder is not in Python’s path if you
cd to the tests directory.
You have 2 choices:
Add the path manually to the test files, something like this:
import sys, os myPath = os.path.dirname(os.path.abspath(__file__)) sys.path.insert(0, myPath + '/../')
Run the tests with the env var
PYTHONPATH=../.
ANSWER:
I had the same problem. I fixed it by adding an empty
__init__.py file to my
tests directory. | https://rotadev.com/path-issue-with-pytest-importerror-no-module-named-yadayadayada-dev/ | CC-MAIN-2022-40 | en | refinedweb |
SNT: Python Notebooks
Python Notebooks
Direct access to the SNT API from the Python programming language is made possible with the pyimagej module. This enables full integration between SNT and any library in the Python ecosystem.
Installing pyimagej
Follow the instructions given here
Getting Started
To initialize Fiji from Python:
import imagej ij = imagej.init('sc.fiji:fiji') cortical motor neuron (UUID = "AA0100" in the MouseLight database).
d_stats = TreeStatistics(tree) metric = TreeStatistics.INTER_NODE_DISTANCE summary_stats = d_stats.getSummaryStats(metric) d_stats.getHistogram(metric).show() print("The average inter-node distance is %d" % summary_stats.getMean()) | https://imagej.net/index.php?title=SNT:_Python_Notebooks&oldid=39289&printable=yes&mobileaction=toggle_view_mobile | CC-MAIN-2020-16 | en | refinedweb |
A Scientific Dissent From Darwinism
A Scientific Dissent From Darwinism is a petition publicized in 2001 by the Discovery Institute, a creationist "think" tank, which attempts to push creationism, in the guise of Intelligent design, into public schools in the United States.[2] The petition expresses denial about the ability of genetic drift and natural selection to account for the complexity of life. It also demands that there should be a more careful examination of Darwinism. The petition was signed by about 700 individuals, with a wide variety of scientific and non-scientific backgrounds when first published. It now contains over 1200 signatures.[3]
The Dissent is reminiscent of the 1931 anti-relativity book, Hundert Autoren Gegen Einstein (A Hundred Authors Against Einstein),[4] which only included one physicist, and can be seen now as "a dying cry from the old guard of science" based primarily on philosophical objections.[1]
The petition states that:
The petition continues to be used in Discovery Institute intelligent design campaigns in an attempt to discredit evolution and bolster claims that intelligent design is scientifically valid by claiming that evolution lacks broad scientific support. However, the language of the statement is misleading. It frames the argument in a way that anyone could agree with it. So long as they don't know the Discovery Institute's true motivations (which is to undermine evolution using deceit and trickery, not to show any kind of genuine fallibility with it), anyone who is open to the idea of scientific inquiry would agree that they should be skeptical of everything, including evolution. If only the writers of the statement (i.e. creationists) were skeptical of their own ideas, which they clearly aren't.
The petition is considered a fallacious Appeal to authority, whereby the creationists at the Discovery Institute are attempting to prove that there is a dissent from "Darwinism" by finding a few creationist scientists to support the statement. The roughly 700 dissenters who originally signed the petition would have represented about 0.063% of the estimated 1,108,100 biological and geological scientists in the US in 1999, except, of course, that three-quarters of the signatories had no academic background in biology.[5][6] (The roughly 150 biologist Darwin Dissenters would hence represent about 0.013% of the US biologists that existed in 1999.) As of 2006, the list was expanded to include non-US scientists. However, the list nonetheless represents less than 0.03% of all research scientists in the world.[7] Despite the increase in absolute number of scientists willing to sign the dissent form, the figures indicate the support from scientists for creationism and intelligent design is steadily decreasing.
Since scientific principles are built on publications in peer-reviewed journals, discussion in open forums, and finally through consensus, the use of a petition should be considered the last resort of a pseudoscience rather than a legitimate scientific dissent from the prevailing consensus.
[edit] Sowing “controversy”
The claims of the document are of course rejected by the scientific community, but, as Robert T. Pennock points out, proponents of intelligent design are “manufacturing dissent” to explain the absence of scientific debate of their claims:
Furthermore, the statement itself, including the title, is deceptive, as it refers to evolution as “Darwinism” or “Darwinian theory”, expressions that will mean different things to different people, even though what the authors have in mind is evolution due to natural selection. As Larry Moran puts it:
In fact, when the National Centre for Science Education contacted several of the signatories, many of them admitted that they had no problem with common descent or evolution at all; one of them said that his "dissent mainly concerns the origin of life," but the theory of evolution is, of course, not a theory about the origin of life at all (though if the statement is read literally, such concerns would in fact be a reason to assent to it).[10] In fact, several of the signatories - including quite a few of those signatories who have a real, respectable research record - have explicitly denied that they have any problems with evolution, but signed the list for other reasons (e.g. Patricia Reiff, Phillip Savage, Ronald Larson).
Of course, the Discovery Institute is using the list to promote the idea that evolution is the subject of wide controversy and debate within the scientific community (despite the minuscule percentage of actual scientists that have signed up for it).[11] It has, for instance, been used to support their Teach the Controversy campaigns and their relatives (“Critical Analysis of Evolution”, “Free Speech on Evolution”, “Stand Up For Science”).[12] For instance, with regard to the Teach the Controversy campaigns, the Institute has claimed “evolution is a theory in crisis” that is disputed widely within the scientific community, citing the list as evidence or a resource, and hence also that this information is being withheld from students in public high school science classes along with “alternatives” to evolution such as intelligent design.[13] In 2002 Stephen Meyer presented the list to the Ohio Board of Education to promote Teach the Controversy, citing it as demonstrating the existence of genuine controversy over Darwinian evolution;[14] in the 2005 Kansas evolution hearings he similarly cited the list in support of there being “significant scientific dissent from Darwinism” that students should be informed about.[15]
The Discovery Institute-related organization Physicians and Surgeons for Scientific Integrity manages “Physicians and Surgeons who Dissent from Darwinism”, a similar list for medical professionals. The institute has also compiled and distributed other misleading lists of local scientists during controversies over evolution education in Georgia, New Mexico, Ohio, and Texas.[16]
[edit] Project Steve
As a tongue-in-cheek response to the list the National Center for Science Education started Project Steve, a list of living scientists named "Steve" (or variants of the name) who support evolution. As of February 9 2012 the list contained 1187 signatures, of which two-thirds are qualified biologists. As simple random searches will reveal, the signatories to Project Steve are overall far more consistently active scientists and researchers with real credentials.
By comparison The Discovery Institute's list had 12 signatories whose names would have qualified them for the Steve list per 2012. The twelve constitute a motley crew that contains at least some non-scientists (Meyer, Cheesman), certified crackpots (Gift), and one single biologist named C. Steven Murphree, who, to add insult to injury, later repudiated his involvement with the Discovery list and signed Project Steve instead.[17]
[edit] The List
The list of signatories, as per December 2011. From a quick glance at the list the Texas A&M University seems vastly over-represented and close to being a hub for creationism (16 signatories signed as faculty or retired faculty, as well as 10 signatories listed as receiving their Ph.Ds from the institution). Georgia Institute of Technology is rather well represented as well (9 signatories listed as faculty), as is the Autonomous University of Guadalajara, Mexico (10 signatories listed as faculty); by comparison, a well-known creationist university such as Cedarville "only" had five signing faculty members (though the real numbers turn out to be far higher, since many Cedarville faculty seem to prefer to sign with their degree-awarding institution instead). Note that, apart from David DeWitt, the signatories among the Liberty University faculty tend not to mention their affiliation but rather the institution that awarded them their degrees. The same applies to Oral Roberts University.
Another striking thing about the list is the sheer number of signatories who have made PR efforts on behalf of creationism, including outreach efforts such as writing books targeted at children or students, and how few of them have actually attempted to do anything resembling research related to evolution or intelligent design.
Note also that many of the signatories are listed by the institution where they obtained their Ph.Ds, which does not indicate any current affiliation. So, for instance “Alfred G. Ratz, Ph.D. Engineering Physics, University of Toronto” does not currently have any affiliation with the University of Toronto, and Google does not reveal any current affiliation for Ratz whatsoever. In fact, relatively quick searches reveal that a very large percentage of the signatories have no academic affiliation at all; the number of biologists actively researching biological issues even remotely related to evolution can be counted on one hand.
Note also that deceased signatories are not removed from the list, and not consistently kept track of, something that further contributes to inflating the number of signatories. Signatories who are known to have died since signing are marked with "†" (and include far more than the ones actually noted as deceased on the original Discovery list), but there may be more than the ones actually marked as deceased here. A large percentage of those signatories who do have a research record are retired.
[edit] B
- David Berlinski,[18] Ph.D. Philosophy, Princeton University; Senior Fellow at the Discovery Institute. Has written popular books on mathematics, but not involved in scientific research. Famous for his enumerative “Cows cannot evolve into whales” argument.[19] Also writes for Uncommon Descent.
- Marco Bernardes, Professor & Chair, Department of Mechanical Engineering, Federal Center of Technological Education, Minas Gerais. Currently head of the Mechanical Engineering Department at CEFET-MG. Has two publications on solar heaters.
- C. Biedebach, Mark, Professor Emeritus of Physiology, California State University, Long Beach. Google scholar returned two papers, one on muscular reactions to toxins from sea urchins (1978), the other published in Acupuncture & electro-therapeutics research in 1989. Currently affiliated with (?) Caroline Crocker’s creationist and global warming denialist organization American Institute for Technology and Science Education, and apparently currently writing a book called “Evolution is a Weasel Word.”
- P. Birch, Keith, Ph.D. Atmospheric Physics, U. of Southampton. No current affiliation or research (except a single 1991 paper) found.
- Livingston Birchfield, Gayle, Ph.D. Biology, University of Missouri, Columbia. No current affiliation found; does not appear to be a scientist.
- Bishop, Phillip, Professor of Kinesiology, University of Alabama. Has some publications in an unrelated field. Used to teach “optional” classes with a “Christian perspective” for his exercise physiology students (in which he promoted creationism). When his employers asked him to stop, Bishop took them to court and lost[20] (a case that has been widely misused by creationists as a limitation of Academic freedom, since it is obviously a matter of academic freedom when a professor is denied the opportunity to use his classrooms to convert people). Has also written for various fundamentalist venues.
- Blackstone, Gage, Doctor of Veterinary Medicine, Texas A&M University (a professional, not a research doctorate)
- F. Blick, Edward, Ph.D. Engineering Science, University of Oklahoma. Also on James Inhofe’s list of 650 scientists who supposedly dispute the global warming consensus,[21] and on the record saying that “[t,” and that “[t]his whole [AGW] scheme is a ‘Trojan Horse’ for global socialism!”[22] Also known for trying to show that modern science is accurately foreshadowed in the Bible. For instance, when Isaiah says that the obedient ones “shall mount up with wings as eagles; they shall run, and not be weary; they shall walk and not faint” (Isa. 40:28-31), Blick takes it as an accurate description of modern research in aerodynamics, which shows that eagles can fly for a long time without getting tired. Blick seems to have done real research (unrelated field) once upon a time, but the only papers found written during the last 20 years are “Global Warming Myth and Marxism” and “Obama’s ‘Bad Molecule CO2 Myth’ Is a Dagger in the Back of the US Economy”, neither of which is, needless to say, a peer-reviewed publication.
- Blomberg, Sture, Associate Professor of Anesthesia & Intensive Care Medicine, The Sahlgren University Hospital. Has done some research in unrelated fields. Has also claimed, in letters to the editor and various blogs, that the theory of evolution is fraud mostly because of the (alleged) absence of transitional fossils.[23] Member of the Clapham Institue, a fundamentalist Christian think tank.
- Blomgren, Robert, Ph.D. Mathematics, University of Minnesota. No information, research, or affiliation found.
- Bloom, John, Ph.D. Physics, Cornell University. Currently Professor at the faculty of Christian Apologetics at Biola University (he is also M.Div and Ph.D in Ancient Near Eastern Studies). Has published extensively on theology, including “Does Intelligent Design Theory Help Christian Apologitcs” and “Intelligent Design and Evolution: Do We Know Yet?”, but done no scientific research for the last 30 years. Claims that “Darwinists do not have a clue how life first got started ‘by itself,’ [which it does not purport to] as was well documented in the recent movie Expelled: No Intelligence Allowed”[24] and that the idea that humans and chimp share a common ancestor is ridiculous.
- D. Blumer, Aric, Ph.D. Computer Engineering, Virginia Tech. Chief Technology Officer, SDG. Has a few publications; unrelated fields.
- P. Bodey, Gerald, Emeritus Professor of Medicine, Former Chairman, Department of Medical Specialties, University of Texas M.D. Anderson Cancer Center. Has done research in an unrelated field.
- Bohlin, Raymond, Ph.D. Molecular & Cell Biology, University of Texas, Dallas (that’s where he received his Ph.D.; he is currently affiliated with Probe Ministries). Research Fellow of the Discovery Institute 's Center for the Renewal of Science and Culture. Though his credentials are fine, his research seems mostly concerned with questions such as “Is Masturbation a Sin?” and “Is pole-dancing OK for believers?” Also signatory to the CMI list of scientists alive today who accept the biblical account of creation.
- M. Bohn, Edward, Ph. D. Nuclear Engineering, University of Illinois. No affiliation, information or research found.
- Boldt, Yvonne, Ph. D. Microbiology, University of Minnesota. Currently Biology and Chemistry instructor at Providence Academy, a fundamentalist "college-preparatory" school. Contributed to a paper or two in the nineties, but seems not to have done any research or been involved in science since then; instead, she is involved in church groups and teaches religion in her parish. Known to have argued that Intelligent Design should be taught in public schools, and for supporting the Ohio State Board of Education’s Teach the controversy-friendly, Discovery Institute-inspired 2004 “lesson plan” for the “Critical Analysis of Evolution”.[25]
- Bolender, David, Assoc. Prof., Dept. of Cell Biology, Neurobiology & Anatomy, Medical College of Wisconsin. Has done research in an unrelated field; also affiliated with the Creation Science Society of Milwaukee.
- Jonathan C. Boomgaarden, Ph.D. Mechanical Engineering, University of Wisconsin. Currently employed by the General Electric Corporation. No research located.
- William Bordeaux, Chair, Department of Natural & Mathematical Science, Huntington College (a Christian liberal arts college). Has said that “The theory of evolution, as it is presently taught, is fraught with both scientific and philospohical problems. Intelligent design continues to provide a thoughtful response to these issues and deserves to be included in the life science curriculum.”[26] Does not appear to be involved in research.
- John Bordelon, Ph.D. Electrical Engineering, Georgia Institute of Technology. No updated information or research found.
- David Bossard, Ph.D, Mathematics, Dartmouth College. Also M.Div. Hardly a scientist, but has worked on scientific computer modeling and simulation, with emphasis on military applications. Author of “God’s Law, Creation Law: Social Theory vs. Brute Fact”.
- Gregory D. Bossart, Director and Head of Pathology, Harbor Branch Oceanographic Institution. Appears to have some publications that are not obviously unrelated, making Bossart one of the few signatories with real credentials.
- David Bourell, Professor Mechanical Engineering, University of Texas, Austin. Has a decent research record, in an unrelated field.
- Mark P. Bowman, Ph.D. Organic Chemistry, Pennsylvania State University. Currently chemist for PPG Industries in Ohio and OSI Specialties, Inc. Has some publications and patents in an unrelated field.
- Denis M. Boyle, Ph.D. Medical Biochemistry, U. of Witwatersrand. Appears (possibly) to have publications; no current affiliation found.
- Begona M. Bradham, Ph.D. Molecular Biology, University of South Carolina. No current affiliation or current research found.
- Walter Bradley, Distinguished Professor of Engineering, Baylor University. One of the pioneers of Intelligent Design and the Wedge strategy. Well-known creationist lecturer, e.g. at conferences that “thoroughly equip church members and leaders with generally non-technical, cutting-edge information [… and] demonstrate practical steps to use design-evidence as a thoughtful bridge to skeptics who have been taught through Darwinian evolution that God is a myth.”[27] Has done little if any research, apart from contributing chapters to creationist books and publications in religious journals.
- Ernest L. Brannon, Professor Emeritus, (Ph.D. Fisheries, University of Idaho). Seems to have done some real research on ecology.
- Herman Branover, Professor of Mechanical Engineering, Ben-Gurion University. A pioneer of the field of magnetohydrodynamics[28] and president of the SHAMIR Association of Religious Professionals from the USSR, known for his studies of Jewish mysticism and spirituality. With rabbi Joseph Gisburg he has written “Mind over Matter”, which espouses young earth creationism and reconciles the assertion that the earth is ca. 6000 years old with science by claiming that “science formulates and deals with theories and hypotheses while the Torah deals with absolute truths;” just observe how scientists disagree with each other – clearly they are not uncovering absolute truths. Problem solved.
- James R. Brawer, Professor of Anatomy & Cell Biology, McGill University (Center for Medical Education). Has published research (unrelated) on medical education and teaching; also some medical papers, which seem unrelated to evolution. Involved in apologetics.
- John Brejda, Ph.D. Agronomy, University of Nebraska, Lincoln. Principal Statistician at Alpha Statistical Consulting, Inc. Has some publications in unrelated fields.
- Gregory J. Brewer, Prof. of Neurology, Medical Microbiology, Immunology and Cell Biology, Southern Illinois University School of Medicine. A real scientist with a respectable research record. Says he accepts microevolution, but thinks that science has failed to show that one species can evolve into another, which means that he has failed to grasp the basic biological understanding of species and endorses a bogus distinction. Doesn’t accept an old universe either: “Based on faith, I do believe in the creation account.”
- Joel Brind, Professor of Biology, Baruch College, City University of New York. The main proponent of the absolutely debunked[29][30] idea that abortion leads to breast cancer, and consultant and expert witness for pro-life groups such as Christ's Bride Ministries. Brind presented the alleged support for the connection between abortion and breast cancer in a meta-analysis, and obtained his results through various methodological weaknesses, including selection bias. His later papers on the same theme was e.g. published in the pseudojournal[31] JPANDS, the house journal of the crank organization Association of American Physicians and Surgeons.
- Glen O. Brindley, Professor of Surgery, Director of Ophthalmology, Scott & White Clinic, Texas A&M University, H.S.C. Has a few publications in unrelated fields (the latest from 1992).
- Rudolf Brits, Ph.D. Nuclear Chemistry, University of Stellenbosch. Currently Deputy Director of Economic Development, Petoria Inc. Has no academic affiliation or research. Not a scientist.
- Frederick Brooks, Kenan Professor of Computer Science, University of North Carolina at Chapel Hill. Author of “The Mythical Man-Month” and “No Silver Bullet”. Respected researcher in his field, in particular virtual environments and scientific visualization.
- Neil Broom, Associate Professor, Chemical & Materials Engineering, University of Auckland. Well known as an ID activist in New Zealand. Author of “How Blind Is the Watchmaker?: Nature’s Design & the Limits of Naturalistic Science,” which was glowingly reviewed by Phillip Johnson, and “Life's X Factor: The missing link in materialism's science of living things.” Fellow of Dembski’s think tank International Society for Complexity, Information, and Design, and seems to have some publications on biomechanics (tissue and spinal biomechanics).
- Daniel M. Brown, Ph.D. Physics, Catholic University of America. No information, research or affiliation found.
- John Brown, Research Meteorologist, National Oceanic and Atmospheric Administration. No further information located.
- Mary A. Brown, DVM (Veterinary Medicine), Ohio State University. Holds a professional rather than a research doctorate.
- Olen R. Brown, Former Professor of Molecular Microbiology & Immunology; University of Missouri, Columbia. Retired (though can still be hired as a life sciences expert witness). Has done some research, and is the author of “Miracles”, a book that criticizes science for not recognizing miracles or accepting God’s authorship of the universe.[32]
- Paul Brown, Assistant Professor of Environmental Studies, Trinity Western University. Currently Associate Professor, who has done some research in unrelated fields. Known proponent of Intelligent Design,[33] and gives talks and presentations at various religious venues.
- John Brumbaugh, Emeritus Professor of Biological Sciences, University of Nebraska, Lincoln. A standout on the list, Brumbaugh is one of few – perhaps the only – signatories on the list who has published directly on biological evolution in respectable venues. However, his published research does not challenge evolution.
- Nancy Bryson,[34] Associate Professor of Chemistry, Mississippi University for Women at the time of signing the petition; currently on Wingnut welfare after her contract with the university was not renewed. Often touted as an example of the persecution of dissidents in science for her creationism. Not a scientist.
- Douglas R. Buck, Ph.D. Nutrition and Food Sciences, Utah State U. Has two papers from the 70s; no updated information/research found.
- Eugene Buff, Ph.D Genetics, Institute of Developmental Biology, Russian Academy of Sciences. VP, Consulting at yet2.com. No research found; appears to be a business person and not a researching scientist.
- Richard Buggs, DPhil Plant Ecology & Evolution, Oxford University. Currently Research Fellow, Queen Mary University of London. Apparently a young earth creationist,[35] though he has several publications in various types of journals, some of which seem legitimate. Member of the “scientific panel” of the British creationist organization Truth in Science.
- John L. Burba, Ph.D. Physical Chemistry, Baylor University. Chief Technology Officer and Executive Vice President, Molycorp, Inc. Seems to have some patents, but no scientific research to his name.
- Stuart C. Burgess, Professor of Design & Nature, Dept. of Mechanical Engineering, Bristol University. Also signatory to the CMI list of scientists alive today who accept the biblical account of creation. Member of the Council of Reference of the British creationist organization Truth in Science and signatory to the 2002 Estelle Morris letter, proselytizing creationism in schools.[36] Tells children they would go to hell if they believe the theory of evolution and that modern Cosmology and the theory of evolution are no more than “ploys by Satan to divert man from belief in God and a literal interpretation of Genesis.”[37] Counts his articles for Answers in Genesis as peer reviewed scientific articles. Appeared on video in Ken Ham's main presentation during his debate with Bill Nye.
- Laura Burke, Former Associate Prof. of Industrial Engineering, Lehigh U. Did research in the 90s. No current affiliation or research found.
- N. Ricky Byrn, Ph.D. Nuclear Engineering, Georgia Institute of Technology. No affiliation, research or information found.
[edit] C
- Donald Calbreath, Professor, Department of Chemistry, Whitworth College (a small Presbyterian liberal arts college). Retired. Currently Research Team Member at Science and the Spirit: Pentecostal Perspectives on the Science/Religion Dialogue at Calvin College, where he works on defending Non-materialist neuroscience.[38] Has previously written a few religious screeds concerning Stem cell research, abortion, “miracles of healing”, the relationship between Pentecostalism and science (particularly notable is the paper “7 Serotonin and Spirit: Can There Be a Holistic Pentecostal Approach to Mental Illness”), creationism, and zeh gays. Also the guy behind this quote. Does not appear to have any research record whatsoever. Not a scientist.
- John B. Cannon, Ph.D. Organic Chemistry, Princeton University. Currently visiting assistant professor at Trinity International University, a religious institution. Has some publications in an unrelated field.
- Arnold Eugene Carden, Professor Emeritus of Engineering Science & Mechanics, University of Alabama. Appears to have some (rather old) publications in an unrelated field.
- Russell Carlson,[39] Professor of Biochemistry & Molecular Biology, University of Georgia. Fellow of William Dembski’s International Society for Information, Complexity, and Design; testified during the Kansas Evolution hearings. Does research on bacterial infections of cells, which is hardly closely related to evolution but at least makes Carlson one of few signatories with anything remotely resembling relevant credentials. Also on the editorial team of Bio-Complexity.
- Richard L. Carpenter, Jr., Ph.D. Meteorology, University of Oklahoma. CCM, Weather Decision Technologies, Inc. Seems to have done real research, in an unrelated field.
- Ronald S. Carson, Ph.D. Nuclear Engineering, University of Washington. Currently Technical Fellow in Systems Engineering at The Boeing Company and Adjunct Professor in Systems Engineering at the Missouri University of Science & Technology. Has published on the unrelated topic of Systems Engineering.
- David Richard Carta, Ph.D. Bio-Engineering, U. of California, San Diego. President, Telaeris Inc. No research/academic affiliation found.
- Jarrod W. Carter, Ph.D. Bioengineering, University of Washington. Co-founder of Origin Engineering LLC, a consulting firm specializing inautomotive accident reconstruction and biomechanical injury analysis. Appears to have no academic affiliation.
- Reid W. Castrodale, P.E., Ph.D. Structural Engineering, University of Texas, Austin. Director of Engineering at Carolina Stalite Company and has a few publications concerning concrete bridges.
- Chris Cellucci, Associate Professor of Physics, Ursinus College. Seems to have a decent research record, in an unrelated field.
- Emilio Cervantes, Ph.D. Molecular Biology, University of Salamanca. Staff Scientist of the Consejo Superior de Investigaciones Científicas. Has some publications, however, mostly in the Journal of Plant Physiology. He has rejected natural selection but not evolution, which can be seen here; his criticism of natural selection is his claim that it is not testable and is a tautology.
- Arthur Chadwick, Ph.D. Molecular Biology, University of Miami. Currently affiliated with the Earth History Research Center, which seeks to “develop a scientifically credible view of earth history consistent with scripture.” Young earth creationist who rejects naturalism in favor of Scriptural presuppositionalism, and argues that naturalists geologists need to have an open mind and admit that they may be wrong.[40] Does nevertheless have some real publications as well, though none of them supports creationism and they are generally older – his newer work is primarily published in non-scientific venues.
- David Chambers, Physicist, Lawrence Livermore National Laboratory. Has done research in an unrelated field.
- Mark A. Chambers, Ph.D. Virology, University of Cambridge. No current affiliation or recent publications found.
- Scott A. Chambers, Affiliate Professor of Chemistry and Materials Science & Engineering, University of Washington. Has a decent research record in an unrelated field. Claims that ID “provides a broad, satisfactory framework for understanding the origin of the cosmos, and the origin, diversity and complexity of life on earth.”[41]
- Chi-Deu Chang, Ph.D. Medicinal Chemistry, U. of New York, Buffalo. Appears to have some publications; no updated affiliation found.
- †David Chapman, Senior Scientist, Woods Hole Oceanographic Institution
- Gene B. Chase, Professor of Mathematics and Computer Science, Messiah College (a fundamentalist Evangelical institution). Claims to work on the relationship between mathematics and faith, but seems to have no publications found apart from articles on his institution’s homepage. His "research" also encompasses arguments to the effect that “it’s impossible to be both gay and evangelical”.[42]
- Jan Chatham, Ph.D. Neurophysiology, University of North Texas. Currently Research Associate at Probe Ministries. Has no research background, and is not a scientist.
- Stephen J. Cheesman, Ph.D. Geophysics, University of Toronto. No information on research or current academic affiliation found. Appears to be involved in various ministries.
- Guang-Hong Chen, Assistant Professor of Medical Physics & Radiology, U. of Wisconsin-Madison. Does research in an unrelated field.
- T. Timothy Chen, Ph.D. Statistics, University of Chicago. Presently at Southwestern Baptist Theological Seminary and not currently involved in scientific research, though he has a research background in an unrelated field.
- Frank Cheng, Associate Professor of Chemistry, University of Idaho. Has some research publications in an unrelated field. Also appears to have tried to argue for the authenticity of the Shroud of Turin.[43]
- Shun Yan Cheung, Associate Professor of Computer Science, Emory University. Has some publications in an unrelated field. Hardcore creationist who runs a webpage presenting the overwhelming evidence for the accuracy of the Bible and the falsity of evolution targeted at non-specialists. All the standard creationist PRATTs are there.
- Malcolm D. Chisholm, Ph.D. Insect Ecology, University of Bristol. No information, affiliation, or research found.
- Shing-Yan Chiu, Professor of Physiology, University of Wisconsin, Madison. A real scientist with a respectable research record.
- Gerald Chubb, Associate Professor of Aviation, Ohio State University. Has some publications in an entirely unrelated field. Also on the Board of Directors of Pioneer Bible Translators’ ministry, working particularly in the areas of Personnel and Church Mobilization.
- John Cimbala, Professor of Mechanical Engineering, Pennsylvania State University. Also signatory to the CMI list of scientists alive today who accept the biblical account of creation. Seems to have a few publications in unrelated fields. Also contributor to “In Six Days - Why Fifty Scientists Choose to Believe in Creation”, and has arranged seminars on the “evolution/creation” controversy where he attempts to show that “all attempts to harmonize scripture with evolutionary philosophy (such as the day-age theory, the gap theory, etc.) have failed,”[44] in addition to presenting all the usual creationist PRATTs. His rants have also been picked up by Creation.com and the Christian Answers Network.
- Jorge Pimentel Cintra, University Professor, Earth Sciences, University of São Paulo. Does research in an unrelated field.
- Donald Clark, Ph.D. Physical Biochemistry, Louisiana State University. Currently Vice President of Development and Medical Affairs at Houston Biotechnology Inc. (no updated information about the company located), and contributor to the Creation Moments website. Young earth creationist who rejects astronomy as well as biology since it conflicts with his reading of the Bible, saying that “[w]e must interpret our physical observations based on the scripture and not interpret the scripture based on our physical observations.”[45]
- Kieran Clements, Assistant Professor, Natural Sciences, Toccoa Falls College (a fundamentalist, end-times focused, “Christ-centered educational institution”). Has contributed to a few papers.
- John Cogdell, Professor of Electrical & Computer Engineering, University of Texas, Austin. Has done real research in an unrelated field (though for the last decades he appears primarily to have written textbooks). Associated with the Christian Leadership Ministries.[46]
- Jennifer M. Cohen, Ph.D. Mathematical Physics, New Mexico Institute of Mining and Technology. Appears to be the owner of 4physics.com, and has some research publications, though few if any from the last decade. Also a global warming denialist associated with the Science and Public Policy Institute.
- Harold D. Cole, Professor of Physiology, Southwestern Oklahoma State University. No research or further information found.
- William B. Collier, Ph. D. Physical Chemistry, Oklahoma State U. Currently Professor of chemistry at Oral Roberts University. Staunch defender of Intelligent Design who has claimed that religion and science are the same. No serious research found; hardly a scientist.
- Leon Combs, Professor & Chair, Chemistry & Biochemistry, Kennesaw State U. Retired. Used to do research in an unrelated field.
- Nicholas Comninellis, Associate Professor of Community and Family Medicine, University of Missouri, Kansas City. No actual research found, but Comninellis is the author of several creationist books, including “Creative Defense: Evidence Against Evolution” (“Philosophically, the dogma of evolution is a dream, a theory without a vestige of truth,” whatever that means) and “Darwin’s Demise: Why Evolution Can’t Take the Heat”. Harun Yahya is apparently a fan of Comninellis’s writings.
- Keith F. Conner, Ph.D. Electrical Engineering, Clemson University. Has contributed to research in an unrelated field.
- David Conover, Ph.D. Health Physics, Purdue University. No information found.
- John D. Cook, Head of Software Development, Department of Biostatistics & Applied Mathematics, U. of Texas, M.D. Anderson Cancer Center. Has a decent research record, but it seems unrelated to evolution.
- Wayne L. Cook, Ph.D. Inorganic Chemistry, University of Kentucky. Google Scholar returns some publications from the early 70s, in an unrelated field, but nothing more recent.
- Ronald R. Crawford, Ed.D. Science Education, Ball State University. No research or current affiliation found.
- Caroline Crocker, Ph.D. Immunopharmacology, University of Southampton. Lost her position at George Mason University (did not have her contract renewed[47]) after lecturing on Intelligent design using discredited creationist arguments in her class on evolution. Ended up on wingnut welfare at the creationist IDEA center[48], which is less than fully accurate regarding her incompetence.[49] One of the main characters of Expelled: No Intelligence Allowed.
- Stephen Crouse, Professor of Kinesiology, Texas A&M University. Has published research on exercise physiology, and does have a B.S. in biology. Associated with the Leadership University, where he describes his strong religious beliefs. (Leadership University is not a university but ‘a “one-stop shopping superstore” in the marketplace of ideas’,[50] sponsored by the Christian Leadership Ministries, a branch of the Campus Crusade for Christ). He has also petitioned the Texas Board of Education to adopt the teaching of creationism.[51]
- Malcolm A. Cutchins, Ph. D. Engineering Mechanics, Virginia Tech. Professor Emeritus, Auburn University. Has done some research on unrelated matters, and is currently traveling around lecturing about creationism and against evolution to church groups. Also signatory to the CMI list of scientists alive today who accept the biblical account of creation.
[edit] D
- Danielle Dalafave, Associate Professor of Physics, College of New Jersey. No publication record located; does not seem to be a scientist. Cites fine tuning of physical constants as her reason to support Intelligent Design, even though it has nothing to do with evolution.[52]
- Cham Dallas, Professor, Pharmaceutics & Biomedical Science, University of Georgia. Publishes in unrelated fields. Is something of a celebrity expert on toxicology issues related to WMD and nuclear power, and has made numerous TV appearances (CBS) over the last decade. Also affiliated with the Christian Leadership Ministries.
- Lisanne D’Andrea-Winslow, Ph. D. Cell Biology & Biochemistry, Rutgers University. Professor of Biology at Northwestern College (a small fundamentalist school) and affiliated with the Biologic Institute. Has a few publications but none that seem to touch on evolution. Has also written quite a bit of poetry (including “In Praise of Creation”). Signatory to an apparently Discovery Institute initiated Amicus Brief supporting Intelligent Design in the Kitzmiller v. Dover case.[53]
- Marc C. Daniels, Assistant Professor of Biology, William Carey College. No research record found.
- Paul S. Darby, Ph.D. Organic Chemistry, University of Georgia. Contributing scientist at the Cleaning Industry Research Institute. Contributed to a 1988 paper, but no other research record found. Signatory to an apparently Discovery Institute initiated Amicus Brief supporting Intelligent Design in the Kitzmiller v. Dover case.[54]
- Holger Daugaard, Ph. D. Agronomy, Danish Institute of Agricultural Sciences. Has earlier published some papers on agriculture. Seventh Day Adventist and currently principal of Vejlefjordskolen, a small, private, fundamentalist elementary- and high-school.
- Melody Davis, Ph.D. Chemistry, Princeton University. Affiliated with “Parents Holding Doctorates”, a student assistance service in Ohio. Not a scientist; no research found.
- †John A. Davison,[55] Emeritus Associate Professor of Biology, University of Vermont. Famous crank and the guy behind A Prescribed Evolutionary Hypothesis. Also on the List of Internet kooks.
- Thomas Deahl, Ph.D. Radiation Biology, The University of Iowa. Currently Adjunct Associate Professor in the Department of Developmental Dentistry at the UTHSCSA Dental School. Has done some research in a rather clearly unrelated field.
- Glen E. Deal, Ph.D. Electrical Engineering, Florida Institute of Technology. Systems Engineer at Northrop Grumman Aerospace Systems. Google scholar returns a single (2009) unrelated paper.
- Hans Degens, Reader in Muscle Physiology, Manchester Metropolitan University. Currently Assistant Professor, Department of Physiology, Catholic University Nijmegen. Has a decent research record in an at best tangentially related field, but has also written anti-evolution articles other venues.[56]
- Ronald D. DeGroat, Ph.D. Electrical Engineering, U. of Colorado, Boulder. Senior Scientist, Broadcom; does wholly unrelated research.
- Robert DeHaan, Ph.D. Human Development, U. of Chicago. Has done some unrelated research; author “Educating Gifted Children”.
- William DeJong, Ph.D. Computer Science, University of Groningen. No affiliation or research found.
- Harold Delaney, Professor of Psychology, University of New Mexico. Real researcher in an unrelated field. Also Templeton grant recipient known to have taught an honors seminar in 2003 and 2004 on “Origins: Science, Faith and Philosophy” at the University of New Mexico. The course, which was co-taught with Michael Kent (who is apparently also a signatory to this list), included readings on “both sides” as well as a guest lecture by David Keller, another intelligent-design advocate (and signatory) on the New Mexico faculty.[57]
- Michael Delp, Professor of Physiology, Texas A&M University. Currently Professor and Chair, Department of Applied Physiology and Kinesiology. Does research in an unrelated field.
- Charles N. Delzell, Professor of Mathematics, Louisiana State University. Does research in unrelated fields. Also signatory to a letter to Governor Bill Haslam in support of Louisiana’s creationist-friendly HB 368 bill.[58]
- Kenneth Demarest, Professor of Electrical Engineering, University of Kansas. Does research in unrelated fields.
- William Dembski, Ph.D. Mathematics, University of Chicago
- Lawrence DeMejo, Ph.D. Polymer Science and Engineering, U. of Massachusetts, Amherst. No current affiliation or information found.
- David Deming, Associate Professor of Geosciences, University of Oklahoma. Adjunct faculty member at the conservative think tanks the Oklahoma Council of Public Affairs and the National Center for Policy Analysis, and known for criticizing the notion of “sustainability” since “technological progress is our birthright and destiny,” which may not completely assuage all worries. Also on James Inhofe’s list of 650 scientists who supposedly dispute the global warming consensus.[59], claiming that global warming hysteria is “generated by journalists who don't understand the provisional and uncertain nature of scientific knowledge,” and that “global warming is a scientific question, not a moral one.” Has also claimed “the last two years of global cooling have nearly erased 30 years of temperature increases. To the extent that global warming ever existed, it is now officially over,”[60] which is patently absurd. Has done some real research as well, apparently, and has – curiously enough – claimed that Intelligent Design cannot be formulated as a scientific hypothesis and is scientifically useless.
- Charles Detwiler, Ph.D. Genetics, Cornell University; currently professor of biology at Liberty University. As expected from a proponent of creationism he is not particularly concerned with doing research, but very concerned about Intelligent Design’s position in public schools.[61]
- David A. DeWitt, Ph.D., Case Western Reserve University (thesis title, "Interactions of Glial Cells in Alzheimer's Disease"). Chair, Department of Biology & Chemistry, Liberty University – yes he is, and that tells you more about Liberty University than anything else. He teaches, for instance, a required course in “creation studies”.[62] Also a signatory to the CMI list of scientists alive today who accept the biblical account of creation, and has published papers in Answers in Genesis’s house journal Answers Research Journal.
- Eshan Dias, Ph.D. Chemical Engineering, King’s College, Cambridge University. Senior scientist at Hemas Holding, but does not seem to do any scientific research. Founder and President of Cultura Vitae, the pro-life movement in Sri Lanka.
- James Robert Dickens, Ph.D. Mechanical Engineering, Texas A&M University. No affiliation or research found.
- Lawrence Dickson, Ph.D. Mathematics, Princeton University. Involved in computer science and may have some low-tier publications. Also political activist who contributes to the Culture Wars magazine (formerly Fidelity magazine, a fundamentalist Catholic magazine), author of the self-published The Book of Honor, and involved in various political protests.
- Gary Dilts, Ph.D. Mathematical Physics, U. of Colorado. Currently at Los Alamos National Laboratory. Does research in unrelated fields.
- Robert DiSilvestro, Ph.D. Biochemistry, Texas A&M University; Professor of Nutrition at Ohio State University. Hardcore creationist[63] affiliated with the Christian Leadership Ministries, where he for instance publishes materials (containing all the standard creationist canards) so that students can challenge their biology teacher[64] (no, he doesn’t do research, he helps students to Jesus). Testified for the creationists during the Kansas evolution hearings. Has publications in an unrelated field.
- Daniel Dix, Associate Professor of Mathematics, University of South Carolina. Has publications in unrelated fields, though he claims that his skepticism of evolution arose through studying the structure of biological molecules.[65] Also member of Dembski’s International Society for Complexity, Information, and Design.
- Allison Dobson, Assistant Professor, Chemistry, Georgia Southern University. Has done work on an unrelated topic (crystallography).
- Francis M. Donahue, Professor Emeritus, Chemical Engineering, University of Michigan. Has a few publications in an unrelated field, but little from the last 25 years.
- Alistair Donald, Ph.D. Environmental Science/Quaternary or Pleistocene Palynology, University of Wales. Not a working scientist but a Church of Scotland clergyman. Contributed a chapter to Norman Nevin’s creationist tract “Should Christians Embrace Evolution?”
- Kenneth Dormer, Ph.D. Biology & Physiology, University of California, Los Angeles. Currently at the University of Oklahoma, College of Medicine. Involved in research, though in unrelated fields.
- John Doughty, Ph.D. Aerospace & Mechanical Engineering, University of Arizona. Teaches at the fundamentalist Noah Webster College and member of the Board of Directors of the Creation Science Fellowship of New Mexico; formerly president of Albuquerque Bible College. Claims to have been converted to creationism by Henry Morris’s claims about thermodynamics,[66]. May have a very few, old, unrelated publications (nothing less than 20 years old located).
- James F. Drake, Ph.D. Atmospheric Science, University of California, Los Angeles. Currently (it seems) at the University of Maryland, where he does research in unrelated fields. Also apparently a signatory to the Oregon Petition, and his name appears on James Inhofe’s list of scientists whose work (according to Inhofe) refutes AGW.
- Scott T. Dreher, Ph.D. Geology, University of Alaska, Fairbanks. Currently at the University of Durham; researcher in the unrelated fields of isotopic geology and petrology.
- Jeanne Drisko, Clinical Assistant Professor of Alternative Medicine, University of Kansas, School of Medicine. Developed the Program in Integrative Medicine at the Kansas University School of Medicine, and has been instrumental in developing research projects in the area of CAM therapies. Involved in chelation therapy. Has a page on Quackwatch,[67] which emphasizes her thoroughly anti-scientific outlook. Has no understanding, aptitude, or sympathy for science.
- James O. Dritt, Ph.D. Civil Engineering & Environmental Science, University of Oklahoma. Religious fundamentalist[68] and hardcore creationist.[69] Appears to have worked on manuals on industrial energy efficiency; does not appear to have or have had any academic affiliation, or to have been involved in research.
- Tim Droubay, Ph.D. Physics, University of Wisconsin-Milwaukee. Appears to be associated with Pacific Northwest National Laboratory. Real scientist in an unrelated field.
- Jan Frederic Dudt, Associate Professor of Biology, Grove City College (a small fundamentalist, generally unaccredited Christian institution dedicated to fighting taxes[70]); also Coordinator of the Center of Vision and Values, a rightwing think tank. Opposed to stem cell research because it allegedly conflicts with his Christian beliefs and because other treatments are effective in other settings.[71] His publication record is limited to two papers from the early 90s completely unrelated to evolution; does not appear to be a working scientist.
- Karl Duff, Sc.D. Mechanical Engineering, MIT. Young earth creationist who has written plenty of creationist and anti-evolutionary screeds.[72] Also on the Flood Science Review panel for In Jesus’ Name Productions,[73] who apparently wants to make a movie about the Flood that “could have historic impact […] if the science upon which it is based can be sufficiently defended. It could even represent a significant challenge to the validity of the theory of Evolution.” Duff is not a scientist, and has no academic affiliation. Instead he is the author of books such as “Dating, Intimacy, and the Teenage Years”.[74]
- W. John Durfee, Assistant Professor of Pharmacology, Case Western Reserve University. DVM (a professional, not a research degree; does not appear to be involved in research).
- David Van Dyke, Ph.D. Analytical Chemistry, University of Illinois, Urbana. Currently Analytical Chemist at Environmental Regulatory Compliance. Appears to have a few publications in unrelated fields.
- Fred Van Dyke, Professor of Biology and Chair of the Biology Department, Wheaton College (Illinois), a Christian liberal arts college. Also Executive Director of the Au Sable Institute, and important champion of evangelical initiatives to take global warming seriously. Notoriously slippery when it comes to his position on evolution,[75] though he has argued that theistic evolution is problematic from a religious perspective.[76] At least he is open about the fact that his dissent is religiously, not scientifically, based. Has done some (apparently unrelated) research on wildlife, and has several publications and books on evangelical theology.
- David W. Dykstra, Ph.D. Computer Science, University of Illinois, Urbana-Champaign. No present affiliation or research found.
[edit] E
- Joe R. Eagleman, Professor Emeritus, Department of Physics & Astronomy, University of Kansas. Known authority on tornados but more recently signatory to the Oregon Petition mostly quoted as claiming that global warming is “way overplayed” and nothing to worry about.[77]
- Cris Eberle, Ph.D. Nuclear Engineering, Purdue University. Currently Nuclear Engineer & Health Physicist at Pearl Harbor Naval Shipyard & IMF. No research since his educational years located.
- Marcos N. Eberlin, Professor, The State University of Campinas (Brazil). Appears to have contributed to some papers in unrelated fields (chemistry). Also on the editorial board of BIO-Complexity. Claims that Intelligent Design is a real, rigorous scientific theory formed to accommodate the data[78] (which, of course, is not how science works). Claims to be persecuted for supporting ID in Brazil.[79]
- Robert Eckel, Professor of Medicine, Physiology & Biophysics, University of Colorado Health Sciences Center. Also signatory to the CMI list of scientists alive today who accept the biblical account of creation. Has publications in unrelated fields. Thinks that evolution is just a theory, and that this immediately entails that the Biblical account of creation is equally valid,[80] that creationists and scientists just interpret the same data just under different Worldviews (overlooking the fact that creationists generally reject the data), and that evolutionists believe what they do because they haven’t accepted Jesus.
- William A. Eckert III, Ph.D. Cell & Molecular Biology, University of North Carolina, Chapel Hill. Seems to be a legitimate scientist; he does not appear to work on evolution, though he does have some expertise on cell biology.
- Seth Edwards, Associate Professor of Geology, University of Texas, El Paso. No research or current affiliation found.
- Michael Egnor, Professor and Vice-Chairman, Dept. of Neurological Surgery, State U. of New York at Stony Brook. Promoter of neuro-woo, non-materialist neuroscience and dualism, and writer for the Discovery Institute's online newsletter Evolutionary News and Views.
- Lee Eimers, Professor of Physics & Mathematics, Cedarville University (the faculty is committed to a literal, “grammatical/historical” interpretation of the Bible). Eimers has no academic publications, and cannot reasonably be called a “scientist”. Also signatory to the CMI list of scientists alive today who accept the biblical account of creation.
- D.R. Eiras-Stofella, Director, Electron Microscopy Center (Ph.D. Molecular Biology), Parana Federal University. Seems to be involved in real research, and may as such be one of few signatories with even remotely relevant credentials.
- Jonathan D. Eisenback, Professor of Plant Pathology, Dept. of Plant Pathology and Weed Science, Virginia Tech. Publishes in tangentially related fields; none of the research seems to challenge evolution, however.
- George A. Ekama, Professor, Water Quality Engineering, Dept of Civil Engineering, University of Cape Town. Does some research in a completely unrelated field (sludge processes).
- James A. Ellard, Sr., Ph.D. Chemistry, University of Kentucky. Google returns a single coauthored 1957 paper. No updated information, affiliation, or research found.
- David L. Elliott, Chair, Div. of Nat. Sciences/Mathematics, Louisiana College. Has some unrelated research (water-soluble copolymers).
- Daniel Ely, Professor, Biology, University of Akron. Hardcore intelligent design activist who testified during the evolution hearings – his colleagues at Akron later published a letter decrying Ely’s misconceptions and misrepresentations.[81] Has a decent research record but though he is a professor in biology, Ely has, contrary to what his own claims may suggest, no formal training in evolution, and he has done no research on the subject.
- Martin Emery, Ph.D. Chemistry, University of Southampton. No research or academic affiliation found.
- Don England, Professor Emeritus of Chemistry, Harding University, a conservative religious school. Retired. Formerly an Elder of the College Church of Christ and has written several religious books (including “A Christian View of Origin”). Seems to have done some research in an unrelated field in the early 60s (Google Scholar returns nothing else).
- Thomas English, Adjunct Professor of Physics & Engineering, Palomar College. Outspoken supporter of Intelligent Design, and has contributed to a variety of Dembski-related productions, such as the Proceedings of the 2000 Congress on Evolutionary Computation (a creationist gathering). No other trace of research found.
- Richard Erdlac, Ph.D. Structural Geology, University of Texas (Austin). Has mostly worked in the private sector and with government contracts.. Web of Science and Google Scholar list twelve publications in the last thirty years, with the most recent in 1994. The Clean Technology Conference and Expo lists him as an energy consultant.[82]
- Bruce Evans, Ph.D. Neurobiology, Emory University. Currently professor of Biology and Department Chair at Huntington University (a Christian liberal arts college that has let Evans teach an EXCEL class on the Origins of Life[83]). Hardcore Intelligent Design proponent (and Sunday school teacher) who was on the board of reviewers for Explore Evolution. Does have a few (old) publications in unrelated fields. Also a signatory to Rethinking AIDS, a list of HIV “skeptics”.
- William Everson, Ph.D. Human Physiology, Penn State College of Medicine. No reliable information or research found.
- Donald Ewert,[84] Ph.D. Microbiology, University of Georgia. Used to operate a research laboratory at the Wistar Institute. Currently he lectures about creationism to anyone who will listen. His talks and work consist primarily of denial, raw assertions, and misunderstandings.[85] Has some publications in not entirely unrelated fields, however, though even his coauthors deny that the publications show what Ewert claims they show. Also testified before the Texas Board of Education during their 2009 Evolution hearings.
[edit] F
- Pamela Faith Fahey, Ph.D. Physiology & Biophysics, University of Illinois. Co-author (with Ann Gauger, Stephanie Ebnet, and Ralph Seelke) of the paper “Reductive Evolution Can Prevent Populations from Taking Simple Adaptive Paths to High Fitness,” published in the Biologic Institute’s house journal BIO-Complexity and touted by the Discovery Institute as one of the scientific publications supportive of intelligent design.[86] Works with Cedar Park Church. No other affiliation or research found.
- Ferenc Farkas, Ph.D. Applied Chemical Sciences, Technical University of Budapest. No further information located.
- Kevin Farmer, Adjunct Assistant Professor, University of Oklahoma. Listed as having “Ph.D., Scientific Methodology”; the university website, however, lists him as having a PhD in Pharmacy Administration. Those are, needless to say, not the same, though Farmer's confusion about that may go some way toward explaining how Farmer ended up on the Discovery list. Well-known creationist who has appeared on Christian TV programs with standard Kent Hovind arguments against evolution. Has nevertheless contributed to some real publications, in a completely unrelated field.
- Marco Fasoli, Ph.D. Biochemistry, University of Cambridge. Co-founder and Managing Partner at TITIAN Global Investments. No academic affiliation or research since his study days found. Does not appear to be a scientist.
- Abraham S. Feigenbaum, Ph.D. Nutritional Biochemistry, Rutgers University. No research younger than 45 years found.
- Denis Fesenko, Junior Research Fellow, Engelhardt Institute of Molecular Biology. Google scholar returns a contribution to a single paper.
- Wildon Fickett, Ph. D, Chemistry, Caltech. No updated information or recent research found.
- Steve D. Figard, Ph.D. Biochemistry, Florida State University. Immunologist. Currently at Abbott Laboratories. No publications younger than 25 years found.
- Dave Finnegan, Staff Member, Los Alamos National Laboratory. Has done research in an unrelated field.
- Hannes Fischer, Ph.D Molecular Biology, University of Pennsylvania. No reliable updated information located.
- James Florence, Associate Professor, Dept. of Public Health, East Tennessee State U. Has some research in unrelated fields.
- Margaret Flowers, Professor of Biology, Wells College. Retired. May have an old publication or two in low-tier journals.
- Andrew Fong, Ph.D. Chemistry, Indiana University. Team Leader at the FDA. Appears to do real research, in an unrelated field.
- David W. Forslund, Ph.D. Astrophysics from Princeton University. Currently laboratory fellow at Los Alamos National Laboratory, and seems to be involved in some real research in an unrelated field.
- Mike Forward, Ph.D. Applied Mathematics, Imperial College, University of London. Google returns nothing apart from this list.
- Mark Foster, Ph.D. Chemical Engineering, U. of Minnesota. Currently Professor at U. of Akron; real scientist in an unrelated field.
- Clarence Fouche, Professor of Biology, Virginia Intermont College (the only full-time biology professor at that institution). Seems to have published two papers, the latest in 1977. Hardly a working scientist.
- James T. Fowler, Ph.D. Mathematics, University of Durham. Currently Information Technology Specialist at Durham University, which is not a research position. No research record located. Not a scientist.
- Joseph Francis, Associate Professor of Biology, Cedarville University. A young earth creationist who has contributed several articles to Answers in Genesis’s house journal Answers. Does not appear to have done any serious research in biology.
- Luis Paulo Franco de Barros, D.Sc. Mechanical Engineering, Pontificia Universidade Católica. No research or current affiliation found.
- Douglas G. Frank, Ph.D. Surface Electrochemistry, University of Cincinnati. Currently consultant at Precision Analytical Instruments Inc. A few unrelated papers found; none from the last 25 years.
- Kenneth French, Chairman, Division of Natural Science, Blinn College (on their curriculum committee). No research found; does not seem to be a scientist.
- John R. Fritch, Ph.D. Chemistry, University of California Berkeley. No current information or research less than 30 years old located.
- Marvin Fritzler, Professor of Biochemistry & Molecular Biology, University of Calgary Medical School. Has a respectable research record in medicine, mostly on immune responses.
- Ian C. Fuller, Senior Lecturer in Physical Geography, Massey University. An elder of the Grace Reformed Baptist Fellowship who has been active in attempts to get creationism into UK school curricula. Young Earth Creationist who has written for Origins, the journal of the Biblical Creation Society; does nevertheless appear to do research. Also signatory to the infamous 2002 Estelle Morris letter.[87]
- Mark Fuller, Ph.D. Microbiology, University of California, Davis. No reliable affiliation or information (or research) located.
- Scott R. Fulton, Ph.D. Atmospheric Science, Colorado State University. Currently professor of mathematics and computer science at Clarkson University; publishes in unrelated fields. Said that the argument for intelligent design was “very interesting and promising,” and that his religious belief was “not particularly relevant” to how he judged intelligent design, even though he emphasized that “[w]hen I see scientific evidence that points to God, I find that encouraging.”
- Noel Funderburk, Ph.D. Microbiology, University of North Texas. On the board of directors of Training Evangelistic Leadership, an aggressive missionary organization. Young earth creationist who thinks that the Grand Canyon is obvious evidence for the Flood and a literal interpretation of the Bible: “each time I [visit the Grand Canyon] am amaized [sic] that geologists are so blinded that they cannot see the evidence.”[88] Not a scientist.
[edit] G
- Edward Gade, Professor Emeritus of Mathematics, University of Wisconsin, Oshkosh. No research found.
- Sandra Gade, Emeritus Professor of Physics, University of Wisconsin, Oshkosh. No research found. In 2006 she started a petition drive to ask the Oshkosh school board for an “advisory referendum” requesting that students learn evidence for and against evolution,[89] because it is all about teaching students about Jesus rather than, you know, trying to do any research on the issue. “The way evolution is being taught is antagonistic to students’ religious beliefs,” said Gade, and therefore a violation of the First Amendment, asserting that Wisconsin students are being brainwashed.[90]
- Daniel Galassini, Doctor of Veterinary Medicine, Kansas State University. A professional degree rather than a research degree.
- Weimin Gao, Microbiologist, Brookhaven National Laboratory. Currently Assistant Professor, Molecular Epidemiology at Texas Tech. Real scientist.
- Charles Garner, Professor of Chemistry, Baylor University. Flak for the Discovery Institute. Google scholar returns a few (co-authored) papers unrelated to evolution. Claims elsewhere that evolution does not and cannot have observable support, a point he included e.g. in his testimony before the Texas Board of Education.
- John Garth, Ph.D. Physics, University of Illinois, Champaign-Urbana. No research record found. Currently Board Member of the Creation Science Fellowship of New Mexico.
- George A. Gates, Emeritus Professor of Otolaryngology-Head and Neck Surgery, University of Washington. Real scientist (retired). Has said that “I simply affirm that science and religion work in parallel magisteria and each has much to learn from the other," and that “[m]ost of creationism and ID are funded privately because they don't qualify as science," adding that he is skeptical of ID and creationism.[91]
- Ann Gauger,[92] Ph.D. Zoology, University of Washington. Did real and possibly relevant research during her postdoc at Harvard. Currently affiliated with the Biologic Institute. In 2011 she got a paper (co-authored with Douglas Axe) published in the institute’s home journal BIO-Complexity, which, according to her, disproves evolution (and is demolished here). Its most notable trait is its complete misrepresentation of the papers cited by the authors.[93] Coauthor of Science and Human Origins.
- Theodore W. Geier, Ph.D. Forrest Hydrology, University of Minnesota. No affiliation or information found (apart from a single 1994 paper).
- Mark Geil, Ph.D. Biomedical Engineering. Ohio State U. Associate prof. at Georgia State U. Does real research in an unrelated field.
- Jim Gibson, Ph.D. Biology, Director of the Geoscience Research Institute, Loma Linda University, a Seventh Day Adventist front organization which pushes a creationist agenda in Earth sciences. Gibson’s article “Did Life Begin in the ‘RNA World?’” was a huge inspiration for Ray Comfort, who based a chapter (“From Dust to Dust”) on it in his book Nothing Created Everything: The Scientific Impossibility of Atheistic Evolution. Does not appear to be involved in scientific research.
- Maciej Giertych, Professor, Institute of Dendrology, Polish Academy of Sciences. Also on the CMI list of scientists alive today who accept the biblical account of creation, where he is listed as “Geneticist”. Most famous in the US, perhaps, for his incoherent letter to Nature[94] in which he proved himself a master of the Gish gallop. Giertych is perhaps the most prominent pushers for including creationism in education in Poland. Also a notable and vocal anti-Semite,[95] who has claimed that Jews are dishonest because they don’t recognize “Jesus Christ as the awaited Messiah.” In an address to the European Parliament he praised Francisco Franco, António de Oliveira Salazar and Éamon de Valera as guardsmen of traditional European values.[96] Interviewed in Expelled.
- Stephan J. G. Gift, Professor of Electrical Engineering, The University of the West Indies. Serious crackpot. Well-known in Trinidad as an outspoken creationist and opponent of Big Bang cosmology and relativity,[97] who claims to have proved Einstein wrong. His homepage will give you his papers allegedly refuting relativity, but no indication of actual research.
- William Gilbert, Emeritus Professor of Biology, Simpson College. At least at some point President of Iowa Academy of Science. No other information or research found.
- James Gilchrist, Ph.D. Physics, University of Texas, Austin. No information found.
- Thomas D. Gillespie, Research Professor Emeritus Transportation Research Institute, University of Michigan. University of Michigan. Has done research in unrelated fields. Consultant to George W. Bush’s science advisor Allan Bromley.
- Warren Gilson, Associate Professor, Dairy Science, University of Georgia. No research located.
- Jeffrey M. Goff, Associate Professor of Chemistry, Malone College; currently Chair, Department of Natural Sciences, where the mission contains the goal that “students should be able to apply the principles of Christian Stewardship to biological practice and interpret biological phenomena within a Christian worldview.” Appears to be minimally involved in research (Google Scholar give him as first author of a single 2001 paper).
- Steven Gollmer, Ph.D. Atmospheric Science, Purdue University. Currently at Cedarville University. Declares that “[o]ur approach to science and origins is based on the presupposition that our highest and ultimate authority is the unchanging Word of God.” Active in creationist attempts to impose lesson plans on the Ohio State Board of Education,[98] and also signatory to the CMI list of scientists alive today who accept the biblical account of creation.
- John R. Goltz, Ph.D. Electrical Engineering, U. of Arizona. Involved with CompuServe. No research or present affiliation found.
- Manuel Garcia-Ulloa Gomez, Director of Marine Sciences Laboratory, Autonomous University of Guadalajara. Seems to have done some research on aquaculture (mostly in Spanish).
- Teresa Gonske, Assistant Professor of Mathematics, Northwestern College. No research located.
- Guillermo Gonzalez,[99] Associate Professor of Astronomy, Iowa State University, and touted by the Discovery Institute as one of their “clearest” examples of a victim of evil Darwinist conspiracy in academia. One of the main characters in Expelled: No Intelligence Allowed. Currently Senior Fellow at the Discovery Institute. Currently at Grove City College, an evangelical Christian school. Published some good papers early in his career, but does not seem to have done much research the last 10 years. Also a climate change denialist.
- Michael T. Goodrich, Professor of Computer Science, University of California, Irvine, Associate Dean for Faculty Development in the Donald Bren School of Information and Computer Sciences, and Technical Director for the ICS Secure Computing and Networking Center. Seems to have serious credentials in his field, which is not biology.
- Bruce L. Gordon, Ph.D. Philosophy of Physics, Northwestern University. Research Director at the Discovery Institute’s Center for Science and Culture, Fellow of the International Society for Complexity, Information and Design, and a central figure in the ID movement. Formerly William Dembski’s companion at the controversial ID center at Baylor University[100][101] and participant at the Biological Information: New Perspectives conference in 2011. Currently at King’s College, NY, a small fundamentalist college. No research found.
- Chris Grace, Associate Professor of Psychology, Biola University. Has published extensively in religious magazines and journals such as “Journal of Theology and Psychology”, e.g. on evolutionary psychology and intelligent design, and appears to be one of the founders of what he calls “intelligent design psychology” (to replace the evolutionary psychology, evidently). No real research or scientific work found apart from a co-authored 1988 paper. Does in other words not appear to be a scientist.
- Robert J. Graham, Ph.D. Chemical Engineering, Iowa State University. No information found.
- Giulio D. Guerra, First Researcher of the Italian National Research Council (Chemistry), Istituto Materiali Compositi e Biomedici, CNR. Appears to be a respectable scientist, though his research seems not to touch on evolution.
- Thomas G. Guilliams, Ph.D. Molecular Biology, The Medical College of Wisconsin. Currently VP/Director of Science and Regulatory Affairs, Ortho Molecular Products, Inc. Listed as an “integrative practitioner” at integrativepractitioner.com, an online community of practitioners of woo. Has some publications that seem legitimate (in unrelated fields), but is also the author e.g. of “The Original Prescription: How the Latest Scientific Discoveries Can Help You Leverage the Power of Lifestyle Medicine”. His understanding of science and scientific methods is thus debatable.
- Richard Gunasekera, Ph.D. Biochemical Genetics, Baylor University. Currently Adjunct Professor and Resource Fellow, Missiology and Science, at the College of Biblical Studies. Gives lectures in various religious venues, and has been involved in the Ratio Christi Student Apologetics Alliance lecture series at Texas A&M University together with familiar creationists and intelligent design promoters.[102] Has some publications, but they are completely unrelated to evolution.
- James Gundlach, Associate Professor of Physics, John A. Logan College. No research located.
- Graham Gutsche, Emeritus Professor of Physics, U.S. Naval Academy. No research since the 1960s found.
[edit] H
- David Hagen, Ph.D. Mechanical Engineering, University of Minnesota. Used to work for Ford; retired. No research record found.
- Dan Hale, Professor of Animal Science, Texas A&M University. No publication record located.
- Dominic M. Halsmer, Ph. D. Mechanical Engineering, UCLA. Currently Professor of Engineering and Dean of the College of Science and Engineering at Oral Roberts University, where “he is studying how the universe is engineered to reveal the glory of God and accomplish His purposes.” No serious research found, but lots of attempts to apply engineering concepts in support of intelligent design published in various online venues. Can hardly be counted as a scientist.
- Mubashir Hanif, Ph.D. Plant Biology, University of Helsinki. Has done some research; hard to determine how relevant it is.
- William Hankley, Professor of Computer Science, Kansas State University. Retired. Has some publications in an unrelated field.
- Donald J. Hanrahan, Ph.D. Electrical Engineering, U. of Maryland. No affiliation/research found (apart from some 1960s online documents).
- Israel Hanukoglu, Professor of Biochemistry and Molecular Biology Chairman, The College of Judea and Samaria (Israel). A real scientist with a respectable research record. Former Science and Technology Adviser to Prime Minister of Israel Benjamin Netanyahu (1996–1999), and founder of Israel Science and Technology Homepage.
- James Harbrecht, Clinical Associate Professor, Division of Cardiology, University of Kansas Medical Center. Two (unrelated) research publications located, the newest from 1991. MD; does not appear to be a working scientist.
- James Harman, Associate Chair, Dept. of Chemistry & Biochemistry, Texas Tech University. Currently Associate Professor at South Plains College. Has a biology background, but seems to do little research at all, and none of it is relevant to the issue at hand.
- William Harris, Ph.D. Nutritional Biochemistry, University of Minnesota; managing director of the Intelligent Design Network, which was responsible for creating the Kansas Kangaroo Court,[103] and at least leaning toward young earth creationism.[104] He has, apparently, done some research on nutrition and heart disease, and gained some notoriety from a study purporting to show that prayer could help people suffering from heart disease. It was easily shown to be bunk,[105] if anyone wondered. Does apparently struggle with the idea that science has to do with observation.[106]
- †A.D. Harrison, Emeritus Professor of Biology, University of Waterloo
- William B. Hart, Assistant Professor of Mathematics, University of Illinois at Urbana-Champaign. Currently Research Fellow at the University of Warwick. Has a real research record, in an unrelated field.
- Jeffrey H. Harwell, Ph. D. Chemical Engineering, University of Texas, Austin. Currently professor at the University of Oklahoma. Has a decent research record in an unrelated field.
- Richard Hassing, Ph.D. Theoretical Physics, Cornell University. Currently Associate Professor of Philosophy at the Catholic University of America. Has a few low-tier publications in unrelated fields, mostly defenses of Leo Strauss and attacks on Darwinism in favor of a conservative, Christian interpretation of Natural Law; or, in other words, a presuppositionalist defense, premised on rejecting science, of a position no serious philosopher (as opposed to fundamentalist theologian) has defended since Medieval times.
- James Pierre Hauck, Professor of Physics & Astronomy, University of San Diego. Retired; now “legal consultant and lapidarist”. May have done a little research in an unrelated field.
- Paul Hausgen, Ph.D. Mechanical Engineering, Georgia Institute of Technology. Appears to have done some unrelated research.
- Oleh Havrysh, Senior Research Assistant, Protein & Peptide Structure & Function, Dept. Institute of Bioorganic Chemistry & Petrochemistry, Ukrainian National Academy of Sciences. No further information located.
- Curtis Hawkins, Asst. Clinical Professor of Dermatology, Case Western Reserve Univ. School of Medicine. Google Scholar returns a few (unrelated) research papers from the early 80s.
- Russell C. Healey, Ph.D. Electrical Engineering, University of Cambridge. On the Council of Reference for the British creationist organization Truth in Science. According to said organization Healey was formerly a Fellow of Selwyn College and Lecturer in the Engineering Department at Cambridge University, but that he “now teaches mathematics at a leading (unnamed) independent school” (probably Loughborough Grammar School).[107]
- Walter Hearn, Ph.D. Biochemistry, University of Illinois. Seems to have been involved in some research some point, but not for the last 40 years. Currently freelance editor for various Christian publishers. Author of “Being a Christian in Science”, where he discusses how religion is a useful tool for scientists, even though he is himself not a scientist.
- William J. Hedden, Ph.D. Structural Geology, Missouri University of Science & Technology. No current affiliation or research found.
- David Heddle, Ph. D. Physics, Carnegie Mellon University. Associate professor at Christopher Newport University, and has written the novel “Here, eyeball this” in which he defends his version of intelligent design; Google Scholar returns a decent amount ofscientific research.[108] A proponent of cosmological intelligent design (e.g. the fine-tuning argument), Heddle has generally been skeptical of Intelligent design in biology. Has been in disagreement with Dembski, for instance, which resulted in Dembski booting him from his blog – teaching Heddle the hard way that the Intelligent Design movement is about public relations, not discussions of science.[109] In February 2010 he requested his name be removed from the list,[110] as he is now a supporter of theistic evolution.
- Timothy H. Heil, Ph.D. Computer Engineering, University of Wisconsin, Madison. Has done some research in an unrelated field.
- Daniel W. Heinze, Ph.D. Geophysics from Texas A&M University. Currently board member of Vibrant Dance, which “inspire, educate, and unify pastors, scientists and others with the growing congruence of scientific discovery and the Christian faith!” Not a scientist.
- Christian Heiss, Post-Doctoral Associate, Complex Carbohydrate Res. Ctr., Univ. of Georgia. Does research in an apparently unrelated field. Also signatory to the apparently Discovery Institute initiated Amicus Brief supporting Intelligent Design in the Kitzmiller v. Dover.
- Larry S. Helmick, Senior Professor of Chemistry, Cedarville University (a fundamentalist, young earth creationist Bible institution). Affiliated with the Creation Research Society. Has some publications, though many of the ones he list on his record are publications e.g. in the Creation Research Society Quarterly, for instance on Flood geology. Lists “the search for Noah's Ark” as one of his main research interests. Also signatory to the CMI list of scientists alive today who accept the biblical account of creation.
- Barbara S. Helmkamp, Ph.D. Theoretical Physics, Louisiana State University. . Young earth creationist who has written some online documents arguing against “the myth of evolution” targeted at children;[111] after all, creationism has nothing to do with science, but with religious outreach. Claims creationism is a much more explanatorily powerful hypothesis than Big Bang or evolution, since God can do anything. Currently teaching physics and chemistry at Credo Academy, a homeschool co-op in Denver. Not a scientist by a long shot.
- Olivia A. Henderson, Ph.D. Pharmaceutics, University of Missouri, Kansas City. No current affiliation, research, or information located.
- R. Craig Henderson, Associate Prof., Dept. of Civil & Environmental Engineering, Tennessee Tech U. Does research in unrelated fields.
- Kurt J. Henle, Professor Emeritus, University of Arkansas for Medical Sciences. Was involved in medical research back in the days, but retired early to found and lead the Holy Trinity Anglican Church. Currently an M.Div and pastor with no academic affiliation.
- Hugh L. Henry, Lecturer (Ph.D. Physics), Northern Kentucky University. No research or information found.
- John K. Herdklotz, Ph.D Physical Chemistry, Rice University. Chief Executive Officer of Telor Ophthalmic Pharmaceuticals, Inc. Contributed to two papers in 1970. Does not appear to be a scientist.
- David W. Herrin, Research Assistant Professor in Mechanical Engineering, U. of Kentucky. Appears to be a real researcher on acoustics.
- Nolan Hertel, Professor, Nuclear & Radiological Engineering, Georgia Institute of Technology. Does research in an unrelated field.
- Joel D. Hetzer, Ph.D. Statistics, Baylor University. No current affiliation or research found.
- John Hey, Associate Clinical Prof., Dept. of Family Medicine, University of Mississippi. Has a professional doctorate. Also teaching elder of Grace Bible Church. No research record found.
- A. Clyde Hill, Ph.D. Soil Chemistry, Rutgers U. May have some unrelated research from the 60s and 70s. No recent information found.
- Miroslav Hill, Former Director of Research, Centre National de la Recherche Scientifique. Used to be a respectable scientist and researcher (was among the lab researchers who established the life cycle of retroviruses). Then he went crackpot; currently he appears to be sympathetic to something reminiscent of Rupert Sheldrake’s Morphic resonance theory, suggesting that “quantum systems” are responsible for much of adaptations at a cellular level through what some call “entangled learning”.[112]
- Roland Hirsch, Ph.D. Analytical Chemistry, University of Michigan. Appears to have done real research in an unrelated field. Fellow at the International Society for Complexity, Information, and Design, and writes for William Dembski’s blog Uncommon Descent.
- Mae-Wan Ho, Ph.D. Biochemistry, University of Hong Kong. A known critic of genetic engineering (in fact, a hardcore doomsday prophet) through opinion pieces (not peer-reviewed research).[113] Has a real research background, however. Listed on Rethinking AIDS's list of HIV “skeptics”, though it is unclear to what extent she would agree to this listing herself. Nevertheless she entertains a serious presence at whale.to. Her signature here should probably be viewed in light of these facts.
- Dewey Hodges, Professor, Aerospace Engineering, Georgia Institute of Technology. Seems to be a serious researcher in his field, which has nothing to do with biology. Also member of the hate group American Family Association and the Creation Research Society[114]
- John G. Hoey, Ph.D. Molecular and Cellular Biology, City University of New York Graduate School. Currently owner of Integra BioCompliance, LLC and Laboratory Operations Consultant with The Quantic Group, Ltd. Does have a few research publications in a not completely unrelated field; nevertheless believes that “the theory of evolution and similar ideas designed to explain away the existence of a Creator are little more than fairytales.”
- John L. Hoffer, Professor of Engineering; Texas A&M University College of Engineering; (also) Professor of Anesthesiology, Texas A&M Univ. Syst. Health Science Center. MD; no information or research located (the name is not found on the Texas A&M homepages).
- Justin Holl, Ph.D. Animal Science, University of Nebraska, Lincoln. Currently Genetic Improvement Specialist at Genus. Appears to have some not entirely unrelated research, though none of it appears to challenge evolution.
- Jay Hollman, Assistant Clinical Professor of Cardiology, Louisiana State University Health Science Center. Also on the list of Physicians and Surgeons for Scientific Integrity (a.k.a. Doctors Doubting Darwin).[115] Has contributed to some research papers in unrelated fields.
- Bruce Holman, III, Ph.D. Organic Chemistry, Northwestern University. Founded the chemistry department at Wisconsin Lutheran College and active in the Creation Science Society of Milwaukee. Has given talks in favor of creationism, and contributed to various creationist study materials,[116] but seems to have no scientific publications from the last 30 years.
- Peter William Holyland, Ph.D. Geology, U. of Queensland. Owns Terra Sancta Inc.; no updated information or research found.
- Barry Homer, Ph.D. Mathematics, Southampton University. Works with computer security; no science or research connection found.
- Liang Hong, Associate Professor, Dept. of Dental Public Health & Behavioral Science, U. of Missouri, Kansas City. Also on the list of Physicians and Surgeons for Scientific Integrity (a.k.a. Doctors Doubting Darwin).[117] Involved in a smattering of unrelated research.
- Gary Hook, Ph.D. Environmental Science, Uniformed Services University of the Health Sciences. Seems to do real (unrelated) research.
- Chrystal L. Ho Pao Assistant Professor of Biology, Trinity International University (a fundamentalist institution “founded on the cornerstone belief that all wisdom lies in Jesus Christ”). Does have some publications, though none appear to be related to evolution.
- Marko Horb, Ph.D. Cell & Developmental Biology, State University of New York. Research Unit Director, Molecular Organogenesis, Clinical Research Institute of Montreal. Appears to have some not entirely unrelated publications, but none that seem to challenge evolution.
- Barton Houseman, Emeritus Professor of Chemistry, Goucher College. Google scholar returns a few unrelated papers from the 60s.
- Daniel Howell, Ph.D. Biochemistry, Virginia Tech. Author of “The Barefoot Book” and best known for his advocacy of barefoot running and barefoot living. Staunch creationist. Currently Associate Professor of Biology at Liberty University.
- Gerald E. Hoyer, Retired Forrest Scientist, Washington State Department of Natural Resources. Has done some unrelated research.
- Curtis Hrischuk, Ph.D. Electrical Engineering, Carleton University. Currently at IBM. Has some publications in unrelated fields.
- Joel D. Hubbard, Associte Professor, Dept. of Lab. Science and Primary Care, Texas Tech University Health Sciences Center. Does research in unrelated fields.
- Neil Huber, Dr. rer. nat. (Ph.D. Anthropology), Tuebingen University. Biblical literalist; originally associated with Wisconsin State University, but renounced science in 1990. Currently his view is “to start with the assumption of the authority of the Bible, looking at all the evidence that it presents for trusting it. Then build your science from there, based upon the Bible’s truth.”[118] Currently affiliated with the Imago Dei Institute, a Bible college. Not presently a scientist, but even from his years in Wisconsin Google Scholar returns only a single 1969 paper. Also signatory to the CMI list of scientists alive today who accept the biblical account of creation.
- Susan L.M. Huck, Ph.D. Geology/Geography, Clark University. Neither Google Scholar or Web of Science lists papers by this person, though there are several papers and books related to various political issues, such as “Narcotics: The Communist Drug Offensive”, which have been cited several times over at whale.to. No current affiliation found.
- Doug Hufstedler, Ph.D. Animal Nutrition, Texas A&M University. Currently Beef Cattle Technical Consultant at ELANCO. Google scholar returns two papers from the mid-90s, concerned with questions unrelated to the topic at hand (lamb feeding).
- James A. Huggins, Chair, Dept. of Biology & Dir., Hammons Center for Scientific Studies, Union University (a fundamentalist (Southern Baptist) institution). Also Pastor at Unity Baptist Church, Chester County, and signatory to the CMI list of scientists alive today who accept the biblical account of creation. According to the University website[119] he “prays with students in each class as well as when they come to him for advising.” Does have a few low-tier publications on wildlife ecology but nothing that touches on evolution.
- Charles E. Hunt, Prof. of Electrical & Computer Engineering/Design, U. of California, Davis. Does real research, in an unrelated field.
- Cornelius Hunter, Ph.D. Biophysics, University of Illinois; adjunct professor of biophysics at Biola University. Fellow at the Discovery Institute who maintains the website Darwin's Predictions, where he purportedly shows that the theory of evolution has been falsified with notoriously bad arguments and misleading claims. Does not understand evolution, and displays little understanding of science or its methodology.[120] Scant evidence of real research found. Has bought hardcore into the “Darwin critics are persecuted” myth[121], and has claimed that the 2005 Kitzmiller v. Dover case “was a disaster for evolution” since “evolutionists paid a […] cost which can’t be measured in dollars. They gave up their soul.”[122]
- Seyyed Imran Husnain, Ph.D. Bacterial Genetics, University of Sheffield. Has contributed to three or four papers on bacteriology. Cannot find any current academic affiliation.
- Wolfgang Hutter, Ph.D. Chemistry, University of Ulm. Currently Gemeindeleiter der Freien Christengemeinde Ecclesia Laupheim. Has no current academic affiliation, and is not a working scientist.
[edit] I
- Rodney Ice, Principle Research Scientist, Nuclear & Radiological Engineering, Georgia Institute of Technology. Retired (though used to be a real scientist in an unrelated field).
- Bridget Ingham, Ph.D. Physics, Victoria University of Wellington. Currently Technical Director of the New Zealand Synchrotron Support Programme, and appears to be involved in real, though unrelated, (industrial) research. Has claimed that “[f]aith in the Judeo-Christian God and the Bible stand up to true scientific scrutiny,” and that “[o]f all the many theories regarding the origin of the universe and the origin of life, none can be absolutely proven because there were no observers and the ‘experiment’ cannot be repeated.” (The latter is the sole piece of alleged evidence for the former.)
- Muzaffar Iqbal, Ph.D. Chemistry, University of Saskatchewan. Also an Islamic scholar and founding president of the Center for Islam and Science, Alberta. He is a prolific author on topics such as “Islamic perspectives on science and civilization”, claiming that Western accounts of science from Francis Bacon onwards have been “disrespectful” of Islamic science, though by signing the list at hand he has pretty much proved his inability to distinguish science from theology (in fact, Iqbal appears to place revelation at the center of science). He is also a Fellow of Dembski’s International Society for Complexity, Information, and Design and as such one of relatively few points of connection between the American ID movement and Islamic creationism.[123]
- Hiroshi Ishii, M.D., Ph.D. Behavioral Neurology, Tohoku U. Currently VP at Takasago. May have some publications in unrelated fields.
- J. Ishizaki, Associate Professor of Neuropsychology (M.D., Ph.D. Medicine), Kobe Gakuin University. Has a few unrelated publications in psychology (ageing and therapy).
- David Ives, Emeritus Professor of Biochemistry, Ohio State University. Has done some research in unrelated fields. Also signatory to the Discovery Institute sponsored 2002 Ohio Academic Freedom list submitted to the Ohio State Board of education.
- Peter C. Iwen, Professor of Pathology and Microbiology, University of Nebraska Medical Center. Appears to be a real scientist in a perhaps not entirely unrelated field (though none of his research appears to challenge evolution).
[edit] J
- Dave Jansson, Ph.D. Engineering, Massachusetts Institute of Technology. Currently scientific advisor at the law firm Jansson, Shupe & Munger Ltd. Did some (unrelated) research earlier in his career, but does not appear to be a working scientist.
- David Jansson, Sc.D. Instrumentation and Automatic Control, MIT. Same as Dave Jansson; duplicate listing in the petition.
- Amiel G. Jarstfer, Professor & Chair, Department of Biology, LeTourneau U. Currently Dean, Paul V. Hamilton School of Mathematics and Sciences, Lincoln Memorial University. Testified in favor of Teach the controversy before the Texas Board of Education during their evolution hearings.[124] Also teaches high school Bible classes. Has some publications, but they do not seem to be related to evolution.
- Gintautas Jazbutis, Ph.D. Mechanical Engineering, Georgia Institute of Technology. No research, affiliation or information found.
- Tony Jelsma, Ph.D. Biochemistry, McMaster University. Currently Professor of Biology at Dordt College, a Reformed Christian College that teaches creationism rather than evolution (the program is of course non-accredited). Hardcore creationist, who appears to suffer from a severe case of confirmation bias: “as I pursued the biological sciences I was aware that my views would be challenged, but I knew that evolution was wrong, God’s Word is true and I had confidence that any new findings I had would simply confirm my view.”[125]
- Matthew A. Jenks, Professor of Horticultural Science, Purdue University (the university website lists him as “Adjunct Professor”). Seems to be involved in real research, but primarily on breeding for fruit quality; it is not clear that any of the research touches on evolution.
- †David William Jensen, Professor of Biology, Tomball College.
- Lyle H. Jensen, Emeritus professor, Dept. of Biological Structure & Dept. of Biochemistry, Washington State University. Appears to have a real research record in biochemistry, and has as such been of deep interest to the Discovery Institute (Evolution News and Views has run a multi-part series of interviews with him). Involved in the Teach the Controversy campaigns in Ohio in 2006: “I strongly urge you to retain the Critical Analysis of Evolution Lesson Plan so that Ohio students are objectively informed concerning the facts of biology and trained to be better scientists.”
- Ferenc Jeszenszky, Former Head of the Center of Research Groups, Hungarian Academy of Sciences. Educated as a physicist and central in the Hungarian “Creation Research” movement. Has contributed to several creationist “documentaries”, such as “Nature’s IQ: Smart Animals Challenge Darwin”,[126] together with luminaries such as Michael Cremo. His brand of creationism is apparently summed up in the screed “Evolution, Intelligent Design, and Creationism”,[127] which emphasizes the scientific basis for creationism while consisting mainly of Bible verses and equating “evolutionists” with atheists. No research record found; does not appear to be a scientist.
- Bradley R. Johnson, Ph.D. Materials Science, University of Illinois at Urbana-Champaign. Currently Technical Group Manager at Pacific Northwest National Laboratory. May have some research in unrelated fields.
- †Charles D. Johnson, Ph.D. Chemistry, University of Minnesota.
- David Johnson, Assoc. Prof. of Pharmacology & Toxicology, Duquesne U. Researcher in an unrelated field (though has a biology BA).
- Donald E. Johnson, Ph.D. Computer & Information Sciences, University of Minnesota; also Ph.D. Chemistry, Michigan State University. Runs the creationist website scienceintegrity.org, but does not work as a scientist. Author of “Probability's Nature and Nature's Probability” (which also exists in a version for non-specialists for outreach purposes) and “Programming for Life”, which purports to study the intersection of physical science and information science with creationist conclusions, complete with persecution complex concerning the oppressive Darwinian paradigm in research institutions.
- Fred Johnson, Ph.D. Pathology, Vanderbilt University. Senior Medical Writer at PPD Inc. Has some publications on pharmaceutics.
- Glenn R. Johnson, Adjunct Prof. of Medicine, U. of North Dakota School of Medicine. Orthopedic surgeon. No research record found.
- Jeff W. Johnson, Ph.D., Industrial, Organizational, & Cognitive Psychology, University of Minnesota; presently research scientist at Personnel Decisions Research, and contributes to this not even remotely related field.
- Jerry Johnson, Ph.D. Pharmacology & Toxicology, Purdue University. No further information found.
- Richard Johnson, Professor of Chemistry, LeTourneau University. Appears to have done real research in an unrelated field.
- Thomas H. Johnson, Ph.D. Mathematics, University of Maryland. No reliable information located.
- †Lawrence Johnston, Emeritus Professor of Physics, University of Idaho. Real scientist. Most famous for inventing the detonators for the first atomic bombs and for a long time being the last living physicist involved in the development of The Bomb.
- Erkki Jokisalo, Ph.D. Social Pharmacy, University of Kuopio. Has a few publications in an unrelated field.
- Arthur John Jones, Ph.D. Zoology & Comparative Physiology, Birmingham University. Does not appear to have ever worked as a scientist. Also signatory to the CMI list of scientists alive today who accept the biblical account of creation and member of the “scientific panel” of the British creationist organization Truth in Science. He presently works for the Christian Schools’ Trust as their research consultant for curriculum development, and has published several papers in the house journal of the Creation Research Society.
- David Jones, Professor of Biochemistry & Chair of Chemistry, Grove City College, and institution where – according to their website – “[professors] don't cloister themselves to work on research while leaving classroom instruction to assistants;”[128] so it is perhaps no surprise that Google Scholar returns no research for Jones. Furthermore, at the Department of Chemistry “we attempt to instill in our students an awareness of the beauty and design in nature that reflects the creative hand of God.”[129] Also associated with The Center for Vision and Values, a conservative think tank, where he has used his position to promote teaching creationism in public schools.[130]
- Jeffrey M. Jones, Professor Emeritus in Medicine, University of Wisconsin, Madison. Seems to be a respectable researcher, but his field of expertise is unrelated to evolution per se.
- Kerry N. Jones, Professor of Mathematical Sciences, Ball State University. Emeritus. Has some low-tier, unrelated publications.
- Robert Jones, Associate Professor of Mech. Engineering, U. of Texas-Pan America. Appears to have a bit of research, unrelated fields.
- Robert L. Jones, Associate Professor, Department of Ophthalmology, University of California, Irvine. MD; not on the 2012 Irvine faculty list. Hard to locate any further information or research.
- Matti Junnila, DVM, Ph.D. Veterinary Pathology, University of Helsinki. No current affiliation found, and no research since his degree.
[edit] K
- Robert Kaita, Ph.D. Nuclear Physics, Rutgers University. Fellow of the Discovery Institute’s Center for Science and Culture and contributor to the Dembski-edited anthology “Mere Creation: Science, Faith & Intelligent Design”. Currently Principal Research Physicist in the Plasma Physics Laboratory at Princeton University, and has a respectable publication record in an unrelated field.
- Robert O. Kalbach, Ph.D. Physical Chemistry, University of South Florida. No current affiliation or research found.
- Ingolf Kanestrøm, Professor Emeritus, Department of Geoscience, University of Oslo. Google scholar reveals no peer-reviewed research in serious journals, but he is perhaps the most central defender of Intelligent Design creationism in Norway (where the idea has admittedly had little impact), and denies that intelligent design is a religious doctrine (though he is of course religious himself), citing Antony Flew, Michael Ruse and ”Randley [sic] Monton” as evidence.[131] So there.
- Donald A. Kangas, Professor of Biology, Truman State U. Emeritus. Google scholar returns two 1970 papers. Apparently not a scientist.
- Edwin Karlow, Chair, Department of Physics, LaSierra University (a small, fundamentalist, Seventh-day Adventist institution). Has written a bit on teaching, as well as several theological inquiries (e.g. for Spectrum Magazine), but no scientific research record found. Appears to be a teacher and theologian, not a scientist.
- Olaf Karthaus, Associate Professor, Chemistry, Chitose Institute of Science & Technology in Japan. A respectable scientist in his field, which is polymer chemistry and nano-technology, not biology. Karthaus is a vociferous critic of evolution who has claimed that “the theory of evolution is not scientific” since it is allegedly not falsifiable.
- Gary Kastello, Ph.D. Biology, University of Wisonsin-Milwaukee. Currently professor, Dept. of Health, Exercise, and Rehabilitative Sciences, Winona State University. Involved in a few publications on physical therapy.
- Shane A. Kasten, Post-Doctoral Fellow, Virginia Commonwealth U. Appears to have some research that seems unrelated to the issue.
- Michael J. Kavaya, Senior Scientist, NASA Langley Research Center. Seems to be a respectable researcher in an unrelated field; also pushes anti-evolution to kids in churches and Sunday schools (he does no research in those fields; it is all a matter outreach for Jesus).
- Michael N. Keas, Professor of History and Philosophy of Science, The College at Southwestern (Southwestern Baptist Theological Seminary). Senior Fellow at the Discovery Institute. Not a scientist, though he nevertheless “leads workshops for science teachers on how to teach about controversial subjects such as Darwinism.” Taught an Intelligent Design course, “Unified Studies: Introduction to Biology”, at the Oklahoma Baptist University, one of few such courses that have been taught for credit at an accredited institution.
- James Keener, Professor of Mathematics & Adjunct of Bioengineering, University of Utah. Has done research, though apparently not anything related to evolution (though he has written and talked about evolution and creationism in other venues, such as the American Scientific Affiliation). Also on the editorial team of Bio-Complexity.
- James Keesling, Prof. of Mathematics, U. of Florida (past president, Christian Faculty Fellowship). Does research in an unrelated field.
- Clifton L. Kehr, Ph.D. Chemistry, University of Delaware. Has some unrelated publications from the early 1960s.
- Douglas Keil, Ph.D. Plasma Physics, U. of Wisconsin, Madison. Senior Technologist, Lam Research. Has some (unrelated) research.
- Micheal Kelleher, Ph.D. Biophysical Chemistry, U. of Ibadan. No information/affiliation found; coauthor of a few papers on nutrition.
- David Keller, Associate Professor of Chemistry, U. of New Mexico. Appears to have research publications in an unrelated field. Also contributed an article (w. Jed Macosko) to the anthology “Darwin’s Nemesis: Phillip Johnson and the Intelligent Design Movement”.[132] Also on the editorial team of Bio-Complexity.
- Rebecca Keller, Research Professor, Department of Chemistry, University of New Mexico. Currently a home-schooling mom with no academic or research affiliation. She is the author and publisher of the Real Science-4-Kids student texts, teacher manuals, and student laboratory workbooks in chemistry, biology and physics to serve kindergarten through ninth grade, targeted at a homeschooling audience and apparently rather widely used. The series was developed precisely to incorporate intelligent design concepts into a science curriculum, and Keller is as such yet another Intelligent Design advocate (though she tends to present herself as a “Teach the controversy” advocate[133]) who views her mission primarily as doing outreach to children, not research. There’s a pattern here.
- Robert W. Kelley, Ph.D. Entomology, Clemson University. Senior Environmental Scentist for ETT Environmental Inc. Appears to have some publications from the 1980s.
- David C. Kem, Professor of Medicine, University of Oklahoma College of Medicine. Has a respectable (unrelated) research record.
- Kevin L. Kendig, Ph.D. Materials Science & Engineering, U. of Michigan. Program Manager, US Air Force. Apparently not a scientist.
- Laraba P. Kendig, Ph.D. Materials Science & Engineering, University of Michigan. Currently Scientist at UES, Inc. Apparently an opponent of contraception for religious reasons, and associated with the Quiverfull movement.[134] Does not appear to have a research record.
- Michael Kent, Ph.D. Materials Science, University of Minnesota. Principal Member, Technical Staff, Bioenergy and Defense Technologies Department, Sandia Laboratories. Has done research in not obviously related fields. Affiliated with the Intelligent Design Network.
- Dean Kenyon, Emeritus Professor of Biology, San Francisco State University. One of the crucial characters in the development of modern creationism. Tried to teach creationism at San Francisco State in the 1980s, but the university refused. Later expert witness for the defense in all the important creationist court cases. Has not done scientific research since the early 1970s, and his recent material is purely creationist apologetics, mostly intended for a more general audience or as educational material. Kenyon is coauthor of the infamous Intelligent design textbook Of Pandas and People, and on the CMI list of scientists alive today who accept the biblical account of creation.
- Karl Heinz Kienitz, Professor, Department of Systems & Control Instituto Technologico de Aeronautica (Brazil). Appears to be a respectable scientist in an unrelated field.
- Sun Uk Kim, Ph.D. Biochemical Engineering, University of Delaware. Appears to have been involved in real research.
- Richard Kinch, Ph.D. Computer Science, Cornell University. Owner of truetex.com. No academic affiliation or research found.
- Bretta King, Assistant Professor of Chemistry, Spelman College. No research since her student years located. Not presently on the Spelman College faculty list either.
- R. Barry King, Prof. of Environmental Safety & Health, Albuquerque Technical Vocational Institute. No recent research found.
- Michael Kinnaird, Ph.D. Organic Chemistry, University of North Carolina, Chapel Hill. No affiliation or current research found. Has written about the evolution “controversy” for various non-academic outlets.
- Scott S. Kinnes, Professor of Biology, Azusa Pacific University. No research record found; publication lists on his webpages feature internal documents written for his employer as well as publications in undergraduate journals(!) (presumably coauthored with his students). Staunch fundamentalist and creationist, as shown by the bibliography he assembled for Science and Faith Integration.[135] Cannot reasonably be counted as a scientist.
- Stephen C. Knowles, Ph.D. Marine Science, University of North Carolina, Chapel Hill. Currently affiliated with the US Army Corps of Engineers; does research in unrelated fields.
- Donald Kobe, Professor of Physics, University of North Texas, Denton. Has written on the alleged role of the church in the evolution of science and argued in favor of the Teach the Controversy language promoted by certain elements of the Texas Board of Education.[136] Has done real research as well.
- Charles Koons, Ph.D. Organic Chem., U. of Minnesota. May have some unrelated publications from the 60s. No newer information found.
- Robert W. Kopitzke, Professor of Chemistry, Winona State University. Has done some research in unrelated fields.
- Carl Koval, Full Professor, Chemistry & Biochemistry, University of Colorado, Boulder. Does research in unrelated fields, but also speaks about the evolution/creation “controversy” in churches and religious institutions, where he claims that the conflict is “unresolved”.[137] Koval himself is a staunch supporter of Intelligent Design.
- Christa R. Koval, Ph.D. Chemistry, University of Colorado at Boulder. Currently Associate Professor of Chemistry at Colorado Christian University, an Evangelical school that requires faculty and students to affirm, among other things, that “[w]e believe the Bible to be the inspired, the only infallible, authoritative Word of God.” No research located.
- John K. G. Kramer, Adjunct Professor, Dept. of Human Biology & Nutrition Sciences, University of Guelph. Currently at Agri-Food Canada and associated with Answers in Genesis. Seems to have some research background in unrelated fields. Claims that beneficial mutations are impossible and points out that since “[a]rcheologists have no problem identifying man-made objects. Why then do we have problems identifying a Creator-made world?”. Also signatory to the CMI list of scientists alive today who accept the biblical account of creation.
- Martin Krause, Research Scientist (Astronomy), University of Cambridge. Real scientist who does research in an unrelated field (his hobby appears to be to find the Star of Bethlehem).
- Mark Krejchi, Ph.D. Polymer Science & Engineering, University of Massachusetts. Currently Research Fellow, Sustainability and Product Innovation at Wilsonart International. Does research in an unrelated field.
- Bruce Krogh, Professor of Electrical & Computer Engineering, Carnegie Mellon University. Respectable scientist in an unrelated field.
- Daniel Kuebler, Ph.D. Molecular & Cellular Biology, U. of California, Berkeley. Assistant professor of biology, Franciscan University of Steubenville (a fundamentalist institution). Has written several articles for the National Catholic Register, as well as “The Evolution Controversy: A Survey of Competing Theories” (endorsed by Michael Behe). Does have some real research to his name as well.
- Paul Kuld, Associate Professor of Biological Science, Biola University. Retired. No research found.
- Joseph A. Kunicki, Associate Professor of Mathematics, University of Findlay. No research located.
- Orhan Kural, Professor of Geology, Technical University of Istanbul. Has a few publications in unrelated fields, mostly in Turkish.
- Heather Kuruvilla, Ph.D. Biological Sciences, State University of New York, Buffalo. Currently professor of biology at the fundamentalist creationist institution Cedarville University, who contributed to that institution's publication on Darwin (which endorses unconditional young earth creationism).[138] She nevertheless appears to have some real publications, though they are unrelated to evolution. Also signatory to the CMI list of scientists alive today who accept the biblical account of creation.
[edit] L
- Martin LaBar, Ph. D. Genetics & Zoology, University of Wisconsin, Madison. Later professor and Chairman of the Division of Natural Science and Mathematics at Southern Wesleyan University; now retired. No scientific research publications found. Explains his views on evolution in his blog,[139] where he shows that his dissent is, suffice to say, not even remotely scientific.
- Jeffrey E. Lander, Ph.D. Biomechanics, University of Oregon. Associate Professor, Sports Health Science, a “university” specializing in chiropracty. Has some publications in unrelated fields.
- Brian Landrum, Associate Professor of Mechanical & Aerospace Engineering, University of Alabama, Huntsville. May have some research in unrelated fields. Also a Christian apologist who is fond of talking about worldviews; claims that all worldviews are religious but the Biblical one is the best.[140]
- Ivan M. Lang, Ph.D. Physiology and Biophysics, Temple University. Professor of Gastroenterology (and DVM) at the Medical College of Wisconsin. Appears to be a respectable scientist, though his research does not seem to touch on evolution.
- Joel Lantz, Ph.D. Chemistry, University of Rhode Island. Currently R&D Engineer at Cleveland Electric Laboratories. Google Scholar returns a single 1976 paper.
- Teresa Larranaga, Ph.D. Pharmacology, U. of New Mexico. Currently Administrator at Presbyterian Healthcare Services. Not a scientist.
- JoAnne Larsen, Assistant Professor of Industrial Engineering, University of South Florida, Lakeland. Has said that she had “been exposed to” Intelligent Design and found it interesting but signed the petition primarily because scientists have refused to entertain the possibility that the theory of evolution is flawed, and that “[u]nfortunately, in major universities, academic freedom doesn't exist.”[141] No updated information or affiliation, and no research, located.
- Ronald Larson, Professor, Chair of Chemical Engineering, U. of Michigan. A respectable researcher in an unrelated field. Signed the petition not because he has any problems with evolution, but because he doesn’t think Darwinism explains abiogenesis, which it does, of course, not purport to do to begin with.[142]
- Joseph Lary, Epidemiologist and Research Biologist (retired), Ctrs. for Disease Control. Seems to have some research in unrelated fields.
- Mark J. Lattery, Associate Professor of Physics, U. of Wisconsin-Oshkosh. Has some unrelated publications on physics teaching.
- Robert Lattimer, Ph.D. Chemistry, University of Kansas, Lawrence. At the BFGoodrich Research & Development Cente, and has done some completely unrelated research. Has said about his support of ID that “[a]mong scientists, we're a distinct minority. Among the public, I'd say I'm easily in the majority,” which was apparently intended as an argument for the scientific legitimacy of Intelligent Design.[143] Has a long history of trying to push creationism in public schools, partially through his creationist organization Citizens for Excellence in Education. Appointed to the Ohio science standard writing committee in 2002, where he suggested teaching creationism (ID) in Ohio schools and became more or less responsible for the subsequent debate.[144][145] Lattimer used to be a standard Biblical creationist, but switched to Intelligent Design for political reasons, which he has for a decade been pushing to various congregations, schoolboards, and others willing to listen.[146] Of course, it is all about marketing – Lattimer hasn’t even considered trying to support Intelligent Design by research or science.
- M. Harold Laughlin, Professor & Chair, Department of Biomedical Sciences, University of Missouri. Has done some research on the cardiovascular effects of exercise (which is not obviously related to the question at hand).
- David J. Lawrence, Ph.D. Physics, Washington U., St. Louis. At Los Alamos National Laboratory. Serious researcher, unrelated field.
- Jeffery R. Layne, Ph.D. Electrical Engineering, Ohio State University. Currently (it seems) at the Air Force Research Laboratory, Ohio. Has some publications in unrelated fields.
- George Lebo, Associate Professor of Astronomy, University of Florida. Retired. Has done a little research in an unrelated field.
- J.B. Lee, Assistant Professor of Electrical Engineering, University of Texas, Dallas. Appears to do real research in unrelated fields.
- Raul Leguizamon, Professor of Medicine, Autonomous University of Guadalajara. Hardcore creationist: “I am absolutely convinced of the lack of true scientific evidence in favour of Darwinian dogma. Nobody in the biological sciences, medicine included, needs Darwinism at all [note that Leguizamon is not a biologist]. Darwinism is certainly needed, however, in order to pose as a philosopher, since it is primarily a worldview.” Hardly a scientist; neither Google Scholar nor PubMed returned any research, and his books and articles criticizing evolution are not published in serious, peer-reviewed venues.
- Matti Leisola, Professor, Laboratory of Bioprocess Engineering, Helsinki University of Technology. Emeritus. Has a real research record in a not entirely irrelevant field, and is as such one of few signatories with real credentials. Hardcore creationist. Affiliated with the Biologic Institute,[147] Editor in Chief of the Intelligent Design pseudojournal BIO-Complexity,[148] and contributor to Evolution News and Views.
- Magda Narciso Leite, Professor, College of Pharmacy & Biochemistry, Universidade Federal de Juiz de Fora. Has some research publications, mostly in Portuguese, and mostly in local journals.
- Bruno Lemaire, Professor, Decision Science & Information Systems, HEC Paris. Emeritus. Worked on business management.
- E. Lennard, Sc. D. Surgical Infections & Immunology, University of Cincinnati. No research or academic affiliation located.
- Ricardo Leon, Dean of School of Medicine, Autonomous University of Guadalajara. Holds a professional, not research doctorate.
- Lane Lester, Ph.D. Genetics, Purdue University. Professor of Biology at Emmanuel College, Georgia, a small, extremist, Pentecostal college offering non-accredited education. Calls himself “creationist geneticist”. No real research found; instead Lester writes textbooks and articles for non-specialists in various creationist magazines, the purpose of which is outreach rather than research. Also signatory to the CMI list of scientists alive today who accept the biblical account of creation.
- Catherine Lewis, Ph.D. Geophysics, Colorado School of Mines. At Exxon Production Research; may have research in unrelated fields.
- Roger Lien, Ph.D. Physiology, North Carolina State University. Currently Associate Professor at Auburn University. Has some research in unrelated fields. Says that he earlier accepted evolution, but realized that “the world is broken, and we humans and our science can't fix it. I was brought to Jesus Christ and God and creationism and believing in the Bible,” adding that he thought that evolution was “inconsistent with what the Bible says,” which does not count as scientific dissent.
- Walter E. Lillo, Ph.D. Electrical Engineering, Purdue University. May be affiliated with the Aerospace Corporation. Appears to do some research, but no serious publication found since the early 90s. Also dabbles in philosophy, trying to argue from Pyrrhonian skepticism that atheism cannot be a foundation for science since everything would be uncertain and the problem of induction, rather unwilling to see that Pyrrhonian skepticism would apply to religion as well,[149] thereby making him just another Christian presuppositionalist.
- Hsin-Yi Lin, Assistant Professor, Dept. of Chemical Engineering & Biotechnology, National Taipei University of Technology. Has some low-tier publications in unrelated fields.
- Shieu-Hong Lin, Assistant Professor of Computer Science, Biola University. Appears to do research in unrelated fields.
- Peter Line, Ph.D. Neuroscience, Swinburne University of Technology. Apparently associated with Creation Ministries International, and has written several articles (e.g. for the Journal of Creation) vehemently denying that “apemen” belong to the same baramin as humans. Does not appear to have any current academic affiliation, and is not a scientist.
- Derek Linkens, Senior Research Fellow and Emeritus Professor (Biomedical Eng.), University of Sheffield. Used to be a real scientist. Currently on the Council of Reference for the British creationist organization Truth in Science.
- Wayne Linn, Professor Emeritus of Biology, Southern Oregon University. An authority on freshwater fisheries; published one or two low-tier papers in the 1960s but apparently nothing since.
- Alan Linton, Emeritus Professor of Bacteriology, University of Bristol. VP of the Prophetic Witness Movement International, author of “Israel in History and Prophecy”, and central in the British creationist movement. Apparently convinced Prince Charles, who said: “As Professor Alan Linton of Bristol University has written, ‘evolution is a man-made theory to explain the origin and continuance of life on this planet without reference to a creator’,” which is a complete misrepresentation by a known promoter of pseudoscience.[150] No original research less than 30 years old found.
- Theodor Liss, Ph.D. Chemistry, MIT. May have some publications from the 1960s. No updated information found.
- Garrick Little, Ph.D. Organic Chemistry, Texas A & M University. At LI-COR Biosciences; retired. Has a few publications in an unrelated field, some of which seem legitimate. Biblical literalist[151] who has defended Intelligent Design in various places, claiming that the fact that there is still public discussion about evolution shows that Intelligent Design must be a solid scientific theory.
- Stephen Lloyd, Ph.D. Materials Science, University of Cambridge. Does not hold an academic position and seems not to have been involved in research. Currently pastor of Hope Baptist Church in Gravesend, Kent, and works part-time as a speaker and writer for Biblical Creation Ministries. Has written articles for Origins, the journal of the Biblical Creation Society (“‘God of the Gaps’: A Valid Objection?”) and the Evangelical Alliance (“Creation and Evolution – ‘Designed to be significant’”).
- Christian M. Loch, Ph.D. Biochemistry and Molecular Genetics, University of Virginia. Senior Scientist at LifeSensors Inc. Has some research in not obviously unrelated fields.
- Justin Long, Ph.D. Chemical Engineering, Iowa State University. No further information found.
- C. Roger Longbotham, Ph.D. Statistics, Florida State University. Currently Professor of Business Statistics at Tianjin University. Has written about business management but has also done some research, as well as arranging adult Sunday School meetings on Intelligent Design and evolution for the Living Hope Bible Church.[152]
- †Leonard Loose, Ph.D. Botany, University of Leeds. “Longest living” member of the Creation Science Movement,[153] who signed the list at age 96 and apparently joined the Evolution Protest Movement “in either late 1933 or in 1934”. Used to claim that the “Darwinian presentation of evolution has become the arch enemy of the Word of God and is the work of an anti-Christ;” also Hitler and pretty much every other creationist PRATT in the book.
- Raúl Erlando López, Ph.D. Atmospheric Science, Colorado State University. May have done research in the 1970s, but currently seems to publish only in the Journal of Creation Research, Answers Magazine and similar young earth creationist venues. Also signatory to the CMI list of scientists alive today who accept the biblical account of creation.
- Paul Lorenzini, Ph.D. Nuclear Engineering, Oregon State U. CEO of NuScale. No research found; does not seem to be a scientist.
- Charles B. Lowrey, Ph.D. Chemistry, University of Houston. No research or current affiliation found.
- Harry Lubansky, Ph.D. Biological Chemistry, University of Illinois, Chicago. Systems engineer at Complete Computing Inc. Has some publications in an unrelated field; nothing from the last 20 years found.
- Ken Ludema, Emeritus Professor of Mechanical Engineering, University of Michigan. Has some research in an unrelated field (tribology).
- C. Thomas Luiskutty, Ph.D. Physics, Univ. of Louisville. Has been professor and chair of the Engineering & Physics Department at Oral Roberts University, but seems currently to be Principal of the New India Bible Seminary. Publishes on theology. No recent scientific research found; not a scientist.
[edit] M
- Fred B. Maas, Ph.D. Agronomy, Purdue University. Owner, Wheat Dynamics Inc. No research found. Does not seem to be a scientist.
- Malcolm W. MacArthur, Ph.D. Molecular Biophysics, U. of London. Appears to have a research record, but little information found.
- Christopher Macosko, Ph.D. Chemical Engineering, Princeton University. Professor at University of Minnesota and recipient of Templeton funding, apparently to study intelligent design (he has done real research as well, but in a completely unrelated field). Macosko apparently became a born-again Christian as an assistant professor after a falling-out with a business partner, and for many years he would teach the freshman seminar “Life: By Chance or By Design?” According to Macosko “[a]ll the students who finish my course say, ‘Gee, I didn’t realize how shaky evolution is.’ ”[154]
- Jed Macosko, Ph.D. Chemistry, University of California, Berkeley. A Fellow of William Dembski’s International Society for Complexity, Information, and Design and Fellow of the Discovery Institute’s Center for Science and Culture between 2001 and 2003. He is currently assistant professor (of biophysics) at Wake Forest University, and unlike most ID proponents he appears to publish peer-reviewed scientific research – none of it seems to touch upon Intelligent Design, however. He also contributes non-peer-reviewed material that does, allegedly, support Intelligent design, and is perhaps best known for co-editing, with Dembski, “A Man For This Season: The Phillip Johnson Celebration Volume”.[155] Also on the editorial team of Bio-Complexity.
- Gildo Magalhães, Professor of the History of Science & Technology, University of São Paulo. Professor of the History of Science & Technology, University of São Paulo. Has some history publications in Portuguese. Hardly a scientist.
- Allen Magnuson, Ph. D. Theoretical & Applied Mechanics, University of New Hampshire. Hardcore Young earth creationist affiliated with the worldwideflood project.[156] May nevertheless have published some test results (naval engineering) and a research paper or two in completely unrelated fields. No academic affiliation found.
- Donald Mahan, Professor of Animal Nutrition, Ohio State University. Has a few research publications, but none that are clearly relevant.
- Thomas C. Majerus, PharmD; FCCP, University of Minnesota; a professional rather than research doctorate.
- Gary Maki, Director, Ctr. for Advanced Microelectronics and Biomolecular Research, University of Idaho. Has some real publications on unrelated issues, though he lost his position as director in 2007 after an audit of the center found Maki and other employees improperly using university resources to benefit companies in which they had an interest.[157] Maki responded to the charges by attempting to bully the investigators.[158] Despite the facts of the matter the University allowed him to retain his position as professor, though he has later retired.
- Wusi Maki, Research Asst. Professor, Dept. of Microbiology, Mol. Biology, & Biochem., University of Idaho. Signatory to the Discovery Institute initiated Amicus Brief supporting Intelligent Design in the Kitzmiller v. Dover case.[159] Has nevertheless contributed to research, though none of it seems to touch on evolution.
- Richard Mann, Ph.D. Physical Chemistry, Princeton U. Affiliated with Berean Watch Ministries; retired. No information or research located.
- L. Whit Marks, Emeritus Professor of Physics, University of Central Oklahoma. On the board of directors of Oklahomans for Better Science Education, a religious, creationist organization, and the Oklahoma counterpart to Texans for Better Science Education. No research found.
- Robert Marks,[160] Distinguished Professor of Electrical and Computer Engineering at Baylor University. Marks has peer-reviewed publications, generally not related to intelligent design (though he sometimes seems to think it is), and ought to know his math.[161] His and Dembski’s The Search for a Search - Measuring the Information Cost of Higher Level Search and Conservation of Information in Search - Measuring the Cost of Success merit their own articles (particularly notable is their obsession with Methinks it is like a weasel[162]). Suffice to say that they do not quite show what Marks and Dembski think they show.[163][164] Marks’s creationist view is highlighted in “Genesis and Science: Compatibility Extraordinaire.”[165]
- Joseph M. Marra, Director, Interventional Radiology, & Adjunct Professor of Medicine, Niagara Falls Memorial Medical Center. Radiologist; not involved in science (Google scholar returns a single paper from 1999).
- Glenn A. Marsch, Associate Professor of Physics, Grove City College. Has done some research, as well as work on how modern physics purportedly points to Jesus. Claims that liberals are as anti-science as young-earth creationists because of their adherence to the pagan religion of environmentalism and that scientists with a Christian worldview are persecuted.[166]
- Graham Marshall, Ph.D. Analytical Chemistry, U. of Pretoria. President of Global FIA. Has a few publications in an unrelated field.
- John B. Marshall, Professor of Medicine, U. of Missouri School of Medicine. Gastroenterologist. Has some papers in an unrelated field.
- Julie Marshall, Ph.D. Chemistry, Texas Tech U. Currently Associate Prof. at Lubbock Christian University. No research record found.
- Thomas H. Marshall, Adjunct Professor, Food Agricultural and Biological Engineering, Ohio State University. No research located.
- Heikki Martikka, Professor of Machine Design, Lappeenranta University of Technology. Has done some research in unrelated fields.
- L. Kirt Martin, Professor of Biology, Lubbock Christian University. Has a single (unrelated) 2003 paper in the journal Cellulose. Apparently not a scientist.
- Alvin Masarira, Senior Lecturer for Structural Engineering and Mechanics, University of Cape Town. Seems to be minimally involved in research (google scholar returns a single publication). Rabid religious fanatic whose work is primarily concerned with church- and mission-related matters, e.g. how the church can contribute to fighting AIDS in Africa by pushing male circumcision. Masarira is a Seventh-day Adventist Church elder associated with the Institute of World Missions, and has for instance argued forcefully and repeatedly against ordaining women pastors.
- Perry Mason, Professor of Mathematics and Physical Science, Lubbock Christian University. No research found.
- Bert Massie, Ph.D. Physics, University of California, Los Angeles. No updated affiliation or research found.
- Steve Maxwell, Associate Professor of Molecular and Cellular Medicine, Texas A&M Health Science Center. Does real research on the genetics and biology of cancer cells.
- David McClellan, Assistant Professor of Family & Community Medicine, Texas A&M University College of Medicine. Pubmed returns some publications in an entirely unrelated field.
- Jacquelyn W. McClelland, Professor (Ph.D. Nutritional Biochemistry), North Carolina State University, NCCE. Seems to be in most ways a standard academic (unrelated field), but has written a glowing review of nutjob Lisa Shiel’s book “The Evolution Conspiracy”.[167], in which she admitted that she didn’t double-check Shiel’s claims.
- Charles H. McGowen, Assistant Professor of Medicine, Northeastern Ohio Universities College of Medicine. Author of “In Six Days” (1976), a “treatise on the creation/evolution controversy”, and “In Six Days: A Case For Intelligent Design” (2002), another “great teaching tool”, i.e. a book to convince audiences, in particular children – McGowen has of course done no research on evolution or design, but then again the Intelligent Design movement is not about science or research, but about public relations. Rejects theistic evolution in part because it “requires a refutation of the absolute, inspired, inerrant truth of God’s Word,” which is not a scientific dissent. Also Contributing Editor for Reformation & Revival Journal. Does not appear to be a scientist at all (no scientific research found).
- Andy McIntosh, Full Professor of Thermodynamics and Combustion Theory, University of Leeds. On the Board of Directors of the British creationist organization Truth in Science. Evangelical Christian and creationist, author of “The Delusion of Evolution”, and (at least sometime) member of the council of reference of Biblical Creation Ministries; also author of "Genesis for Today", which promotes a literal interpretation of the Book of Genesis. Unsurprisingly, he is also a signatory to the CMI list of scientists alive today who accept the biblical account of creation. Claims that his research suggests that the Bombardier beetle is unlikely to have been brought about through natural selection.[168] Has published on the bombardier beetle in various pseudojournals and some (apparently) real engineering journals (on what engineering can learn from these beetles) – not in biology journals since the biology of bombardier beetles is in fact well understood,[169] and no problem for the theory of evolution.[170]
- Tom McMullen, Ph.D. History & Philosophy of Science, Indiana University. Currently at Georgia Southern University. Hardcore creationist who appears to fancy himself a “real scientist”,[171] apparently claiming that most of the scientific research favoring evolution is “fraud”. Says that evolution is a matter of belief and that the "religion of humanism" is pushing for evolution without scientific support.[172] Does not appear to be involved in research in any field, and is definitely not a scientist.
- William McVaugh, Associate Professor of Biology, Department of Natural Sciences, Malone College (a small, fundamentalist evangelical[173] liberal arts college). No research found.
- †David B. Medved, Ph.D. Physics, University of Pennsylvania. Seems to have a respectable track record in technology development, but has also written books such as “Hidden Light: Science Secrets of the Bible”. Father of Michael Medved.
- Tony Mega, Ph.D. Biochemistry, Purdue University. Used to be tenured faculty of Whitworth College (a small Presbyterian liberal arts college), but was fired due to “lack of ‘collegiality’”.[174] Has some publications in an unrelated field, but none from the last 20 years. Also member of the fundamentalist “The Local Church”.
- James Menart, Associate Professor of Mechanical Engineering, Wright State University. Has some publications in an unrelated field.
- Ricardo Bravo Méndez, Professor of Zoology and Ichthyology, Universidad de Valparaíso. No research found.
- Angus Menuge,[175][176] Ph.D. Philosophy of Psychology, University of Wisconsin-Madison, professor at Concordia University,[177] and Fellow at the Discovery Institute. Testified in the Kansas evolution hearings, where he refused to answer the question of how old the earth is.[178] Has a few decent philosophy publications, and a lot of apologist material that can hardly be counted as scientific research under any definition. Not a scientist.
- J. C. Meredith, Assistant Professor, Chemical Engineering, Georgia Institute of Technology. Real researcher, unrelated field.
- Jussi Meriluoto, Professor, Dept. of Biochemistry & Pharmacy, Abo Akademi U. Does research in not obviously entirely unrelated fields.
- Stephen C. Meyer,[179] Ph.D. Philosophy of Science, Cambridge University. Not a scientist, though he worked as a geophysicist for the Atlantic Richfield Company early in his career. One of the main characters in the Intelligent Design movement; co-founder and vice-president of the Discovery Institute’s Center for Science and Culture and described as “the person who brought ID to DI”. Co-author of Explore Evolution, contributor to Of Pandas and People, author of Signature in the Cell,[180][181], which contained a dozen ID-inspired predictions that don’t quite conform to the general format of scientific predictions. Meyer is partially responsible for the Wedge document, and for the Teach the controversy strategy, and has been caught being dishonest on numerous occasions.[182]
- †Ruth C. Miles, Professor of Chemistry, Malone College (a small, fundamentalist evangelical[183] liberal arts college).
- John Millam, Ph.D. Computational Chemistry, Rice University. Software developer. Testified during the Kansas Evolution hearings, where he rejected common descent and affirmed his belief in design.[184] Nevertheless appears to be a member of the Midwest Skeptics Society.
- Aaron J. Miller, Ph.D. Physics, Stanford University. Currently Assistant Professor at Albion College. May have some publications, though most of them appear to be on arXiv.
- Brian Miller, Ph.D. Physics, Duke University. Currently Instructor for Campus Harvest, “a division of Every Nation Ministries, a member of the Evangelical Council for Financial Accountability,” and travels around giving lectures on science “from a faith perspective”,[185] such as “Empirical Evidence for the Resurrection of Jesus Christ”. Coauthored some letters and workshop presentations with his advisor during his student days. No real research record found; not a scientist.
- Laverne Miller, Clinical Associate Professor of Family Medicine, Medical College of Ohio. Family doctor; no research located.
- Gordon Mills, Emeritus Professor of Biochemistry, University of Texas, Medical Branch. Has done quite a bit of real research, as well as writing on religious matters, and is one of few signatories who may have something resembling relevant qualifications. Claims that macroevolution goes far beyond the evidence.
- Thomas Milner, Associate Professor of Biomedical Engineering, University of Texas, Austin. Real scientist, unrelated field (optical tomographic imaging modalities and laser surgical procedures).
- Forrest Mims, Atmospheric Researcher, Geronimo Creek Observatory. Has no formal academic training in science, but has nevertheless written quite a bit about science (most famous for his instructional electronics books). Mims teaches electronics and atmospheric science at the University of the Nations, an unaccredited Christian university in Hawaii. He is also a Fellow at the Discovery Institute and the International Society for Complexity, Information and Design, and a global warming denialist. Famous for his 1988 proposal to Scientific American to take over The Amateur Scientist column; he was offered to write some sample columns, but was not offered the position, a decision that according to Mims must have been made because of his Christian and creationist views.[186] He also received some flak for his claim that ecologist Eric Pianka advocates mass genocide by ebola.[187]
- Scott Minnich,[188] Professor, Dept of Microbiology, Molecular Biology & Biochemistry, University of Idaho, (Wikipedia says “associate professor of microbiology”). Fellow at the Discovery Institute's Center for Science and Culture. Hardcore promoter of Irreducible complexity and co-author of “Explore Evolution” with Stephen Meyer. A central witness for the Defense in Kitzmiller v. Dover. Caught listing conference presentations on Intelligent design as peer reviewed research, though he has some real published research as well and is one of few central ID proponents with legitimate credentials.
- Paul Missel, Ph.D. Physics, Mass. Institute of Technology. Currently at Alcon Research Ltd. Does research in an unrelated field.
- Timothy A. Mixon, Assistant Professor of Medicine, Texas A&M University. Has some publications in an unrelated field.
- Raymond C. Mjolsness, Ph.D Physics, Princeton University. Seems to have been a respected scientist back in the day, though his research stems primarily from the 60s and 70s. Currently signatory to the Oregon Petition as well; no affiliation found.
- Lennart Möller, Professor, Center for Nutrition & Toxicology, Karolinska Institute. Works on tracing health risks in the environment (and appears to have contributed to some publications on that topic). Also prominent member of the Swedish Evangelical Mission and author of (e.g.) “The Exodus Case. A scientific examination of the Exodus story - and a deep look into the Red Sea”.[189] Apparently a fan of none other than the late Ron Wyatt.[190]
- David Monson, Ph.D. Analytical Chemistry, Indiana University. Director of Client Services (USA), AgriFood Health and Life Sciences Global Business, Battelle. Appears to have some publications in unrelated fields.
- Eric Montgomery, Ph.D. Physics, Stellenbosch University. No information, research, or affiliation found.
- J.D. Moolenburgh, Ph.D. Epidemiology, University of Rotterdam. Rheumatologist. Does research in an unrelated field.
- Murray E. Moore, Ph.D. Mechanical Engineering, Texas A&M University. Technical staff member at Los Alamos National Laboratory. Has done a little research, but there isn’t much from later years.
- Daniel L. Moran, Ph.D. Molecular & Cellular Biology, Ohio U. Seems to have a few publications, but no updated information found.
- Christopher Morbey, Astronomer (Ret.), Herzberg Institute of Astrophysics. Has done research in an unrelated field. Google reveals several online posts on various websites signed with his name defending Intelligent Design and global warming denialism.
- Terry Morrison, Ph.D. Chemistry, Syracuse University. Wilberforce Fellow. Has been at the InterVarsity Christian Fellowship for the last forty years and contributed to various religious conferences, but has no current academic affiliation or publications, and seems not to be involved in science.
- K. Mosto Onuoha, Shell Professor of Geology & Deputy Vice-Chancellor, U. of Nigeria. Seems to do some research in an unrelated field.
- Donald R. Mull, Ph.D. Physiology, University of Pittsburgh. No current affiliation found, and no research apart from two 1970 papers.
- Thomas Mundie, Dean of the School of Science & Technology, Georgia Gwinnett College. Does in fact teach evolution at said college, and on ratemyprofessor the comments are “Very good at presenting material on evolution without revealing his personal beliefs. He addressed all sides of the issues involved.” Has also given talks on evolution and religion (though it is unclear whether he actually rejects evolution). His PhD is in biochemical research, and he has a decent research record that does not seem to touch on evolution.
- Rosa María Muñoz, Head of Biopharmacy, Department Autonomous University of Guadalajara. Google Scholar returns several publications, but none since 2003.
- Carlos M. Murillo, Professor of Medicine (Neurosurgery), Autonomous U. of Guadalajara. No research or further information located.
- C. Steven Murphree, Professor of Biology, Belmont University. Has later rejected intelligent design in favor of theistic evolution: “10 years ago I signed the Discovery Institute's ‘Scientific Dissent from Darwinism,’ a choice that I now genuinely regret.”[191] He is currently the 1184th signatory to Project Steve.
- Terrance Murphy, Prof. of Chemistry, Weill Cornell Medical College, Qatar. Has published in unrelated fields, but little in recent years.
- William Murphy, Ph.D. Chemistry, Columbia University. No information found.
[edit] N
- Takeo Nakagawa, Chancellor (Ph.D. Physics, Monash U.), White Mountains Academy. Has some research in completely unrelated fields.
- Glen Needham, Associate Professor of Entomology; Ohio State U. Works in an unrelated field, but known proponent of Intelligent Design. Played a significant role in the Bryan Leonard affair,[192] which once again showcased the dubious tactics of the ID movement.
- Ed Neeland, Professor of Chemistry, Okanagan University. Has a few publications, but seems primarily to have published on education for the last 15 years. Runs a local creationist club where he equates the theory of Evolution with the Big Bang and abiogenesis. Does evidently not understand the notion of falsifiability.[193] Fond of strawmen.[194]
- B. K. Nelson, Research Toxicologist (retired), Centers for Disease Control and Prevention. Has done real research (unrelated field).
- Bijan Nemati, Ph.D. High Energy Physics, University of Washington. Currently Senior Engineer at the Jet Propulsion Lab (California Institute of Technology). Featured in the Discovery Institute-produced documentary “The Privileged Planet” together with several of the luminaries of the ID movement. Involved in research, but it has nothing to do with biology.
- Richard R. Neptune, Associate Professor, Department of Mechanical Engineering, University of Texas, Austin. Been involved in research, e.g. on bipedal walking (some of which explicitly relies on evolution and is evidence for, not against, it).
- David Ness, Ph.D. Anthropology, Temple University. No information, affiliation or research found.
- Paul Nesselroade, Associate Professor of Experimental Psychology, Asbury College (a small Christian liberal arts college). Contributor to The Wedge Update, a creationist blog in support of the Discovery Institute’s Wedge document, where he for instance defends Georgia State’s superintendent’s proposal to remove the word “evolution” from science textbooks since this motion may be helpful in promoting Intelligent Design.[195] May have a few low-tier publications in unrelated fields.
- Richard J. Neves, Professor of Fisheries, Virginia Tech. Retired. Has a research record, though his field is not directly related.
- Robert Newman, Ph.D. Astrophysics, Cornell University. Currently Professor of New Testament and Director of the Interdisciplinary Biblical Research Institute at the Biblical Theological Seminary of Hatfield. Has written extensively about Genesis, evolution, creationism, and science, but is not a scientist; no real research record found. Thinks that certain complex features of organisms to support e.g. predation are the work not of God, but of "malevolent spirit beings" (or perhaps "the work of non-spiritual intelligences (extra-terrestrials)"), which is evidence for the controversial hypothesis that "the fall of Satan is much earlier than that of Adam, and creation is already not so good by the time Adam comes along.[196]
- John Nichols, Ph.D. Mathematics, University of Tennessee. Later at Oklahoma Baptist University (retired) and State Secretary for the Oklahoma Gideons. Has some online documents on software application, but no real research record found.
- M.M. Ninan, Former President, Hindustan Academy of Science, Bangalore University. Batshit crazy crackpot and conspiracy theorist. Trained as an engineer, but emphatically not a scientist. Instead, Ninan is a missionary who teaches the Bible and has written a long row of books, including “Angels, Demons, and All the Hosts of Heaven and Earth”[197] and “Hinduism: What Really Happened in India”, where he tries to argue that Hinduism is much younger than previously thought and in fact originated as "a Heresy of the Early Indian Christianity established by St.Thomas who landed in India in AD 52 and had his mission till AD 72.”
- Art Nitz, Ph.D. Anatomy & Neurobiology, University of Kentucky. Now Professor of Physical Therapy at the same institution. Appears to have some publications on sports medicine. Also gives creationist talks at various venues, and is chairman of the Kentucky Family Foundation as well as president of the Frankfort Alliance Church Of The Christian And Missionary Alliance, Inc.[198]
- Alastair M. Noble, Ph.D. Chemistry, U. of Glasgow. Director of the Centre for Intelligent Design in Glasgow. Has suggested that Intelligent Design should be taught in British public schools. Calls himself “educational consultant and lay preacher.” Not an active scientist.
- Charles Edward Norman, Ph.D. Electrical Engineering, Carleton University. No current affiliation or research found.
- Scott Northrup, Chair and Professor of Chemistry, Tennessee Tech University. Appears to have some research in an unrelated field, though little from recent years found.
- William Notz, Professor of Statistics, Ohio State University. Does real research in an unrelated field. Also signatory to the 2002 Ohio Academic Freedom Act list submitted to the Ohio State Board of education.
- Omer Faruk Noyan, Assistant Professor, Celal Bayar University. May be the only signatory with any background in palaeontology. Has some low-tier publications, primarily in Turkish.
- †Hugh Nutley, Professor Emeritus of Physics & Engineering, Seattle Pacific University.
- Flemming Nyboe, Ph.D. Electrical Engineering, Technical University of Denmark. May have done some research in unrelated fields.
- †Wesley Nyborg, Emeritus Professor of Physics, University of Vermont.
- James E. Nymann, Emeritus Professor of Mathematics, University of Texas at El Paso. May have some old publications in an unrelated field, but nothing from the last 35 years found.
[edit] O
- Don Olson, Ph.D. Analytical Chemistry, Purdue University. CEO of Global Fia, Inc. and a scientist in an unrelated field. Has also written and given lectures about “the harmony between science and Christian beliefs.”
- Dónal O'Mathúna, Ph.D. Pharmacognosy, Ohio State University; professor of Bioethics & of Chemistry at Mount Carmel College of Nursing; and involved in the Xenos Christian Fellowship. Has written on a lot of issues, including (with Walt Larimore) the book “Alternative Medicine: the Christian handbook”, which defends alternative medical treatments that are based on a Christian approach to holistic health.[199] A well-known defender of woo, in particular therapeutic touch, and would probably not be able to distinguish science from pseudoscience under any circumstances.
- †John Omdahl, Professor of Biochemistry & Molecular Biology, University of New Mexico. Famous for devoting part of the final lecture in his biochemistry and molecular biology class (which avoided introducing evolution to students) with his reasons for favoring Intelligent Design. Also co-signatory to the Ad Hoc Origins Committee letter defending Phillip Johnson after Stephen Jay Gould’s scathing review of “Darwin on Trial”. In 2002 he wrote a letter to science department chairs of 77 New Mexico middle and high schools, accompanied by a copy of Michael Behe’s “Darwin’s Black Box”, using his affiliation to promote it.[200] Also contributed a chapter (co-authored with John Oller) to J.P. Moreland’s anthology “Creation Hypothesis”.[201]
- Jane M. Orient, Clinical Lecturer in Medicine, University of Arizona College of Medicine. Executive director of the wingnut quack organization[202] Association of American Physicians and Surgeons, whose journal JPANDS[203] published an infamous and deeply flawed study linking abortion and breast cancer[204] (the author, Joel Brind, is also a signatory to this list) as well as papers in support of HIV denialism. Also contact for the AAPS Educational Foundation, Doctors for Disaster Preparedness,[205] Physicians for Civil Defense, and the Southwestern Institute of Science. Known to promote anti-vaccine articles, e.g. by the Geier family.[206] Also faculty member at Oregon Institute for Science and Medicine and vehement global warming denialist.
- Rebecca Orr, Ph.D. Cell Biology, University of Texas, Southwestern. Currently at Collin County Community College. Does not seem to be involved in research at present.
- Robert D. Orr, Professor of Family Medicine, University of Vermont College of Medicine. Currently (also?) Senior Fellow of Bioethics and Human Dignity at Trinity International University. His recent work is concerned with bioethics from a Christian fundamentalist perspective; no actual scientific research located.
- Lawrence Overzet, Professor of Engineering & Computer Science, U. of Texas, Dallas. Has a decent but unrelated research record.
[edit] P
- Philip R. Page, Ph.D. Theoretical Particle Physics, University of Oxford. Currently Research Scientist at Los Alamos National Laboratory. Has done some research in an unrelated field. Has also written for Dembski’s organization, the International Society for Complexity, Information, and Design.
- Mehmet Pakdemirli, Professor of Mechanical Engineering, Celal Bayar University. Has a decent research record in unrelated fields.
- Emil Palecek, Prof. of Molecular Biology, Masaryk University (Dept. of Pharmacology). Has a decent research record, but it does not seem to touch on evolution.
- Einar W. Palm, Professor Emeritus, Dept. of Plant Pathology, U. of Missouri, Columbia. Google scholar returns no research publications.
- Sami Palonen, Ph.D. Analytical Chemistry, U. of Helsinki. Currently at the Dept. of Chemistry. Does research in an unrelated field.
- Siddarth Pandey, Assistant Prof., Chemistry, New Mexico Institute of Mining and Technology. Has done research in an unrelated field.
- Manfredo Pansa, Ph.D. Computer Science, University of Turin. No affiliation, research or information found.
- Annika Parantainen, Ph.D. Biology, University of Turku. Currently affiliated with A Rocha, a religious creationist organization. Has a few low-tier publications on ecology, fully unrelated to the issue at hand.
- Yongsoon Park, Ph.D. Nutritional Biochemistry, Washington State University. Currently at Hanyang University. Has some publications in unrelated fields (clinical nutrition).
- Janet Parker, Professor of Medical Physiology, Texas A&M University, Health Science Center. Seems to do research in unrelated fields.
- Lynne Parker, Professor of Computer Science, Distributed Intelligence Lab, University of Tennessee. Seems to be a respected scientist in her field, which is not particularly directly related to evolution – though some of her work may not be entirely irrelevant.
- Darrell R. Parnell, Ph. D. University Level Science Education, Kansas State U. No affiliation or research found for the last 50 years. Said that evolution is “not science because it's close-minded. It's not open to anything else. You've got to think outside the box, and that's what some scientists have done before."[207]
- Ken Pascoe, Ph.D. Electrical Engineering Air Force Institute of Technology. Chief, Directed Energy Weapons Safety at the US Air Force. Has done research in an unrelated field.
- Rafe Payne, Ph.D. Biology, U. of Nebraska. Emeritus professor of biology, Biola U. Appears to have a few older research publications.
- Russel Peak, Senior Researcher, Engineering Information Systems, Georgia Institute of Technology. No information or research found.
- Gérald Pech, Ph.D. Satellite Communications & Networking, Supaero. No affiliation or research found.
- †S. W. Pelletier, Emeritus Distinguished Professor of Chemistry, University of Georgia, Athens.
- Edward Peltzer,[208] Ph.D. Oceanography, University of California, San Diego (Scripps Institute). Senior research specialist at the Monterey Bay Aquarium Research Institute and has some peer-reviewed publications. Extensively used by creationists, e.g. in the Kansas Evolution Hearings;[209] after a lengthy presentation on the problems of abiogenesis Peltzer spent the last minute of his testimony to launch an incoherent rant about “the religion of naturalism” in modern science,[210] asserting his position as an old-earth creationist. He has done some serious work related to global warming, however.
- A. Cordell Perkes, Ph.D. Science Education, Ohio State U. Has a few unrelated publications, though none from the last 30 years.
- Todd Peterson, Ph.D. Plant Physiology, University of Rhode Island. Also signatory to the apparently Discovery Institute initiated Amicus Brief supporting Intelligent Design in Kitzmiller v. Dover. Has some publications in apparently unrelated fields.
- Rosalind Picard, Sc.D. Electrical Engineering & Computer Science, Massachusetts Institute of Technology. Picard is credited with starting the branch of computer science known as “affective computing” and is apparently a well-respected scientist in her field. Has voiced some reservations about intelligent design, saying it isn't being sufficiently challenged by Christians and other people of faith,[211] arguing that the media has created a false dilemma by dividing everyone into two groups - supporters of intelligent design or evolution. “To simply put most of us in one camp or the other does the whole state of knowledge a huge disservice,” she has said, which means that either she confuses Intelligent Design creationism with Guided evolution, or she has bought into the Discovery Institute “controversy” rhetoric. In any case, her views on the matter seem decidedly woolly.
- Martin Poenie, Associate Professor of Molecular and Cell Biology, University of Texas, Austin. Involved in real research, and is indeed among the vanishingly small number of signatories with real credentials and a relevant research background. But even though he is a signatory to the list, Poenie does not appear to reject evolution, and has for instance voiced his opposition to creationist attempts to get the Texas Board of Education to adopt more creationist friendly language in their public education standards.[212] He has explained his position in a letter to the Board,[213] where he also pointed out that the Discovery Institute had used his name without authorization in their “40 Texas Scientists Skeptical of Darwin” list (a spin-off from the Scientific Dissent one targeted specifically at Texas).
- Richard W. Pooley, Professor of Surgery (retired), New York Medical College. Has done research in an unrelated field. Also affiliated with the Ludwig von Mises Institute in Auburn, Alabama.
- Carl Poppe, Ph.D. Physics, University of Wisconsin. Currently at Livermore Laboratory, and has some publications in unrelated fields, but none from the last 15 years has been located.
- Mark C. Porter, Ph.D. Chemical Engineering, MIT. Appears in a few publications from the early 70s; no updated information found.
- William J. Powers, Ph.D. Physics, University California, San Diego. No further information found.
- Ernest Prabhakar, Ph.D. Experimental Particle Physics, California Institute of Technology. Open Source Product Manager. Does not appear to be involved in scientific research.[214]
- Tony Prato, Prof. of Ecological Economics, University of Missouri (emeritus). Has done real research in an unrelated field.
- David Prentice,[215] Professor, Dept. of Life Sciences, Indiana State University. Not currently affiliated with Indiana State; instead Senior Fellow for Life Sciences at the Family Research Council and former science advisor to Sam Brownback (a known promoter of creationism[216]). Most famous as a stem cell research opponent who (against his better judgment) has attempted to claim that we don't need to fund embryonic stem cell research because adult stem cells can do so many things.[217] His claim was backed up by the assertion that adult stem cells have treated at least 65 human diseases, but when scientists curious about the number checked his list[218] of research allegedly supporting his claim, they found that one entry was based on an anecdote in a newspaper article, others on statements of personal opinion in Congressional testimony, and of the cited references few if any actually support Prentice’s claim.[219][220] Despite being obviously fraudulent, the list has been cited innumerable times by wingnuts who like Prentice’s conclusion and would like it to be correct,[221] such as Karl Rove.[222]
- Mark Pritt, Ph.D. Mathematics, Yale University. Cannot verify any current affiliation or research.
- Mark L. Psiaki, Professor of Mechanical and Aerospace Engineering, Cornell U. A very respectable scientist, in an unrelated field.
- Alexander F. Pugach, Ph.D. Astrophysics, Ukrainian Academy of Sciences. Appears to have some publications in unrelated fields.
- Pattle Pun. Professor of Biology, Wheaton College (a religious institution), and Fellow of the Discovery Institute’s Center for Science and Culture. Calls himself a ‘progressive’ creationist, and has a few publications in real, low-tier journals, as well as many publications in religious journals and several religious books.
- William Purcell, Ph.D. Physical Chemistry, Princeton U. Currently CEO, Molecular Design International Inc. No recent publication found.
- Georgia Purdom,[223]Ph.D. Molecular Genetics, Ohio State University. Associated with Answers in Genesis. Purdom is a young earth creationist and signatory to the CMI list of scientists alive today who accept the biblical account of creation, has contributed several articles to AIG’s house journal Answers Research Journal, and was an early critic of Richard Lenski’s famous experiment, entering the fray even before Andy Schlafly. She is fond of the Different Worldviews gambit, but has also claimed that “the Christian worldview accounts not only for morality but also for why evolutionists behave the way they do. Even those who have no basis for morality […] hold to a moral code […] because in their heart of hearts they really do know the God of creation, despite their profession to the contrary. Scripture tells us that everyone knows the biblical God, but that they suppress the truth about God”. So there. Known for her ability to take any strong evidence for a hypothesis to be evidence for a completely opposite one by applying the standard creationist data handling rules: distort, mangle, quote-mine, confuse and assert.[224] Despite her own assertions to the contrary, she is not, by any measure, a scientist.[225]
- Christian W. Puritz, Ph.D. Mathematics, University of Glasgow. Published a paper or two in the 1970s, but seems later to have been involved primarily in mathematics education. No affiliation found.
[edit] R
- Larry B. Rainey, Principal Space Systems Engineer, Missile Defense Agency. Works in a completely unrelated field.
- Fazale Rana,[226] Ph.D. Chemistry, Ohio University (where he received his Ph.D.; he is currently affiliated with the Reasons To Believe Ministry). Well-known old earth creationist, author of The Cell's Design: How Chemistry Reveals the Creator's Artistry, and unfailing apologist for religious fundamentalism. Rana has written (published in BIO-Complexity) on why harmful bacteria would exist if they were created by a good and benevolent god. Evolution could apparently not answer that one.[227] Rejects the possibility of extra-terrestrial life since there was one unique Jesus who died for all sinners; since he didn’t on other planets such aliens must be without sin or else not exist, and the latter is apparently more plausible.
- Luke Randall, Ph.D. Molecular Microbiology, University of London. Apparently a young earth creationist. Affiliated with the Department of Food and Environmental Safety, Veterinary Laboratories, and co-author of some research papers[228]
- Paul Randolph, Ph.D. Mathematical Statistics, University of Minnesota. No information, affiliation, or publications confirmed.
- James E. Rankin, Ph.D. General Relativity, Yeshiva U. Consultant at Rankin Consulting. Google Scholar returns a single 1979 paper.
- Don Ranney, Emeritus Professor of Anatomy and Kinesiology, University of Waterloo. Has published in unrelated fields. Currently appears to spend his time doing pseudo-philosophy, including praising Mario Beauregard & Denyse O'Leary’s Non-materialist neuroscience tract “The Spiritual Brain”.[229]
- Dennis Dean Rathman, Staff Scientist, MIT Lincoln Laboratory. Seems to be second-author on a few publications in unrelated fields. Also signatory to a HIV denialist petition for a “Scientific Reappraisal of the HIV-AIDS Hypothesis”.[230]
- Alfred G. Ratz, Ph.D. Engineering Physics, U. of Toronto. Most recent publication from 1975. No current affiliation or information found.
- David Reed, Ph.D Entomology, University of California, Riverside. No further information, affiliation, or research found.
- Colin R. Reeves, Professor of Operational Research (Ph.D. Evolutionary Algorithms), Coventry University. Currently emeritus. Has published papers on applications of neural networks to pattern recognition problems. Has also published articles with the Biblical Creation Society and said that “[w]ithout the initial activity of an intelligent agent, the evolutionary mill has no grist to work on.”[231] Also on the editorial team of Bio-Complexity and affiliated with the Biologic Institute.
- Patricia Reiff, Director, Rice Space Institute, Rice University. Real researcher in an unrelated field. Has said that she agreed to have her name added to the Discovery list because there are events in the evolutionary process that are mathematically “quite improbable.” Reiff is nevertheless convinced by the evidence for evolution, and her additional claim that “life from nonlife is very, very improbable”[232] has nothing to do with evolution.
- Scott A. Renner, Ph.D. Computer Science, University of Illinois at Urbana-Champaign. Has done a bit of science. No current academic affiliation found (works for MITRE).
- Anthony Reynolds, Ph.D. Philosophy of Science (thesis on the Argument for Design), University of London. No information located.
- Dan Reynolds, Ph.D. Organic Chemistry, U. of Texas, Austin. Senior Scientific Investigator, GlaxoSmithKline; Chairman, the Triangle Association for the Science of Creation, and heavily involved in creation "science". Appears to have some publications in unrelated fields.
- Michael C. Reynolds, Assistant Professor of Mechanical Engineering, University of Arkansas, Fort Smith. Currently department head; hard to locate any substantial research, but seems to have a few papers in unrelated fields.
- Terry Rickard, Ph.D. Engineering Physics, University of California, San Diego. Research Director at Distributed Infinity, Inc. Appears to do research in an unrelated field.
- John P. Rickert, Ph.D. Mathematics, Vanderbilt University. Currently a Catholic priest who campaigns against reductionism by giving talks in churches. Young earth creationist, and seems to have quite a bit of trouble understanding even the basics of reasoning.[233] No research record found; not a scientist.
- Karen Rispin, Assistant Professor of Biology, LeTourneau University (a fundamentalist institution “built upon a foundation of biblical authority”). Does apparently not have a Ph.D. and is as such not formally qualified for the list even by the Discovery Institute's already relaxed standards. Has published some abstracts on wheelchairs, as well as on local web outlets, but it is doubtful that these count as research publications.
- Eliot Roberts, Ph.D. Soil Chemistry, Rutgers University. Director of the Lawn Institute, a non-profit organization that provides lawn care advice, for instance on how to mow the lawn properly. Does have some research publications from the 1960s.
- Arthur B. Robinson, Professor of Chemistry, Oregon Institute of Science & Medicine. Infamous crank magnet, dominionist, AIDS denialist, and responsible for the Oregon Petition.
- Mark A. Robinson, Ph.D. Environmental Science, Lacrosse University. No research or affiliation found.
- Nigel E. Robinson, Ph.D. Molecular Biology, University of Nottingham. No current affiliation or research found.
- Edson R. Rocha, Research Assistant Professor, Microbiology, East Carolina University. Currently Pharmacist/Biochemist, State University of Londrina. Has some publications; those few that are not unrelated to evolution assume and does not attempt to challenge it.
- John S. Roden, Associate Professor of Biology, Southern Oregon University. Has some publications in unrelated fields (environmental issues and plant health/ecology).
- Charles A. Rodenberger, Ph.D. Aerospace Engineering, University of Texas, Austin. Later professor at Texas A&M University; now retired, but teaches Sunday school and writes for Livestock Weekly. Has said that he is “convinced that Evolution is a nonscientific teaching based on faith because the laws of physics and chemistry prove that evolution of living molecules from the random interaction of hydrogen atoms is statistically impossible.” Has implored the United Methodist Church to teach the “evolution/creation controversy” in church.[234] Has a few unrelated publications, fewer in reputable journals, and none from the last 30 years.
- Miguel A. Rodriguez, Undergraduate Lab. Coordinator for Biochemistry, University of Ottawa. Retired. May have contributed to some not obviously related research.
- E. Byron Rogers, Prof. of Chemistry; Chair, Dept. of Mathematics & Physical Sciences, Lubbock Christian U. No research record located.
- Quinton Rogers, Prof. of Physiological Chemistry, Dept. of Molecular Biosciences, Univ. of California, Davis, School of Vet. Medicine. Has some publications in not obviously unrelated fields (and seems to accept evolution in those). Nevertheless caught praising Lisa A. Shiel’s “The Evolution Conspiracy”,[235] as reported in the book description on Amazon. The other “experts” cited include Michael Cremo.
- Rod Rogers, Ph.D. Agronomy, Iowa State U. Prof. Emeritus at Lubbock Christian U. No research record found. Apparently not a scientist.
- David Rogstad, Ph.D. Physics, California Institute of Technology. Currently “Research Scholar” at Reasons To Believe. No recent publications in peer-reviewed journals found.
- Charles T. Rombough, Ph.D. Engineering, University of Texas. President, CTR Technical Services Inc. No research in peer-reviewed journals found.
- Daniel Romo, Professor of Chemistry, Texas A&M University. Apparently a real scientist, but also a staunch creationist. One of Gail Lowe’s many creationist nominee’s for various panels on education when she was in charge of the Texas Board of Education.[236] Has said that “not all data proposed within the evolution model are settled science;” his example of an open question was abiogenesis, which makes one wonder if he has even the faintest clue about what he is talking about.[237]
- Paul Roschke, A.P. and Florence Wiley Professor, Dept. of Civil Engineering, Texas A&M U. Apparently a real scientist, unrelated field.
- Kay Roscoe, Ph.D. High Energy Particle Physics, University of Manchester. Now teacher of Science (“Biology specialism”) at Saddleworth School. No research found; does not appear to be a scientist.
- Douglas Nelson Rose, Research Physicist, United States Army. Has a few (generally not peer reviewed) documents on military equipment.
- Peter M. Rowell, D.Phil. Physics, University of Oxford. Google reveals no information on Rowell.
- David W. Rusch, Senior Research Scientist, Lab. for Atmospheric and Space Physics, U. of Colorado. Does real research, unrelated field.
- Donald W. Russell, Adjunct Assistant Clinical Professor, University of North Carolina School of Medicine. MD. No research record found.
- James P. Russum, Ph.D. Chemical Engineering, Georgia Institute of Technology. Works for Multi-Chem; does research in unrelated fields.
- Rodney M. Rutland, Department Head & Associate Professor of Kinesiology, Anderson University (an institution affiliated with the extremist group South Carolina Baptist Convention). Coauthor of two 1996 papers; no other research found.
[edit] S
- Lennart Saari, Adjunct Professor, Wildlife Biology, University of Helsinki. Young Earth creationist who has compared scientists with the clergy of the Medieval Church and responsible for persecution of dissenters. Has some publications, but admits that they provide no support for his anti-evolution stance.[238] Has said about Explore Evolution: “What a superb book! Clear, understandable, impartial, and intellectually honest.”
- David Sabatini, Professor Civil Engineering & Environmental Science, U. of Oklahoma. Respected researcher in his (unrelated) field.
- Jeffrey Sabburg, Ph.D. Physics, Queensland University of Technology. No affiliation or research found.
- Victoriano Saenz, Professor of Medicine, Autonomous University of Guadalajara. Appears to do research in unrelated fields.
- Eduardo Sahagun, Professor of Botany, Autonomous University of Guadalajara. Seems to have some genuine publications in a field that is not obviously entirely unrelated.
- Theodore Saito, Ph.D. Physics, Pennsylvania State University. Senior Engineer at Lawrence Livermore National Lab. Has been involved in research in an unrelated field, but little if any from the last 20 years found.
- Thomas Saleska, Professor of Biology, Concordia U., a famous religious institution. Not involved in research or science.
- Stanley Salthe,[239] Emeritus Professor, Biological Sciences, Brooklyn College of the City University of New York. May have done real scientific research earlier in his career, but is currently primarily doing postmodernist, structuralist or deconstructivist critiques of science, since science is part of the myth of modernism. Said that when he endorsed a petition he had no idea what the Discovery Institute was, stating that “I signed it in irritation.” While no fan of evolutionary explanations - he appears to reject Darwinism partly as “a myth congenial to Capitalism."[240] - he seems to have been pretty dismissive of intelligent design.[241] Salthe claims to be an atheist, but he has nevertheless worked closely with well-known hardcore creationists such as Don Batten.
- John C. Sanford, Courtesy Associate Professor of Horticultural Sciences, Cornell University. Young earth creationist[242] who has argued for devolution (in his book “Genetic Entropy & the Mystery of the Genome”) and defended the notion of Complex Specified Information. Has also apparently done some real research, though he is most famous for his patents. His credentials are used for all they are worth by creationists as evidence for the scientific status of Intelligent Design, and Sanford testified for instance at the Kansas Evolution hearings.[243] He also used his affiliation to give a sheen of legitimacy to the conference Biological Information: New Perspectives.
- Charles G. Sanny, Prof., Biochemistry, Oklahoma State U. Ctr. for Health Sciences. Seems to do real research in medical biochemistry.
- Fernando Saravi, Professor, Department of Morphology and Physiology, Med. Sciences School, Univ. Nacional de Cuyo. Has some publications attempting to link cell phone usage to health risks. Primarily known for his writings on “future eschatology”, the imminent end times and similar stuff, and has written books such as “Hope of Israel: The Jewish People and the Messiah,” and “The Invasion From the East: The Dangers of new Hindu philosophy and Mormonism Uncovered.”
- Phillip Savage, Professor of Chemical Engineering, University of Michigan. Real researcher in an unrelated field. Signed the petition not because he has any problems with evolution, but because he doesn’t think Darwinism explains abiogenesis, which it does, of course, not purport to do to begin with.[244]
- Dale Schaefer, Professor, Materials Science & Engineering, University of Cincinnati. Publishes in an unrelated field.
- G. Bradley Schaefer, Professor of Pediatrics, U. of Nebraska Medical Center. Currently Director, Division of Genetics, College of Medicine at the U. of Arkansas for Medical Science. Seems to be a respected scientist in a field that is at best tangentially related to evolution.
- Henry Schaefer,[245] Director, Center for Computational Quantum Chemistry, University of Georgia. Fellow of the Discovery Institute's Center for Science and Culture and Dembski's International Society for Complexity, Information, and Design.[246] Describes himself as “sympathetic” to Intelligent Design but primarily a “proponent of Jesus.” Real scientist in an unrelated field. Doesn’t really understand evolution.[247][248] The Discovery Institute has been caught attempting to inflate his credentials on several occasions.[249][250]
- Norman Schmidt, Professor of Chemistry, Georgia Southern University. Currently at Tabor College (whose slogan is “Decidedly Christian”). Has some publications in unrelated fields. Appears to have advocated Intelligent Design in various venues.
- Eduard F. Schmitter, Ph.D. Astronomy, University of Wisconsin. Retired Professor at Pan-African University. No research found.
- Andrew Schmitz, Ph.D. Inorganic Chemistry, University of Iowa. No current affiliation or research found.
- Fred Schroeder, Ph.D. Marine Geology, Columbia University. No published record since 1993, when he was working for Exxon as a petroleum geologist.
- Gerald Schroeder, Ph.D. Earth Sciences & Nuclear Physics, MIT. Lecturer and teacher at College of Jewish Studies Aish HaTorah’s Discovery Seminar, Essentials and Fellowships programs and Executive Learning Center. Has spent the last 35 years investigating “the confluence of science and Torah,” constructing excruciatingly elaborate, tortured ad hoc explanations to get the apparent age of the universe fit with the literal (non-metaphorical) Biblical six-day account of creation. Debunked in detail by Mark Perakh here. May have been a real scientist at one point, but is currently (exclusively, it seems) involved in apologetics, having written several books with titles such as “The Science of God: The Convergence of Scientific and Biblical Wisdom.” Awarded the Trotter Prize by Texas A&M University's College of Science in 2012.[251] Antony Flew credited Schroeder with changing his mind on atheism.[252]
- W. Christopher Schroeder, Associate Professor of Mathematics, Morehead State U. Has a single 2003 publication. Hardly a scientist.
- Dean Schulz, Ph.D. Computer Science, Colorado State U. President, Conceptual Assets, Inc. Has some patents; no research found.
- Jeffrey Schwartz, Assoc. Res. Psychiatrist, Dept. of Psychiatry & Biobehavioral Sciences, University of California, Los Angeles. A proponent of mind/body dualism and Non-materialist neuroscience; appeared in Expelled, where he told Ben Stein that science should not be separated from religion. Schwartz, however, seems to accept common descent and evolution, though he claims – relying on Buddhism and theology – that humans are exempt, being able to transcend those origins for reasons that seem to lay closer to Deepak Chopra than Ray Comfort. Otherwise a respectable scientist (neuroplasticity).
- J. Benjamin Scripture, Ph.D. Biochemistry, University of Notre Dame. No current affiliation found, and only a few, older research papers. He is, however, on the record as a staunch creationist who is unable to distinguished fossilized brains from rocks.[253]
- Christopher Scurlock, Ph.D. Chemistry, Arizona State University. Research Leader at Battelle. Routinely opines on climate change and evolution denial online at Vox Popoli and CSNbbs under the handle "DrTorch". No publications found.
- Ralph Seelke, Professor of Molecular and Cellular Biology, University of Wisconsin, Superior. On the board of the Biologic Institute and co-author of Explore Evolution. Testified during the Kansas Evolution Hearings.[254] One of few Intelligent Design defenders who have published in relevant fields.
- Giuseppe Sermonti, Retired Professor of Genetics, University of Perugia; also Editor of Rivista di Biologia, a journal that has published several pro-creationist “articles” (by e.g. Jerry Bergman) under the pretense of scientific peer review.[255] Sermonti is a prolific author (of e.g. “Why Is a Fly Not a Horse?”, a 2003 attempted critique of evolution), is considered one of Italy’s leading creationists, has been cited by Henry Morris, and testified during the Kansas Evolution Hearings. May have been a real scientist in his day, but is currently devoted to giving pseudoscience a sheen of legitimacy.
- Valdemar W. Setzer, Ph.D. Applied Mathematics, University of São Paulo. A follower of Rudolf Steiner’s anthroposophy[256] Works primarily on computers, education and perceived effects of computer use (it’s dangerous for children), as well as concocting (unpublished) philosophical rants defending dualism. Most of his publications are online documents, but he may have some publications in low-tier, local (mostly in Spanish) peer-reviewed journals as well.
- Granville Sewell,[257] Professor of Mathematics, University of Texas, El Paso. Has had an anti-evolutionary article published in The Mathematical Intelligencer, which is cited by the Discovery Institute as one of the “Peer-Reviewed & Peer-Edited Scientific Publications Supporting the Theory of Intelligent Design.”[258] Sewell's main schtick is that evolution violates the Second Law of Thermodynamics, and the claim is just as poorly supported as one might expect.[259][260][261] Mark Perakh has called Sewell's work “depressingly fallacious”.[262]. He is notoriously unable to see that even if evolution were to fail, it wouldn’t mean that Intelligent Design creationism is correct.[263] Also writes for Uncommon Descent.[264]
- Stephen Sewell, Assistant Professor of Family Medicine, Texas A&M University. MD; no research record found.
- Rowan Seymour, Ph.D. Computer Science, Queen’s University, Belfast. Currently hired by Partners in Health and the Ministry of Health in Rwanda to develop their medical record system. Has some online documents to his name, but does not seem to do scientific research.
- Gregory Shearer, Ph.D. Physiology, U. of California, Davis. Currently at Sanford Research. Has done research in an unrelated field.
- Robert B. Sheldon, Ph.D. Physics, University of Maryland, College Park. Currently at the National Space Science and Technology Center. Has some publications in unrelated fields. Also contributes to astrobiology research, and is for instance the author of “Comets, Information, and the Origin of Life”. Has worked with Nalin Chandra Wickramasinghe, one of the central defenders of continuous Exogenesis who himself testified for the creationists in McLean v. Arkansas Board of Education (though he is not a creationist). The connection may or may not be illuminating.
- Pingnan Shi, Ph.D. Electrical Engineering, University of British Columbia. Currently President at Overcome Depression Ministry. Has some online papers and conference transactions from earlier, but appears not to have done any research in the last 15 years.
- Evgeny Shirokov, Faculty Lecturer (Nuclear and Particle Physics), Moscow State University. Currently at Institute of Applied Physics, Russian Academy of Sciences, Nizhnii Novgorod. Seems to do research in unrelated fields.
- Mark Shlapobersky, Ph.D. Virology, Bar-Ilan University. Research Scientist at Vical Incorporated. Has done some research, though it is unclear how relevant it is to the question at hand.
- Haim Shore, Professor of Quality and Reliability Engineering, Ben-Gurion University of the Negev. A real researcher in an unrelated field. Also a hardcore crackpot who contributes to theology through (absolutely astonishingly ridiculous forms of) Gematria.[265]
- David Shormann, Ph.D. Limnology, Texas A&M University. President of the “Dive into Math” program and active in the homeschooling movement by developing materials and arranging workshops. Not an active scientist, but instead a young earth creationist who claims.” Has already displayed his belligerent incompetence on several occasions,[266] and was therefore appointed by Barbara Cargill to serve on science review panels that evaluate instructional materials for public schools submitted for approval by the Texas Board of Education in 2011.
- David K. Shortess, Professor of Biology (Retired), New Mexico Tech. Has a few (unrelated) research publications; the most recent appears to be from 1983.
- William P. Shulaw, Professor of Veterinary Preventive Medicine. Ohio State U. DVM; specialty: cattle and sheep; seems to have done some research in those fields. Also signatory to the 2002 Ohio Academic Freedom Bill list submitted to the Ohio State Board of education.
- Khawar Sohail Siddiqui, Senior Research Associate (Protein Chemistry), University of New South Wales. Does have a research record, and his expertise is not entirely irrelevant to the question at hand. None of his publications challenge evolution, however, and his writings suggest that Siddiqui supports and assumes the theory of evolution in his work.
- Theodore J. Siek, Ph.D. Biochemistry, Oregon State University. Toxicologist. President of Analytic Bio-Chemistries Inc. Has some publications in unrelated fields. Author of “Questions for Evolutionists”, the most pressing of which is apparently abiogenesis, which has nothing to do with evolution and thereby reveals Siek’s lack of expertise in the field. Signatory to the Discovery Institute initiated Amicus Brief supporting Intelligent Design in the Kitzmiller v. Dover case.[267][268]
- Arlen W. Siert, Ph.D. Environmental Health, Colorado State University. Industrial Hygienist at Xcel Energy. No research found.
- Charles A. Signorino, Ph.D. Organic Chemistry, University of Pennsylvania (also MAR, Westminster Theological Seminary). CEO of Emerson Resources, Inc. Has contributed to a few papers in an unrelated field.
- Arnold Sikkema, Associate Professor of Physics, Dordt College. Currently at Trinity Western University, a “faith-based institution”. Has some publications, though no recent ones. Involved with the Reformed Academic blog and persistent critic of Answers in Genesis, though he remains boldly silent on evolution (though seems willing to consider theistic evolution).
- Peter Silley, Ph.D. Microbial Biochemistry, University of Newcastle upon Tyne. Currently Managing Director of MB Consult Ltd. Has some research to his name, apparently largely concerned with safety measures and regulatory issues in biochemistry.
- John Silvius, Ph.D. Plant Physiology, West Virginia University. Senior Professor of Biology (emeritus) at Cedarville University,a fundamentalist Bible institution teaching young earth creationism. Silvius’s specialty is prairie research, though for the past 30 years his publications appear primarily in The American Biology Teacher. Central supporter for the Springboro schoolboard’s attempt to have creationism taught in public classrooms in 2011,[269] an effort that ultimately turned out unsuccessful.[270]
- Bruce Simat,[271][272] Associate Prof. of Biology at Northwestern College, a small fundamentalist school where he has for years passed blatant creationist apologetics and Flood geology off as “biology”.[273] Not a scientist. Testified at the Kansas Evolution Hearings.[274]
- †Philip Skell, Emeritus professor of Chemistry, Pennsylvania State University and member of the National Academy of Sciences (which he used to emphasize); known shill for the Discovery Institute.[275]
- Fred Skiff, Professor of Physics, University of Iowa. Scientist in an unrelated field. Very fond of the Worldview gambit (has claimed that “science is a worldview”) and strawman bashing.[276] Known to equate evolution with atheism,[277] for conflating evolution and abiogenesis,[278] and for rejecting macroevolution as “overly reductionistic”.
- Donald F. Smee, Research Professor (Microbiology), Utah State University. Has a decent publication record in an unrelated field.
- E. Norbert Smith, Ph.D. Zoology, Texas Tech University. Affiliated with Creation.com and formerly on the board of directors for the Creation Research Society. Also signatory to the CMI list of scientists alive today who accept the biblical account of creation. Smith has taught a “graduate course” for the Institute for Creation Research as well as an online course in Creation for Liberty University. He appears to have written some real research papers back in the 70s and early 80s, but for the last 25 years he has primarily focused on articles for the Journal of Creation, children’s books (of course), and anti-science books such as “Evolution Has Failed” and “Battleground University”, where he laments the fact that universities teach critical thinking to students.[279] Does not appear to have any academic affiliation, and is not a working scientist by any standards.
- Ken Smith, Professor of Mathematics, Central Michigan University. Does research in unrelated fields.
- Robert Smith, Professor of Chemistry, University of Nebraska, Omaha. Also on James Inhofe’s list of 650 scientists who supposedly dispute the global warming consensus.[280] Appears to be a real scientist, in an unrelated field, but is also a well known climate contrarian.[281] Has characterized Sarah Palin as a “rare politician … with brains.”[282]
- William F. Smith, Ph.D. in Molecular & Cellular Biology, McGill University. No information, research, or affiliation found.
- Wolfgang Smith, Emeritus Professor of Mathematics, Oregon State University. Respected enough in the field of mathematics, but also famous for his excursions into philosophy. Apparently his acceptance of Intelligent Design is the result of his philosophical contemplations, which led him to a thomistic and appropriately medieval ontology. Science be damned if it fails to adhere to his a priori metaphysics and elaborate sophistry.
- David Snoke, Associate Professor of Physics & Astronomy, University of Pittsburgh. Co-authored a paper with Michael Behe in 2004, “Simulating Evolution by Gene Duplication of Protein Feature that Requires Multiple Amino Acid Residues”, claiming to support the notion of Irreducible complexity. The paper was heavily criticized, to put it mildly,[283] and contrary to Behe’s claims (e.g. during the Kitzmiller v. Dover trial) might even undermine the notion of irreducible complexity, as Behe had to admit under oath.[284] Otherwise a respectable scientist in his (unrelated) field.
- Gregory A. Snyder, Ph.D. Geochemistry, Colorado School of Mines. Appears to be a real scientist, in an unrelated field.
- Alexandre S. Soares, Ph.D. Mathematics, Federal University of Rio de Janeiro. No information, affiliation, or research found.
- Kevin E. Spaulding, Ph.D. Optical Engineering, U. of Rochester. At Eastman Kodak Company. Has done research in unrelated fields.
- Dexter F. Speck, Associate Professor of Physiology, University of Kentucky Medical Center. Publishes in an unrelated field.
- Georg A. Speck, Ph.D. Biology, Molecular Pharmacology, University of Heidelberg. No information (apart from a single 1999 paper) or current academic affiliation found.
- Dale Spence, Emeritus Professor of Kinesiology, Rice University. Among the signatories of a Discovery Institute-initiated letter urging the Texas Board of Education to dilute the educational standards in public schools with respect to evolution.[285] Does otherwise appear to have a few publications in an unrelated field.
- Richard Spencer, Professor, Electrical Engineering, University of California, Davis, Solid-State Circuits Research Laboratory. Retired. Has some publications in an unrelated field.
- Terry W. Spencer, Former Chair, Department of Geology & Geophysics, Texas A&M University. Seismologist (wave propogation and compressive strength of rock); real scientist in an unrelated field.
- Lee M. Spetner, Ph.D. Physics, MIT. Contributor to TrueOrigin, a creationist blog network, and known fan of PRATTs.[286] Probably best known for his rejection of “macroevolution” in his 1996 book “Not By Chance! Shattering the Modern Theory of Evolution”. Has claimed that Archaeopteryx was a fraud and that mutations invariably lead to a “loss of information”. Trained as a scientist, but appears not to have been involved in scientific research for 40 years.
- Thomas M. Stackhouse, Ph.D. Biochemistry, University of California, Davis. Currently Associate Director of the Technology Transfer Center (TTC) at the National Cancer Institute. Involved in research, though in unrelated fields.
- †John Stamper, Research Physicist, Naval Research Laboratory
- Timothy Standish, Ph.D. Environmental Biology, George Mason University. Affiliated with the Geoscience Research Institute, a Seventh Day Adventist creationist organization, and signatory to the CMI list of scientists alive today who accept the biblical account of creation. Does not do research (writes for Origins and various creationist publications), and has no current academic affiliation.
- Walt Stangl, Associate Professor of Mathematics, Biola University (listed as adjunct faculty on the faculty website). Google scholar returns a single publication in Mathematics Magazine (apart from publications in Perspectives on Science and Christian Faith).
- Walter L. Starkey, Professor Emeritus of Mechanical Engineering, Ohio State University. Author of “The Cambrian Evolution”, a defense of Intelligent design (recommended by e.g. Harun Yahya), and “Evolution Exposed and Intelligent Design Explained”. Does not appear to have done any scientific research at least since the 1950s.
- Albert J. Starshak, Ph.D. Physical Chemistry, Illinois Institute of Technology. No affiliation or research newer than 1970 found.
- Andrew Steckley, Ph.D. Civil Engineering, University of Western Ontario. Chief Technology Officer of PowerMand, Inc. Appears to have some publications in an unrelated field.
- Neil Steiner, Ph.D. Electrical Engineering, Virginia Tech. Computer Scientist at USC Information Sciences Institute. Appears to have some research in unrelated fields.
- Karl Stephan, Associate Professor, Dept. of Technology, Texas State University, San Marcos. Does research in an unrelated field. Also a climate change denialist, who has argued that birth control is among the greatest threats to civilization.[287]
- Richard Sternberg, Ph.D. Biology, Florida International University; also Ph.D. Systems Science, Binghamton University. Famous alleged victim of Darwinian persecution and one of the main characters featured in Expelled: No Intelligence Allowed. Lied a lot about the situations that led him to declare “persecution”,[288] for instance by denying that he was an advocate of Intelligent Design.[289] Subsequently put on wingnut welfare and currently associated with the Discovery Institute’s Biologic Institute.
- Joseph A. Strada, Ph.D. Aeronautical Engineering, Naval Postgraduate School. Appears to be President of Strada Supply And Services in Fairfax, VA. Not a scientist. Apparently sympathetic to geocentrism, saying of the book Galileo Was Wrong, The Church Was Right that “Sungenis and Bennett examine the anomalies that arise from the Copernican model […] A must read for those who can set aside prejudices and a priori assumptions.”
- Michael Strauss, Associate Professor of Physics, University of Oklahoma. Real scientist in an unrelated field.
- Richard A. Strong, Ph.D. Chemistry, Northeastern University. May have some publications, but no current affiliation found.
- Ben J. Stuart, Ph.D. Chemical & Biochemical Engineering, Rutgers University. Currently at the Department of Civil Engineering at Ohio University. Does research in an unrelated field.
- John Studenroth, Ph.D. Plant Pathology, Cornell University. Listed as Instructor in Biology at Pinebrook Junior College, though the college closed in 1992. Also Pastor at the Kutztown (PA) Bible Fellowship Church. No research found. Studenroth is not a scientist, though he is co-author of an online creationist paper, “The Status of Evolution as a Scientific Theory”.[290]
- Dennis M. Sullivan, Professor of Biology and Bioethics, Cedarville University (a Christian fundamentalist institution). Sullivan is an MD, and also signatory to the CMI list of scientists alive today who accept the biblical account of creation. Runs a blog, Cedarethics,[291] devoted to bioethics from a fundamentalist perspective, and has published extensively on such issues, primarily in religious magazines. Does not appear to be involved in science or scientific research.
- Nigel Surridge, Ph.D. Electrochemistry & Photochemistry, University of North Carolina, Chapel Hill. Program Director, Blood Glucose Monitoring, Roche Diagnostics. Seems to be a real scientist in an unrelated field.
- Dean Svoboda, Ph.D. Electrical Engineering, The Ohio State University. No affiliation or research found.
- Chris Swanson, Tutor (Ph.D. Physics), Gutenberg College (a Bible school committed to a presuppositionalist epistemology that does not even pretend to offer a university education). Swanson is not a scientist, is not involved in research, has no research background, but writes “philosophical” articles on religious issues, for instance attempts to justify Intelligent Design.
- James Swanson, Professor of Biological Sciences, Old Dominion University. Coauthor of some research publications, none of which appear to touch on evolution. Writes for the MadSci network.
- Mark Swanson, Ph.D. Biochemistry, University of Illinois. No information located.
- Nancy L. Swanson, Ph.D. Physics, Florida State University. Currently at Abacus Enterprises. Has a few publications, unrelated field.
- Bela Szilagyi Ph.D. Physics University of Pittsburgh. His numerous publications are almost exclusively published on arXiv. No current academic affiliation found.
[edit] T
- Tetsuichi Takagi, Senior Research Scientist, Geological Survey of Japan. Appears to do real research in an unrelated field.
- Alfred Tang, Visiting Scholar (Ph.D. Physics, University of Wisconsin, Madison), The Chinese University of Hong Kong. Apparently a real scientist, but claims that “[t]he most important source of the limit of physics is the neglect of the supernatural,” and that “[t]he integration of science and theology is mutually beneficial,”[292] though he is vague about exactly what theology will contribute.
- James G. Tarrant, Ph.D. Organic Chemistry, University of Texas, Austin. Appears to have some papers from the 1990s. No current affiliation or research located.
- Greg Tate, Ph.D. Plant Pathology, University of California, Davis. No further information located.
- Philip S. Taylor, Research Fellow, Computer Science, Queen’s University Belfast. Currently a manager for the Business Intelligence Research Practice at SAP. Works on software processes and gives talks on issues such as “Business Continuity Management”.
- Richard N. Taylor, Professor of Information & Computer Science, U. of California, Irvine. Respectable researcher in an unrelated field.
- Wesley M. Taylor, Former Chairman of the Division of Primate Medicine & Surgery, New England Regional Primate Research Center, Harvard Medical School. DVM with a few publications on canine care. No current affiliation found.
- Daniel Tedder, Associate Professor, Chemical Engineering, Georgia Institute of Technology. Emeritus. Has a few publications in the completely unrelated field of waste management.
- Stephen C. Tentarelli, Ph.D. Mechanical Engineering, Lehigh University. No recent affiliation or research found.
- Charles Thaxton,[293] Ph.D. Physical Chemistry, Iowa State University. Fellow of the Discovery Institute's Center for Science and Culture and co-author of The Mystery of Life's Origin: Reassessing Current Theories, which employs the Second law of thermodynamics gambit, and “The Soul of Science”, as well as being – most famously – co-editor of Of Pandas and People.[294] Publishes widely on the relationship between Christianity and science, and although his Discovery Institute biography lists him as having "scientific publications", his single, co-authored paper from 1971 (even combined with being third author on one from 1979) does not make him a scientist.
- Lydia G. Thebeau, Ph.D. Cell & Molecular Biology, Saint Louis University. Currently Associate Professor at Missouri Baptist University. Has some publications, but her research seems to have reduced to a trickle in the last few years.
- Ernest M. Thiessen. Ph.D. Civil & Environmental Engineering. Cornell U. Currently President of iCan Systems Inc.; works on online dispute resolution and negotiation. Has some background in research projects but is not a working scientist (no academic affiliation).
- Christopher L. Thomas, Ph.D. Analytical Chemistry, University of South Carolina. Lean Six Sigma Specialist at Flextronics International Ltd. Apparently not a working scientist.
- Pavithran Thomas, Ph.D. Mechanical Engineering, Ohio State University. No current affiliation or research located.
- James R. Thompson, Noah Harding Professor of Statistics, Rice University. Has real publications, but also known for a string of online papers (not published in any respectable venues, of course) that use statistics in favor of “politically incorrect” strategies generally associated with “wingnuttery” in the war on terror, against Islamism, and to combat rampant gay behavior.[295]
- Richard Thompson, Ph.D. Computer Science, U. of Connecticut. Now Professor at U. of Pittsburgh. Real scientist, unrelated field.
- Frank Tipler, Prof. of Mathematical Physics, Tulane University, Fellow of the International Society for Complexity, Information, and Design. Has written quite a bit of crackpot literature - including “The Physics of Christianity” - which quite overshadows his earlier technical research.[296] Inventor of the Omega Point, a ghastly pseudo-scientific mix of cosmology and theology that supposedly proves God’s existence and the immortality of intelligence, or something. His book on the matter, “The Physics of Immortality”, was described by George Ellis as a “a masterpiece of pseudoscience … the product of a fertile and creative imagination unhampered by the normal constraints of scientific and philosophical discipline.”[297] Michael Shermer devoted a chapter of his book “Why People Believe Weird Things” to Tipler’s theory. Tipler also writes for Uncommon Descent.
- Mark Toleman, Ph.D. Molecular Microbiology, Bristol University. Seems to have done real research. Admits to not being an expert on evolution, but is skeptical because he dislikes Darwin’s relatives(!) and because of Jesus.[298]
- Olivia Torres, Professor-Researcher (Human Genetics), Autonomous University of Guadalajara. Appears to be a real scientist.
- Ferenc Tóth, Ph.D. Agricultural Sciences, Szent István University, Gödöllö. Currently at the University of Tennessee Institute of Agriculture, Large Animal Surgery Staff. Does have some publications related to his profession (not to evolution).
- Tibor Tóth, Professor of Product Information Engineering (D.Sc. Hungarian Academy), University of Miskolc. Appears to be a real scientist in an unrelated field.
- Harold Toups, Ph.D. Chemical Engineering, Louisiana State University; currently instructor at LSU’s Cain Department of Chemical Engineering. No research record found.
- James Tour, Chao Professor of Chemistry, Rice University. Real scientist in an unrelated field. Has said that he felt the explanations offered by evolution are incomplete, and that he found it hard to believe that nature can produce the machinery of cells through random processes, though does not (officially) rule it out. Does not accept Intelligent Design, though he accepts the bogus creationist distinction between micro- and macro-evolution. Describes himself as a Messianic Jew.
- Ide Trotter, Ph.D. Chemical Engineering, Princeton University. Member of Texans for Better Science Education who has testified before the Texas Board of Education on several occasions during their bizarre discussions over science standards for public schools. Trotter claims that the major scientific discoveries of the 20th century make evolution harder and harder to defend. As a consequence he was appointed by the Board to the science review panels that evaluate instructional materials submitted for approval by the Board. Trotter runs an investment management company and is former dean of business/professor of finance at Dallas Baptist U; not a scientist.
- Royal Truman, Ph.D. Organic Chemistry, Michigan State University. Currently employed at BASF AG’s headquarters in Ludwigshafen, Germany. Affiliated with Answers in Genesis, and has written several articles for the Creation Ex Nihilo Technical Journal. Also a signatory to the CMI list of scientists alive today who accept the biblical account of creation. No scientific research publications found.
- James Tumlin, Associate Professor of Medicine, Emory University. Real medical scientist (e.g. kidney injury). Also involved in urging the Cobb County, Georgia, school board to adopt the "teach the controversy" strategy in 2002, saying that "students should know that scientists have doubts about evolution."[299] The school board famously required biology textbooks to be equipped with anti-evolution disclaimers,[300] and originally intended to teach creationism in public schools.[301]
[edit] U
- Lasse Uotila, M.D., Ph.D. Medicinal Biochemistry, University of Helsinki. Has a research record in medicine. Also on the list of Physicians and Surgeons for Scientific Integrity (a.k.a. Doctors Doubting Darwin).[302]
[edit] V
- Jirí Vácha, Professor Emeritus of Pathological Physiology, Institute of Pathophysiology, Masaryk University. Has a research background in radiation hematology and endocrinology. According to his website,[303] his work on evolution comprise various articles on philosophy of science published in local journals (mostly theology journals) in Czech, in which has been trying to soften the “hard” scientific approach to medical and biological problems through the prism of phenomenologic methodology developed from Heidegger, Husserl, and Christian neoscholasticism. Vacha is also on the Editorial Board of BIO-Complexity.
- Jairam Vanamala, Postdoctoral Research Associate, Faculty of Nutrition, TAMU, College Station. Currently Assistant Professor at Colorado State U. Does real research in an unrelated field.
- Robert VanderVennen, Ph.D. Physical Chemistry, Michigan State University. Currently Executive Director of the Association for the Advancement of Christian Scholarship. Previously Founding Executive Director of Christian Studies International, and has indeed had several positions with religious institutions and written several religious books. VanderVennen is not a scientist and seems to have no scientific publications.
- Jeffrey L. Vaughn, Ph.D. Engineering, University of California, Irvine. Has done some real research. Also author of “Beyond Creation Science” (with Timothy P. Martin), which defends old earth creationism (against young earth creationism), argues that the Noachian Flood was local rather than global, and advocates preterism.
- Sergey I. Vdovenko, Senior Research Assistant, Department of Fine Organic Synthesis; Institute of Bioorganic Chemistry and Petrochemistry, Ukrainian National Academy of Sciences. Google scholar returns some research publications, though they are unrelated to the theory of evolution.
- Brandon van der Ventel, Ph.D. in Theoretical Nuclear Physics, Stellenbosch University. Currently professor at Stellenbosch, where he appears to do real research in an unrelated field. Young earth creationist, and author of a screed called “Darwin and the lie of evolution” in which he claims that the promotion of the theory of evolution is part of an anti-Christian campaign, and that if we doubt the scientific truth of a literal interpretation of Genesis, then “many other stories in the Bible may be questioned or made out to be unscientific or implausible.”[304] How this makes his dissent from Darwinism “scientific” is unclear.
- Charles N. Verheyden, Prof. of Surgery, Texas A&M College of Medicine. Plastic surgeon; has publications in that (unrelated) field.
- Etienne Y. Vernaz, Professor & Director of Research Director, CEA (French Atomic Energy Agency). A real scientist, in an unrelated field, who claims that in his entire career he has never come across any contradiction between Scripture and the world as revealed by science.[305] Claims that evolution confronts epistemological difficulties, that there are no transitional fossils, and that intelligent design is an idea shared by non-Christians, too.
- Mike Viccary, Ph.D. Sold [sic] State Chemistry, U. of Bradford. Writes articles for creation.com; no affiliation or actual research found.
- Vincente Villa, Emeritus Professor of Biology, Southwestern University. Google Scholar returns no research for the last 30 years.
- Suzanne Sawyer Vincent, Ph.D. Physiology & Biophysics, University of Washington. Associate professor of Biology at Oral Roberts University. Hardly involved in research; cannot reasonably be counted as a scientist.
- Vladimir L. Voeikov, Vice-Chairman, Chair of Bio-organic Chemistry, Faculty of Biology Lomonosov, Moscow State University. A true crackpot. Also member of The Chopra Foundation and associate editor of “WATER: A Multidisciplinary Research Journal”. Promoter of homeopathy, and in particular the notion of water memory, claiming that several experiments support the idea.[306]
- Øyvind A. Voie, Ph.D. Biology, University of Oslo. No research or current academic affiliation found.
- Robert G. Vos, Ph.D. Civil/Structural Engineering, Rice University. No research or current affiliation found.
- Anne E. Vravick, Ph.D. Environmental Toxicology, University of Wisconsin, Madison. No affiliation or research found.
- András Vukics, Ph.D. Physics, University of Szeged. Postdoc at U. of Innsbruck; apparently involved in research in an unrelated field.
[edit] W
- Margil Wadley, Ph.D. Inorganic Chemistry, Purdue University. Not involved in science or research.
- Carston Wagner, Associate Professor of Medicinal Chemistry, University of Minnesota. Currently professor and endowed chair. Seems to be a respectable scientist in an unrelated field.
- John Walkup, Emeritus Professor of Electrical & Computer Engineering, Texas Tech University. Currently on the staff of Christian Leadership Ministries. Has said that a legitimate debate on origins must have at least two hypotheses: “Then, like in a court of law, you can go back and look at the evidences – which of these two hypotheses appear more reasonable,”[307] which is rather far removed from how science actually works (but close to how Phillip Johnson thinks it works). No research record found.
- Linda Walkup, Ph.D. Molecular Genetics, University of New Mexico Medical School. Homeschooler. No academic affiliation or research located. Member of the Creation Science Fellowship of New Mexico, and gives talks in churches to friendly audiences on topics such as “Development of Antibiotic Resistance: Evolution or Design”. Her articles (or rants) on Junk DNA are featured at Answers in Genesis.
- Max G. Walter, Associate Professor of Radiology, Oklahoma U. Health Science Center. Radiologist. Not much research found.
- John C. Walton, Professor of Reactive Chemistry, University of St. Andrews. Known Intelligent Design proponent,[308] who e.g. contributed a chapter “The origin of life: scientists play dice” to Norman Nevin’s anthology “Should Christians Embrace Evolution”. Seems to deliver regular lectures in churches on the topic.[309] A real scientist nonetheless, but none of his research touches on evolution. Also on the editorial team of Bio-Complexity.
- Robert Waltzer, Associate Professor of Biology, Belhaven College. Was at the board of reviewers of Explore Evolution. Funny that all of those reviewers also appear on this list. According to the website Returntotheword, which apparently ranks Christian colleges from a Biblical literalist point of view, Belhaven College “does not teach evolution as a viable option.”[310]
- Ge Wang, Professor of Mathematical Sciences, University of Iowa. Has a respectable research record in an unrelated field.
- Tianyou Wang, Research Scientist Center for Advanced Studies in Measurement & Assessment, University of Iowa. Has a respectable research record in an unrelated field.
- James Wanliss, Associate Professor of Physics, Embry-Riddle University. Currently Associate Professor of Physics and Computer Science at Presbyterian College. Maintains that “the green movement is not about science, or the environment, but is offered as an alternative to Christian faith”. His research is completely unrelated to evolution. Also publishes for the Cornwall Alliance, a global warming denialist organization, and appears on James Inhofe’s list of 650 scientists who supposedly dispute the global warming consensus.[311]
- Amy Ward, Ph.D. Mathematics, Clemson University. No current affiliation or research found.
- Jason David Ward, Ph.D. Molecular Biology and Biochemistry, Glasgow University. Google reveals no current affiliation or research.
- Wade Warren, C.J. Cavanaugh Chair in Biology, Louisiana College, whose mission includes a commitment to the inerrancy of the Bible. Notable for his role in promoting the creationist-friendly Louisiana Science Education Act of 2008.[312] Also testified before the Texas Board of Education during their 2009 evolution hearings.[313] Does not appear to be an active scientist.
- Robert L. Waters, Lecturer, College of Computing, Georgia Institute of Technology. No further information found.
- Joe Watkins, Military Professor, Department of Mechanical Engineering, United States Military Academy. Google returns no information.
- Todd Watson, Assistant Professor of Urban & Community Forestry, Texas A & M University. Certified Master Arborist working on tree preservation in urban areas, and has some publications on these matters, which are at best tangential to the issue at hand.
- Woody Weed, Mechanical Engineer, Science & Technology Division, Sandia National Labs. Works on vacuum system and vacuum process engineering. No connection to biology or scientific research.
- Gerald Wegner, Ph.D. Entomology, Loyola University. Currently Technical Director at Varment Guard Environmental Services, Inc. There is a “Gerald Wegner” who used to be president of the Creation Research, Science Education Foundation as well, which is probably not a coincidence. Has a two or three papers and some online documents to his name in a not obviously related field.
- George C. Wells, Professor of Computer Science, Rhodes University. Does (unrelated) research/development of computer languages.
- Jonathan Wells, Ph.D. Molecular & Cell Biology, University of California, Berkeley. Member of the Discovery Institute and follower of the Unification Church. Author of “Icons of Evolution” and “The Politically Incorrect Guide to Darwinism and Intelligent Design”.[314] One of the central figures of the Intelligent Design movement, and one of the few people in the ID movement with demonstrably legitimate credentials. Also into HIV denialism (in fact, he is a signatory to Rethinking AIDS, a list of HIV “skeptics” and to the petition for a “Scientific Reappraisal of the HIV-AIDS Hypothesis”.[315])
- Kjell Erik Wennberg, Ph.D. Petroleum Engineering, Norwegian University of Science & Technology. Production engineer with Statoil; does not appear to be involved in research.
- Robert Wentworth, Ph.D. Toxicology, University of Georgia. Has taught at a local Christian school; currently Human Resources Manager of The University of Georgia, which is not a research position.
- R. P. Wharton, Ph.D. Electrical Engineering, Georgia Institute of Technology. No research or current affiliation found.
- Elden Whipple, Affiliate Professor of Earth & Space Sciences, University of Washington. Appears to have done some real research, but no updated information found (he is not on the UoW's faculty list as of 2012).
- Howard Martin Whitcraft, Ph.D. Mathematics, University of St. Louis. Online math instructor with no academic affiliation. Not a scientist.
- Lowell D. White, Industrial Hygiene Specialist, University of New Mexico. Has a management position; no research record found.
- Mark White, Professor of Chemical Engineering, Georgia Institute of Technology. Appears to be a real scientist in an unrelated area.
- Raleigh R. White, IV, Professor of Surgery, Texas A&M University, College of Medicine. Apparently a certified plastic surgeon, and has some publications in the Annals of Plastic Surgery.
- Paul Whitehead, Ph.D. Chemical Thermodynamics, University of Natal. No information or current affiliation found.
- John H. Whitmore, Associate Professor of Geology, Cedarville University. Associated with Creation Ministries International. Staunch young earth creationist and supporter of Flood geology and Flooddidit arguments.[316] Has publications in the Journal of Creation, but is not a scientist by any stretch of the imagination (even his M.S. in biology is from the Institute for Creation Research). Also signatory to the CMI list of scientists alive today who accept the biblical account of creation. Cedarville's geology program "holds to a literal six-day account of Genesis."
- Christian A. Widener, Ph.D. Mechanical Engineering, Wichita State University. Affiliated with the National Institute for Aviation Research. His work (or education) is not remotely related to evolution.
- Leslie J. Wiemerslage, Emeritus Professor (Ph.D. Cell Biology, Univ. of Pennsylvania), Southwestern Illinois College. May have done a little research back in the days, but none of it seems to describe any challenges to evolution
- Roger Wiens, Ph.D. Physics, University of Minnesota; researcher at Los Alamos National Laboratory. Has an otherwise impressive publication record and research background, and has been substantially involved in criticizing creationist geophysics, in particular by supplying thorough defenses of radiometric dating[317]
- Jay L. Wile, Ph.D. Nuclear Chemistry, University of Rochester. Famous proponent of creation geophysics. His CV states that he was an assistant professor from 1990 to 1995, but he has no current academic affiliation. Works instead with Apologia Educational Ministries (which he founded), a publisher of creationist home-schooling material such as his own “Exploring Creation” series of textbooks, which attempts to reconcile young earth creationism with scientific principles.[318] Many of the inaccuracies and inadequacies of Conservapedia’s information about science and scientific theory have been traced back to Wile.[319]
- Gregg Wilkerson, Ph.D. Geologic Science, University of Texas, El Paso. Has done some real work in geology, including contributing careful debunkings of the Paluxy River tracks. Heavily into Biblical archaeology, but supports an old earth and has said that he “believes in some aspects of evolutionary theory, but [not] that humans evolved from fishes and apes.”[320]
- Christopher Williams, Ph.D. Biochemistry, Ohio State University. Maintains that “[f]ew people outside of genetics and biochemistry realize that evolutionists can still provide no substantive details at all about the origin of life … Clearly the origin of life – the foundation of evolution – is still virtually speculation,”[321] although the question of abiogenesis does, of course, not have anything to do with the theory of evolution. No current academic affiliation found.
- Sarah M. Williams, Ph.D Environmental Engineering, Stanford University. Google returns no current academic affiliation or research.
- Gordon L. Wilson, Ph.D. Environmental Science and Public Policy, George Mason University. Apparently that sounds better than "Senior Fellow of Natural History and Director of Student Affairs at the New Saint Andrews College (formerly in the biology department at Liberty University)," an unaccredited conservative Calvinist institution that teaches a “Biblical worldview” to something in the vicinity of 130 four-year students. Active in the Creation Biology Society and frequent contributor to the Answers magazine. His specialty seems to be the Origins of Natural Evil in the biological world,[322]. According to Wilson “Many pathogens, parasites, and predators have sophisticated genetic, morphological, and behavioral arsenals (natural evil) that clearly testify to the God’s eternal power and divine nature (Romans 1:20), i.e. they are not the result of mutation and natural mutation,” but that they used to be “completely benign in all respects but at the Fall the enemy (Satan, et. al.) engaged in post-Fall genetic modification and/or bestiality that resulted in creatures with malignant behavior and morphology.” Take that, evolutionists.
- Samuel C. Winchester, Klopman Distinguished Professor Emeritus, North Carolina State University College of Textiles. Works primarily on supplier selection in the textile industry, which, of course, has nothing to do with evolution.
- Étienne Windisch, Ph.D. Engineering, McGill University. Has done some research related to the entirely unrelated field of engineering.
- Luman R. Wing, Associate Professor of Biology, Azusa Pacific University. Pastor Wing is currently adjunct faculty at Calvary Chapel Bible College. No record of any research found.
- J. Mitch Wolff, Professor of Mechanical Engineering, Wright State University. Has contributed to publications in an unrelated field.
- John Worraker, Ph.D. Applied Mathematics, University of Bristol. Also known as Bill Worraker. Young Earth creationist who has a long record as a creationist activist.[323] Associated with Genesis Agendum and the Biblical Creation Society, and has written for Answers in Genesis. Does not hold an academic position.
- Shawn Wright, Ph.D. Crop Science, North Carolina State University. Currently (?) a horticulturist at the Ohio State University South Center. No publication record found.
[edit] Y
- Alexander Yankovsky, Assistant Professor of Physical Oceanography, Nova Southeastern University. Has done real scientific work in an unrelated field.
- Chee K. Yap, Professor of Computer Science, Courant Institute, New York University. Real scientist, in an unrelated field.
- Pablo Yepes, Research Associate Professor of Physics & Astronomy, Rice University. Real scientist, in an unrelated field.
- Irfan Yilmaz, Professor of Biology, Dokuz Eylul University. Author of “Evolution: Science or Ideology”, which “aims to show how the theory of evolution has been abused to deny religious thought, and that the scientific evidence set forth to prove it actually serves the opposite,” and which “includes rational explanations derived from the Islamic understanding of creation,” according to the blurb.
- Hansik Yoon, Ph.D. Fiber Science, Seoul National University. No research or current affiliation found.
- Yasuo Yoshida, Ph.D. Physics, Kyushu University. Appears to do real research in an entirely unrelated field.
- Frank Young, Ph.D. Computer Engineering, Air Force Institute of Technology. No research or current academic affiliation found.
- Patrick Young, Ph.D. Chemistry, Ohio University. Involved in industrial research on film and polyester products (and hence lauded by e.g. Creation Ministries International as a cutting edge scientist for the space age). Also signatory to the CMI list of scientists alive today who accept the biblical account of creation. According to his bio at Answers in Genesis his “current interest involves the study of the time domain and how quantum theory and/or multidimensional string theories may be used to explain the Genesis account,” and he is the author of "The Genesis Flood- Where Did The Water Come From And Where Did It Go?" Claims that scientists fail to recognize the truth of creationism because they are arrogant and won't recognize any god but themselves.[324]
- Douglas C. Youvan, Former Associate Professor of Chemistry, MIT, founder of Karios Scientific Inc. Has some real research to his name, but claims that he “feels called by the Great Commission of Jesus Christ to ‘extinguish Darwinism’ and spread the word that an intelligent man can believe in literal Creation.” Has also written the (rabid) creationist screed “Questions of a Christian Biophysicist”.[325]
[edit] Z
- Leo Zacharski, Professor of Medicine, Dartmouth Medical School. Does real medical research; also involved in apologetics.
- †Stanley E. Zager, Professor Emeritus, Chemical Engineering, Youngstown State University.
- David Zartman Ph.D. Genetics & Animal Breeding, Ohio State University. Seems to have done research related to the cattle industry, and appears to have some patents to his name.
- Jonathan A. Zderad, Assistant Professor of Mathematics, Northwestern College (a small, private Christian liberal arts college devoted to a “Biblical worldview”). Has written articles such as "Creationism: A Viable Philosophy of Mathematics", but done little if any real, peer-reviewed research.
- Ke-Wei Zhao, Ph.D. Neuroscience, University of California, San Diego. Appears to have done research, e.g. related to enzymes.
- Yuri Zharikov, Post-Doctoral Research Fellow, Simon Fraser University. Does apologetics, but also real research on ecology.
- Audris Zidermanis, Ph.D. Nutrition & Molecular Biology, Texas Woman’s University. Currently teaching “science” (not further specified) at Dallas Christian College, and is a “Special Guest Lecturer” at the Institute for Creation Research’s School of Biblical Apologetics. Google Scholar reveals no scientific publications. Testified before the Texas Board of Education during their 2009 evolution hearings.[326]
- Robin D. Zimmer, Ph.D. Environmental Sciences, Rutgers University. Currently a private biotech consultant and affiliate of the Center for Faith and Science International (he is not involved in research). Staunch supporter of Tennessee’s proposed creationist-friendly 2011 bill and its “Teach the Controversy” language.[327]
- John C. Zink, Former Assistant Professor of Engineering, University of Oklahoma (retired). Has some online documents to his name, but no peer-reviewed journal publications found.
- John Frederick Zino, Ph.D. Nuclear Engineering, Georgia Institute of Technology. Google reveals no current affiliation, and no research.
- †Frederick T. Zugibe, Emeritus Adjunct Associate Professor of Pathology, Columbia U. College of Physicians and Surgeons. A legitimate forensics experts, but best known for his crucifixion and Shroud of Turin studies. Has made numerous TV appearances on these matters.
- Henry Zuill, Emeritus Professor of Biology, Union College (a Seventh Day Adventist college). Has written several articles for various religious organization, but seems to have done no scientific research at least for the last 30 years. Also signatory to the CMI list of scientists alive today who accept the biblical account of creation.
[edit] See also
[edit] References
- ↑ 1.0 1.1 Weekly Science Quiz by Douglas Clark (Monday, January 7, 2013)
- ↑ About the petition
- ↑ The list of names
- ↑ Hundert Autoren Gegen Einstein, edited by Hans Israel; et al. (1931). R. Voigtänder Verlag.
- ↑ Few Biologists but Many Evangelicals Sign Anti-Evolution Petition New York Times article, February 21, 2006
- ↑ John Lynch did some work on the 2008 version of the list, finding that only about 2% of the signatories had any training in evolutionary biology, Post from January 8, 2008.
- ↑ Alexander, Denis; Numbers, Ronald L. (2010). Biology and Ideology from Descartes to Dawkins. Chicago: University of Chicago Press. ISBN 0-226-60841-7.
- ↑ Pennock, Robert T. (2001). Intelligent design creationism and its critics: philosophical, theological, and scientific perspectives. Cambridge, Mass: MIT Press, pp. 322
- ↑ Sandwalk, post from January 2007
- ↑ Skip Evans, post at the NCSE website, April 8, 2002.
- ↑ Bruce Chapman, Letter to the Editor, New York Times, December 12, 2005
- ↑ The Discovery Institute’s Center for Science and Culture, Key Resources for Parents and School Board Members
- ↑ Bruce Chapman, Center for Science and Culture article, originally posted in 2003
- ↑ Eldredge, Niles & Scott, Eugenie C. (2005). Evolution vs. Creationism: An Introduction. Berkeley: University of California Press, p. 215
- ↑ TalkOrigins Archive on the Kansas evolution hearings, part 8
- ↑ Steven Schafersman of the Texas Citizens for Science (the real pro-science organization, not to be confused with denialist organizations with deceptively similar names) (2003): Texas Citizens for Science Responds to Latest Discovery Institute Challenge.
- ↑ "Another Steve leaves the Dark Side", Stones and Bones
- ↑ David Berlinski in the Encyclopedia of American Loons
- ↑ Blogpost from August 2007
- ↑ Chicago Tribune, Article from July 3, 1992.
- ↑ Deltoid, Post from December, 2008
- ↑ Blogpost from December 2008
- ↑ Debate letter, published in the (not particularly respectable newspaper) Expressen, February 13, 2009 (in Swedish).
- ↑ J. Bloom, Theistic Evolution Isn’t Fit for Survival, published in Biola Magazine, Fall 2011.
- ↑ Minnesota Citizens for Science, article responding to an op-ed piece written by Boldt and Todd Flanders.
- ↑ Said when signing a letter drafted by John Calvert (of the Intelligent Design Network) to be submitted to the Ohio State Board of Education during their evolution wars in 2002.
- ↑ Blogpost from 2006
- ↑ According to his Wikipedia article
- ↑ For instance as summed up by the Royal College of Obstetricians and Gynaecologists in The Care of Women Requesting Induced Abortion
- ↑ National Cancer Institute report, 2003
- ↑ Or perhaps just very bad journal, Blogpost from May 2009
- ↑ Marketed here; warning: the layout and color combinations on the website are not for the faint of heart.
- ↑ CanadianChristianity.com, Intelligent Design Opponent Blocked by Federal Council, (this one is scary), including comments from Brown.
- ↑ Nancy Bryson in the Encyclopedia of American loons.
- ↑ British Centre for Science and Education, page on Buggs.
- ↑ British Centre for Science and Education, Article on the Estelle Morris letter
- ↑ British Centre for Science and Eduction, Article on Burgess.
- ↑ According to his project description at the group’s homepage.
- ↑ Russell Carlson’s entry in the Encyclopedia of American Loons
- ↑ He lays out his views on science and religion here, if you are interested – but you can easily predict what he is going to say without reading the document.
- ↑ seattlest.com, article from August 23, 2006
- ↑ G.B. Chase, Gay Evangelicals?, published on Messiah College’s homepages.
- ↑ At least according to this blog, which seems pretty committed to its authenticity.
- ↑ Seminar description from his personal homepage.
- ↑ D. Clark, Stretching Out Heavens, published on the Creation Moments site (officially he is just asking questions, but we all know where he wants to go).
- ↑ Interview with Cogdell on the Christian Leadership Ministries webpage.
- ↑ According to the University the reason was simply that her services weren’t needed anymore; she was after all not tenured. Washington Post, article from February 3, 2006
- ↑ Thoughts from Kansas, post from May 2011
- ↑ Blogpost from February 2008
- ↑ Homepage for Leadership University.
- ↑ Steve Schafersman, Texas Citizens for Science Responds to Latest Discovery Institute Challenge, September 2, 2003.
- ↑ H. Walters, Intelligent Design Presentation Heavily Debated, The Signal, February 22, 2006.
- ↑ Here, along with many of the other usual suspects. Discovery Institute Fellow David DeWolf is marked as Counsel of Record.
- ↑ Here, along with many of the other usual suspects. Discovery Institute Fellow David DeWolf is marked as Counsel of Record.
- ↑ John A. Davison’s entry in the Encyclopedia of American Loons
- ↑ For instance W. DeJong & H. Degens, The Application of Artificial Evolution in Software Engineering, retrieved from evolutionskepsis.nl.
- ↑ Blogpost from November 2005
- ↑ The letter.
- ↑ Deltoid, Post from December, 2008
- ↑ Washington Times, op-ed from December 10, 2008
- ↑ Detwiler’s Three Position Papers on Intelligent Design in the Public Schools.
- ↑ [ NewsAdvance Article, February 15, 2009]
- ↑ Robert DiSilvestro, Where’s the Evidence, letter to the Boston Review, March 1997.
- ↑ R. DiSilvestro, Some Useful Info for Students in Undergraduate Biology Classes, at the Christian Leadership Ministries homepage. And yes, it’s the predictable list: the Miller-Urey experiment, Appeals to the Bible, fine-tuning, macroevolution has never been observed, Darwin was wrong on the fossil record, misunderstanding Punctuated equilibrium, and [Haeckel’s embryos were wrong … yes, Haeckel’s embryo, no less (he claims that Haeckel presented them “a while back”) – and this guy fancies himself as having any kind of scientific integrity!
- ↑ Dynamist blog, reporting on an exasperatingly “balanced” article in The Greenville News.
- ↑ And this guy claims a PhD in physics! His conversion story is found here.
- ↑ Quackwatch, Some notes on Jean Drisko.
- ↑ Quote from Dritt featured on Fundies say the darndest things.
- ↑ His contribution to the Proceedings of the Second International Conference on Creationism appears to be featured here.
- ↑ The claim that taxes are a threat to our freedom is stated explicitly in its mission
- ↑ Post from July 2009.
- ↑ For a game of "spot the creationist PRATTs" you could do worse than using his Life is Organized Without Darwinian Transitions; this is Kent Hovind territory.
- ↑ Their webpage.
- ↑ K. Duff, Dating, Intimacy, and the Teenage Years; check out the editorial review. He has also written “Bride of the High Places”, “Restoration of Men” and “Restoration of Marriage”, which presumably continue in the same vein.
- ↑ Comment in S. Jaschik, Believing in God and Evolution, Inside Higher Ed., October 14, 2009.
- ↑ Debate between van Dyke and George Murphy here.
- ↑ Quoted in the Salina Journal, article from March 16, 2008.
- ↑ Interview; the English translation is horrible.
- ↑ Here; warning – that is a link to Denyse O’Leary’s blog.
- ↑ Eckel’s profile with Answers in Genesis.
- ↑ The letter, reproduced at RedStateRablle.
- ↑ Science and Creation, Dissent from Darwin: So, who are these geologistst?, post from July 18, 2009.
- ↑ Huntington webiste, News section, May 13, 2004. By the way, the faculty of Huntington University subscribe to the following statement of faith: “We believe the Bible to be the inspired, the only infallible, authoritative Word of God.”
- ↑ A summary of a presentation by Ewert, which was not particularly impressive, apparently.
- ↑ Nick Matzke at the Panda’s Thumb, The immune system cross-examination still burns, post from December 20, 2010. Also discussed here.
- ↑ Blogpost from November 2011
- ↑ British Centre for Science and Education, Page on the Estelle Morris letter affair.
- ↑ Personal comment on this blog.
- ↑ In her own words (cached).
- ↑ Misunderestimation, [ post from July 20, 2006.
- ↑ Seattlest.com, [article from August 23, 2006].
- ↑ Ann Gauger in the Encyclopedia of American loons.
- ↑ Blogpost from October 2011
- ↑ The letter
- ↑ Blogpost from April 2008
- ↑ Article in Polish
- ↑ He is a signatory to this petition as well, which by the way has “independent researchers” as a separate category. The petition was commented on by Sean Carroll at Preposterous Universe, here (post from May 29, 2004).
- ↑ Open Parachute, Who are the “dissenters from Darwinism”?, published January 23, 2008.
- ↑ Guillermo Gonzalez in the Encyclopedia of American Loons.
- ↑ Blogpost from July 2007
- ↑ Blogpost from May 2009
- ↑ The Fall 2012 series has Gunasekera together with Walter Bradley, Mike Keas, and Robert Marks. Enough said.
- ↑ TalkOrigins FAQ about the Kansas Evolution Hearings.
- ↑ blogspost from 2005
- ↑ The Study, and the rebuttal
- ↑ Blogpost from April 2006
- ↑ Records for Healey at the British Centre for Science Education.
- ↑ Goodle Scholar results for D Hiddle
- ↑ Blogpost from November 2006
- ↑ Heddle's blog
- ↑ Such as this one. All the standard fallacies, lies, cherry-picking, and PRATTs are there, but of course children won’t know. She blatantly misrepresents physics as well; almost as if she didn’t know what she is talking about.
- ↑ M. Hill, Review: Adaptive State of Mammalian Cells and its Nonseparability Suggestive of a Quantum System, in Scripta Medica (Brno), October 2000 (not exactly a highly ranked journal).
- ↑ John Hodgson, Scientists avert new GMO crisis, Nature Biotechnology 18, 13 (2000).
- ↑ His homepage at the faculty website.
- ↑ Florida Citizens for Science, Post from June 2005
- ↑ For instance this.
- ↑ Florida Citizens for Science, Post from June 2005.
- ↑ Interview with Don Batten and Carl Wieland for Creation Ministries International.
- ↑ Union University website, News section, September 16, 2002.
- ↑ A representative example, Pandas Thumb, post from July 2010.
- ↑ Blogpost from January 2008
- ↑ Blogpost from October 2010
- ↑ Matt Young, Taner Edis: "Why Intelligent Design Fails: A Scientific Critique of the New Creationism". Rutgers University Press.
- ↑ KLTV, [article from January 23, 2009.
- ↑ Jelsma, T. (2009): “Is Creation Science Reformed?”
- ↑ Here, if you really want to read it. It is discussed at the website In Defense of Darwinism.
- ↑ Yup
- ↑ Here; although it may just be an unfortunate formulation, Jones’s endorsement of creationism makes the goal seem rather telling.
- ↑ Archive with two of Jones’s papers in defense of teaching creationism in public schools.
- ↑ Forskning.no, article from March 2007 (in Norwegian).
- ↑ Blogpost from March 2006
- ↑ MSNBC, Article from January 7, 2005
- ↑ At least it looks that way from this blogpost, which quotes something called “Quiverfull Digest”.
- ↑ [1].
- ↑ At least it seems that way, though the screed in the Daily Cougar in which he is cited is incoherent enough to make it unclear.
- ↑ At least according to this blogpost, posted April 13, 2005.
- ↑ The publication is here, if you want. The opening sentence is “Replacing his faith in Creator God with misplaced certainty in the power of science, Darwin subjected himself to a disquiet life and a hopeless death,” and you know you are in for some rigorous, impartial evaluation of the evidence for evolution in what follows.
- ↑ Sun and Shield, blogpost from September 13, 2005.
- ↑ Brian Landrum, Worldview, Ethics, and Society, presentation, Ratio Christi.
- ↑ The Ledger, Article from February 11, 2009
- ↑ According to the Discovery Institute’s own interview.
- ↑ LJWorld News, Article from February 21, 2006.
- ↑ The Link, Newletter for Kansas Citizens for Science, Summer/Fall 2004.
- ↑ ExChristian.net, post from November 29, 2005.
- ↑ Cleveland.com, The Plain Dealer, article from May 12, 2002.
- ↑ Sandwalk, The Biologic Institute Expands, post from August 8, 2009.
- ↑ A discussion of the journal can be found at the NCSE homepage: G. Branch, The Latest “Intelligent Design” Journal.
- ↑ W.E. Lillo, Religion vs. Science.
- ↑ Times Higher Education, [2], article from June 2000.
- ↑ Newsgroups post from April 12, 2006.
- ↑ Flyer here.
- ↑ Interview here.
- ↑ Article in the Wall Street Journal, November 2005
- ↑ Blogpost from March, 2006
- ↑ Homepage for the project.
- ↑ Article in the Spokesman.
- ↑ Nick Gier, Academic Tenure is Not Sacrosanct.
- ↑ Here, along with many of the other usual suspects. Discovery Institute Fellow David DeWolf is marked as Counsel of Record.
- ↑ Robert Marks in the Encyclopedia of American Loons
- ↑ Blogpost from April 2008
- ↑ Blogpost from September 2007
- ↑ Blogpost from August 2009
- ↑ Blogpost from May 2009
- ↑ The slides, if you must (download(!)).
- ↑ G.A. Marsch, How Liberals Can Be Anti-Science.
- ↑ The Review
- ↑ Interview with Answers in Genesis
- ↑ C.G. Weber, The Bombardier Beetle Myth Exploded, NCSE; article originally published in the Creation/Evolution Journal as far back as 1981, which illustrates how resilient the myth is and how little attention creationists actually pay to accuracy and truth.
- ↑ Genomicron, Reducibly complex bombardier beetles, post from March 2008.
- ↑ Homepage here.
- ↑ The Augusta Chronicle, report from a debate featuring McMullen, published November 4, 2004.
- ↑ Their “mission and doctrinal statement” is here, and it seems to put some rather severe restrictions on what science they can teach. Unsurprisingly, they appear to be unaccredited with respect their science programs
- ↑ Spokesman.com, article from September 8, 2005.
- ↑ Angus Menuge in the Encyclopedia of American Loons
- ↑ A debate between Menuge and PZ Myers is available here
- ↑ Menuge’s faculty homepage at Concordia
- ↑ Menuge’s testimonial, Blogpost from May 2005
- ↑ Stephen Meyer in the Encyclopedia of American Loons
- ↑ An apt review of the book
- ↑ Comments on a less apt review of the book.
- ↑ Blogpost from June 2007.
- ↑ Their “mission and doctrinal statement” is here, and it seems to put some rather severe restrictions on what science they can teach. And indeed, they appear to be unaccredited with respect their science programs
- ↑ The Wikipedia Article on the Kansas Evolution hearings.
- ↑ You can see Brian Miller discuss Intelligent Design with pastor Ron Lewis here. The advertisement reads: “Still confused if Darwinism is a fact or a theory? Is Science the friend or foe of your faith? Actually, an old adage says, ‘a little of Science can harm a person's faith, but a lot of Science will bring him or her right back to God.’”
- ↑ TalkOrigins’s page on the controversy.
- ↑ Panda’s Thumb, April 2006
- ↑ Scott Minnich’s entry in the Encyclopedia of American Loons.
- ↑ The Exodus Case
- ↑ Debunking Christianity, Blogpost from May 2008, concerning Möller’s documentary “The Exodus Conspiracy”; including a review of “The Exodus Case”.
- ↑ Stones and Bones, post from January 2012
- ↑ Panda’s Thumb, Archive on the Leonard affair.
- ↑ The Phoenix, A Review of Falsifiability, February 24, 2010, a response to an earlier article by Neeland (which has apparently disappeared from the site).
- ↑ Evolutionblog, Post from May, 2004.
- ↑ P. Nesselroade, Georgia, Ohio, and the Developing Dilemma for Darwinists.
- ↑ The Panda's Thumb, post from March 30, 2006.
- ↑ Available on google books. Blurb: “Traditional Angelology, Demonology, Satanology, Ghosts, Spirits and all the Hosts of Heavens and Earth are treated in the traditional hierarchial way. The book also presents the angels and the hosts of all dimensions as parts and organs of the body of God.”
- ↑ Ralph Long has some interesting information on Nitz and his affiliations here.
- ↑ Amazon product information on O’Mathúna & Larimore’s book.
- ↑ Forrest, B & Gross, P. (2004). Creationism's Trojan Horse: The Wedge of Intelligent Design, p.159-62. Oxford University Press.
- ↑ Review of the chapter by Graham Oppy, who was not impressed.
- ↑ Quackwatch List of questionable organizations
- ↑ Kathleen Seidel reviews JPANDS, article from March 12, 2006
- ↑ Review of the paper at Respectful Insolence and at Good Math, Bad Math
- ↑ Wikipedia article on Doctors for Disaster Preparedness.
- ↑ Respectful Insolence, post from December 2011
- ↑ LJWorld News, Article from February 21, 2006.
- ↑ Ed Peltzer in the Encyclopedia of American Loons.
- ↑ Peltzer’s contributions to the hearings.
- ↑ Blogpost from January 2006
- ↑ The Record, article from 2007
- ↑ NCSE, Evolution: Still Deep in the Hearts of Textbooks.
- ↑ M. Poenie & D. Hillis, Letter to the Board of Education, November 4, 2003.
- ↑ And either he is being very disingenuous in this comment thread, or he did not really understand what he was signing when he put his name on this Dissent list.
- ↑ David Prentice in the Encyclopedia of American loons.
- ↑ Blogpost from 2007
- ↑ Blogpost from July 2006
- ↑ The list
- ↑ Blogpost from 2007
- ↑ Letter to Science from three scientists concerning Prentice’s list.
- ↑ Blogpost from November 2006
- ↑ Blogpost from July 2006
- ↑ Georgia Purdom in the Encyclopedia of American Loons.
- ↑ A good example is described by Off Resonance, post from May 6, 2008
- ↑ As shown by her (and others’) attempts at defending what they are working at; Pharyngula Post from January 10, 2011
- ↑ Fazale Rana in the Encyclopedia of American Loons
- ↑ Apparently even young earth creationist Todd Wood took issue with that lame attempt.
- ↑ British Centre for Science and Education, Page on Randall.
- ↑ Review here.
- ↑ The petition can be found here.
- ↑ British Centre for Science and Education, Page on Reeves.
- ↑ Charisma Magazine, article from May 31, 2008; it should be mentioned that Charisma Magazine is a fundamentalist tract associated by the New Apostolic Reformation.
- ↑ As evidenced, for instance, by this post on his blog.
- ↑ The Panda’s Thumb, post from January 14, 2012, see comment 2.
- ↑ Lisa A. Shiel, The Evolution Conspiracy.
- ↑ TFN article from March 25, 2011
- ↑ Baptist Press article from 2011.
- ↑ Google group summary of an ID presentation in Finland, quoting Saari.
- ↑ The In Defense of Darwinism site is a good resource on Salthe.
- ↑ His homepage.
- ↑ Panda’s Thumb on the list, Article from February 2006
- ↑ Nick Matzke, Pandas Thumb, Inside Higher Ed on creo/ID volume, post from March 1, 2012, on one of Sanford’s talks at a Seventh-Day Adventism meeting, as part of his review of the Biological Information: New Perspectives pseudoconference.
- ↑ Talkorigins, transcript
- ↑ According to the Discovery Institute’s own interview.
- ↑ The In Defense of Darwinism site is a good resource on Schaefer.
- ↑ Evolutionwiki, International Society for Complexity, Information, and Design.
- ↑ Henry Schaefer’s Misunderstanding, lclane2.net post.
- ↑ Atheonomy.com, Report on Schaefer’s lecture “Big Bang, Stephen Hawking, and God”.
- ↑ The Panda’s Thumb article on Schaefer, May 3, 2004.
- ↑ Barbara Forrest, Academe Online article from 2005
- ↑ Press release from the Texas A&M University. The award again underlines the firm creationist basis of the Texas A&M.
- ↑ New York Times, Article from November 4, 2007
- ↑ Seriously; look at the evidence.
- ↑ Transcripts are archived here.
- ↑ Blogpost from 2007
- ↑ e.g. V. Setzer, Antroposophy, online publication.
- ↑ Sewell’s entry in the Encyclopedia of American Loons.
- ↑ Talkorigins’s 2006 list. Note that even judge Jones in the Kitzmiller v. Dover case rejected the claim that Sewell’s article did any such thing, as explicitly stated in the ruling.
- ↑ Good Math, Bad Math, Post from October 2006 and from September 2007.
- ↑ Denialism, blogpost from September 2007
- ↑ Jason Rosenhouse, “Does Evolution Have a Thermodynamics Problem”, published on the CSI site, May 19, 2006. The article focuses on Sewell’s work in particular.
- ↑ Talkreason, post from January 2006
- ↑ Blogpost from August 2007
- ↑ And yes, it’s just more of the same; blogpost from April 2007, despite the fact that the points have already been refuted an uncannily large number of times.
- ↑ Frum heretic, post from December 3, 2009.
- ↑ TFN report, July 7 2011
- ↑ Here, along with many of the other usual suspects. Discovery Institute Fellow David DeWolf is marked as Counsel of Record.
- ↑ The list of signatories, including Siek in particular, is discussed in some detail here.
- ↑ Middletown Journal, article from Aug 1, 2011.
- ↑ Steven Carter Novotni, Creating a Divine Mess, City Beat article from September 28, 2011.
- ↑ Simat’s entry in evolutionwiki.
- ↑ Bruce Simat in the Encyclopedia of American Loons.
- ↑ commentary from a former student.
- ↑ Simat’s testimony, TalkOrigins report.
- ↑ The In Defense of Darwinism site is a good resource on Skell.
- ↑ Blogpost, March 8, 2007
- ↑ Post from November 2005
- ↑ Blogpost from October 2005.
- ↑ Available online here.
- ↑ Deltoid, Post from December, 2008
- ↑ Bruce E. Johansen, post at Nebraskans for Peace].
- ↑ Bruce E. Johansen, How to Spot a Climate Contrarian.
- ↑ Michael Lynch, Simple evolutionary pathways to complex proteins, Protein Sci. 2005 September; 14(9): 2217–2225.
- ↑ Blogpost from August 2011
- ↑ S. Schafersman, [3], post from September 2, 2003.
- ↑ Good Math, Bad Math, Post from May 2007 that primarily discusses Spetner’s use of a hopeless mathematical argument about search spaces and optimization processes.
- ↑ Blogpost from September 27, 2010.
- ↑ The true story
- ↑ Blogpost from June 2007
- ↑ Yes, the fallacies and misleading claims are all there. See for yourself, if you must.
- ↑ Here, if you are so inclined.
- ↑ What is Ultimately Possible in Physics, entry for essay contest for the FQXi community.
- ↑ Charles Thaxton on evolutionwiki.
- ↑ Thaxton’s contributions to the book (discussion).
- ↑ An example. It does not require much knowledge of statistics, or anything else, to see that the input assumptions are, shall we say, dubious, and the results similarly unconvincing.
- ↑ Sean Carroll, The Varieties of Crackpot Experience, Cosmic Variance, post from January 5, 2009.
- ↑ George Ellis, Piety in the sky (Book review), Nature 371, 115 (8 September 1994).
- ↑ British Centre for Science Education, Page on Toleman, including a letter in which he explains his stance.
- ↑ Online Athens, article from September 22, 2002.
- ↑ The NCSE on the Cobb County stickers, Cobb County stickers.
- ↑ The NCSE on the Cobb County case, Article from September 27, 2002.
- ↑ Florida Citizens for Science, Post from June 2005.
- ↑ Vacha’s CV on his website
- ↑ Van der Ventel, Darwin and the lie of evolution, p.1.
- ↑ World religion watch (appears to be a fundie site), article from January 21, 2012
- ↑ A paper of his, posted on the Bad Science webpage, January 2000.
- ↑ Old Interview in Lubbock online.
- ↑ J. C. Walton Intelligent Design and its Critics, in Dialogue: An International Journal of Faith, Thought, and Action.
- ↑ A typical program from the Crieff Adventist Church.
- ↑ The criteria used by Returntotheword to rank colleges, and the list.
- ↑ Deltoid, Post from December, 2008
- ↑ Blogpost from June 2008
- ↑ Live Blogpost from March 2009
- ↑ Extensive review here (Reed A. Cartwright at Pandasthumb).
- ↑ The petition can be found here.
- ↑ Hanna Rosin, “Rock of Ages, Ages of Rock”, The New York Times Magazine, November 25, 2007.
- ↑ Rober Wiens (2002): Radiometric Dating: A Christian Perspective, The American Scientific Affiliation, revised version.
- ↑ As documented, for instance, in Jesus Camp.
- ↑ The Loom, post from February 21, 2007
- ↑ Pittsburgh Press, Article from Aug.1, 1990.
- ↑ Jim Nelson Black, “The Death of Evolution: Restoring Faith and Wonder in a World of Doubt” (a rabid creationist screed)
- ↑ Blogpost from September 2004
- ↑ British Centre for Science Education discusses the list
- ↑ CMI, Interview with Young, from 2001.
- ↑ His website
- ↑ Live Blogpost, March 2009
- ↑ The Sensuous Curmudgeon, Post from March, 2011 | http://rationalwiki.org/w/index.php?title=A_Scientific_Dissent_From_Darwinism&oldid=1794491 | CC-MAIN-2017-22 | en | refinedweb |
Summary: Microsoft Scripting Guy, Ed Wilson, illustrates how to explore WMI methods and writable properties from a Windows PowerShell script.
Weekend Scripter
Microsoft Scripting Guy, Ed Wilson, here. One of the things I like about Windows PowerShell is the ease in which I can modify things, experiment with things, play with things, and finally end up with a decent script. It is a rainy day in Charlotte, North Carolina, and it is beginning to actually look like spring outside. I am listening to Radiohead on my Zune HD, and sipping a cup of organic mint tea…it is just one of those sort of laid-back days.
I had an extremely busy week, with a presentation to the Charlotte IT Professionals Group thrown in for fun. I absolutely love speaking to user groups, either in person or via Live Meeting, and it is always the highlight of my week. I firmly believe and wish to support these groups because they are the embodiment of community. By the way, if you are looking for a Windows PowerShell user group in your area, check out the PowerShell Group site for listings. If you know of a group that is not listed in this directory, please add it. There is also help available if you want to start a user group.
One of the goals of a good scriptwriter should be to write reusable code. In Windows PowerShell, this generally means creating functions. When I created the Get-WmiClassesMethods.ps1 Windows PowerShell script and the Get-WmiClassProperties.ps1 script, I had code reuse in mind. For details about the essential logic of today’s script, refer to the following articles:
Use PowerShell to Find WMI Classes that Contain Methods
Use PowerShell to Find Writable WMI Properties
Therefore, I encapsulated the main logic into individual functions. Due to time constraints, the functions are not as reusable as I would like them to be, but I did have reuse in mind at the time. This also illustrates that there is often a tradeoff between code reuse and development time. It takes time to design a completely portable function, and sometimes it takes a long time to analyze how a function might be reused and to abstract everything.
On a rainy Saturday morning, I thought it would be a good idea to combine the two scripts into a single script that produces a consolidated listing of implemented methods and writable properties from all the WMI classes in a particular WMI namespace. I know I can use such a list, and I hope you find it useful as well.
To be sure, the script is a bit complicated, and it is most certainly long. That is why I uploaded the Get-WmiClassMethodsAndWritableWmiProperties.ps1 script to the Scripting Guys Script Repository. But the script is shorter and easier to understand that a corresponding script written in VBScript. In fact, I always wanted to write such a script in VBScript but I never got around to doing it—and I wrote the book on VBScript and WMI.
All right, I want to start with the Get-WmiClassMethods function. I will highlight the changes I made. The first change I made was to add a class parameter to the Param portion of the function. The reason for this is that I want to collect the WMI classes outside of the function, and pass the individual WMI class information to the function for processing. This will enable me to work on the same class and to retrieve methods and properties as appropriate. Because I am going to pass the class information to the functions, I do not need to collect the classes inside the function. Therefore, I comment out the lines in the function that perform the class collection duties. This revised portion of the code is shown here.
Function Get-WmiClassMethods
{
Param(
[string]$namespace = “root\cimv2”,
[string]$computer = “.”,
$class
)
$abstract = $false
$method = $null
#$classes = Get-WmiObject -List -Namespace $namespace | Where-Object { $_.methods }
#Foreach($class in $classes)
#{
I changed the output to include the Word METHODS in addition to the WMI class name. To do this, I used the New-Underline function. One of the cool things about this function, and one reason I use it so often, is that it allows one to specify the character to use for underlining, in addition to the colors to use for the text and the underline string.
I also had to comment out the closing curly bracket (closing braces…squiggly things…whatever) of the foreach class loop. The comments that I included when I originally created the script make lining up the curly bracket easier. This portion of the script is shown here.
if($method)
{
New-Underline -strIN $class.name
New-Underline “METHODS” -char “-“
}
$method
} #end if not abstract
$abstract = $false
$method = $null
# } #end foreach class
Because the two functions (Get-WmiClassMethods and Get-WmiClassProperties) are written in a similar style, similar changes are required for inclusion here. I need to add a class parameter to the Param section, and I need to comment out the code that gathers the WMI classes and implements the foreach class loop. The revised code is shown here.
Function Get-WmiClassProperties
{
Param(
[string]$namespace = “root\cimv2”,
[string]$computer = “.”,
$class
)
$abstract = $false
$property = $null
#$classes = Get-WmiObject -List -Namespace $namespace
#Foreach($class in $classes)
#{
I add the word PROPERTIES to the output portion of the script, and remove the closing curly bracket. This revision is shown here.
if($property)
{
New-Underline -strIN $class.name
New-Underline “PROPERTIES” -char “-“
}
$property
} #end if not abstract
$abstract = $false
$property = $null
# } #end foreach class
} #end function Get-WmiClassProperties
I moved the collection of the WMI classes, and the foreach $class construction to the entry point of the script. In addition, I added the –class parameter when I call each function. This portion of the script is shown here.
# *** Entry Point to Script ***
$classes = Get-WmiObject -List -Namespace $namespace
Foreach($class in $classes)
{
Get-WmiClassMethods -class $class
Get-WmiClassProperties -class $class
}
When the script runs, the output that is shown in the following image appears in the Windows PowerShell | https://blogs.technet.microsoft.com/heyscriptingguy/2011/03/12/explore-wmi-methods-and-properties-via-powershell-script/ | CC-MAIN-2017-22 | en | refinedweb |
view raw
So I have an assignment for my C++ class. Basically we have to create a 3x3 multidimensional array, calculate the sum of rows, sum of columns, sum of diagonal values and sum of anti-diagonal values, I usually just input 1 2 3 4 5 6 7 8 9 as values as a starting point.
Now, I'm not trying to be rude but my teacher is not very good, we basically spend 2 hours on a single problem without her doing much explaining. Other than that I started with C++ Primer and Programming: Principles and Practice Using C++ so I believe I'll be able to learn a lot on my own.
Anyhow my questions are probably quite stupid, but if anyone feels like helping, here they are:
for (i = 0; i < row_num; ++i)
for (j = 0; j < col_num; ++j)
if (i + j == row_num - 1)
anti-diagonal += A[i][j];
int sumRows[row_num] = { 0 };
#include "../../std_lib_facilities.h"
#include <iostream>
using namespace std;
#define row_num 3 //no. of rows
#define col_num 3 //no. of columns
int main()
{
int i = 0;
int j = 0;
int diagonal = 0;
int antidiagonal = 0;
int sumRows[row_num] = { 0 };
int sumCol[col_num] = { 0 };
int A[row_num][col_num];
//Input to matrix
for(i=0; i<row_num; i++)
for (j = 0; j < col_num; j++)
{
cout << "A[" << i << "]" << "[" << j << "]: ";
cin >> A[i][j];
sumRows[i] += A[i][j];
sumCol[j] += A[i][j];
}
cout << endl;
//Print out the matrix
for (i = 0; i < row_num; i++)
{
for (j = 0; j < col_num; j++)
cout << A[i][j] << '\t';
cout << endl;
}
//prints sum of rows
for (i = 0; i < row_num; i++)
cout << "Sum of row " << i + 1 << " "<< sumRows[i] << endl;
//prints sum of columns
for (j = 0; j < row_num; j++)
cout << "Sum of column " << j + 1 << " " << sumCol[j] << endl;
//Sum of diagonal values
for (i = 0; i < row_num; i++)
diagonal += A[i][i];
//Sum of antidiagonal values
for (i = 0, j = 2; i < row_num, j >= 0; i++, j--)
antidiagonal += A[i][j];
/*for(i=0; i<row_num; i++)
for (j = 2; j >= 0; j--)
{
antidiagonal += A[i][j];
}
*/
cout << "\nSum of diagonal values: " << diagonal << endl;
cout << "Sum of antdiagonal values: " << antidiagonal << endl;
return 0;
}
1) Your commented out loop sums all values, not just those along the antidiagonal.
2) It is different from your approach in that it will iterate over every value in the matrix, but then it will only add to the total if it detects it is in one of the appropriate cells. Your solution only iterates over the appropriate cells and doesn't have to evaluate any
ifs, so it will be more efficient. However, you need to change your loop condition to
i < row_num && j >= 0. Using a comma here will discard the result of one of the checks.
3)
int sumRows[row_num] = { 0 }; initializes the whole
sumRows array with 0's. | https://codedump.io/share/TX2CsAk0IMfl/1/c-for-loops-and-multidimensional-arrays | CC-MAIN-2017-22 | en | refinedweb |
So, you've got this cool new Edison, but what to do with it? Well, it DOES talk to the internet fairly well, so let's make it talk to the internet!
Interestingly enough, the ethernet libraries for Arduino work just as well on the Edison, so why not use them?
I am piggybacking off of a very good tutorial found here:
As a result of my piggybacking, I am using (and trusting) NeoCat's app that gets the twitter token you will need to tweet with your Edison. While I am a rather trusting person (perhaps destructively so), you may want to dig around to get your token from . Unfortunately, at the time of writing this, the developer site is by invite only....so NeoCat's option was definitely the best..
I am also counting on the fact (it IS a fact, right?) that you have already gone through the Getting Started tutorials Intel has so kindly made for us. You can find them here:
And, of course, FOLLOW ME ON TWITTER! @foxrobotics
Step 1: Let's Add Some Sensors
A randomly tweeting Edison can be fun, but let's add some sensors to it.
For this instructable, I will be using the Grove Starter Kit for Edison to save myself a bit of time. I'm normally a proponent of breadboards and custom circuit boards, but hey, why not? You can easily replicate this by connecting a button yourself that has a 10k pull down resistor. Search for "Arduino button circuit" online to find it.
In this example, I will be connecting the button to D8. I will also be connecting a light sensor to A0 (again, search "Arduino light sensor" for explicit circuits). Connect them as shown in the picture if you have the same Grove kit.
Step 2: Let's Write Some Code!
Your code it all it's glory! You'll notice that many bits and pieces are "missing" from NeoCat's code. The Linux kernel handles a lot of that for you. I did remove the wait command (if you saw his/her code) as it always hangs. I don't know if this is due to a change in the Twitter API, his/her site, or something in the Edison.
(Note, if the Instructables website messes up the formatting, copy and paste everything, past it in the Arduino Edison IDE, do a search and replace for "<br>" and replace with nothing. Then go to Tools>>Auto Format. Your code will look happy again.)
#include <SPI.h> #include <Ethernet.h> #include <Twitter.h>
void setup(){ pinMode(8, INPUT); } void loop(){ if(digitalRead(8)){ tweetMessage(); delay(1000); } } void tweetMessage(){ Twitter twitter("your token here"); //Our message (in lolcat, of course) String stringMsg = "All ur lightz be ";
stringMsg += analogRead(0);
stringMsg += " out of 1023. Dey belongs to us nao."; //Convert our message to a character array char msg[140];
stringMsg.toCharArray(msg, 140);
//Tweet that sucker! twitter.post(msg); }
Step 3: Make Some Changes of Your Own and Tweet Like Mad!
Click "Upload" (the right pointing arrow button) and you are done! Viola!
My code is meant to be insanely simple so it is easy to follow. But, add your own sensors, actuators, random furry animals, and make it your own! | http://www.instructables.com/id/Tweet-with-your-Intel-Edison/ | CC-MAIN-2017-22 | en | refinedweb |
Optimizing React Performance with Stateless ComponentsBy Peter Bengtsson
This> } }
Editor’s Note: We’re trying out CodeSandbox for the demos in this article.
Let us know what you think!> } }
If you run this, you’ll notice that our little component gets re-rendered even though nothing has changed! It’s not a big deal right now, but in a real application components tend to grow and grow in complexity and each unnecessary re-render causes the site to be slower.
If you were to debug this app now with
react-addons-perf I’m sure you’d find that time is wasted rendering
Users->User. Oh no! What to do?!
Everything seems to point to the fact that we need to use
shouldComponentUpdate to override how React considers the props to be different when we’re certain they’re not. To add a React life cycle hook, the component needs to go be a class. Sigh. So we go back to the original class-based implementation and add the new lifecycle hook method:
Back to Being a Class Component
import React, { Component } from 'react' class User extends Component { shouldComponentUpdate(nextProps) { // Because we KNOW that only these props would change the output // of this component. return nextProps.name !== this.props.name || nextProps.highlighted !== this.props.highlighted } render() { const { name, highlighted, userSelected } = this.props console.log('Hey User is being rendered for', [name, highlighted]) return <div> <h3 style={{fontStyle: highlighted ? 'italic' : 'normal'}} onClick={event => { userSelected() }} >{name}</h3> </div> } }
Note the new addition of the
shouldComponentUpdate method. This is kinda ugly. Not only can we no longer use a function, we also have to manually list the props that could change. This involves a bold assumption that the
userSelected function prop doesn’t change. It’s unlikely, but something to watch out for.
But do note that this only renders once! Even after the containing
App component re-renders. So, that’s good for performance. But can we do it better?
What About React.PureComponent?
As of React 15.3, there’s a new base class for components. It’s called
PureComponent and it has a built-in
shouldComponentUpdate method that does a “shallow equal” comparison of every prop. Great! If we use this we can throw away our custom
shouldComponentUpdate method which had to list specific props.
import React, { PureComponent } from 'react' class User extends PureComponent { render() { const { name, highlighted, userSelected } = this.props console.log('Hey User is being rendered for', [name, highlighted]) return <div> <h3 style={{fontStyle: highlighted ? 'italic' : 'normal'}} onClick={event => { userSelected() }} >{name}</h3> </div> } }
Try it out and you’ll be disappointed. It re-renders every time. Why?! The answer is because the function
userSelected is recreated every time in
App‘s
render method. That means that when the
PureComponent based component calls its own
shouldComponentUpdate() it returns true because the function is always different since it’s created each time.
Generally the solution to that is to bind the function in the containing component’s constructor. First of all, if we were to do that it means we’d have to type the method name 5 times (whereas before it was 1 times):
this.userSelected = this.userSelected.bind(this)(in the constructor)
userSelected() {(as the method definition itself)
<User userSelected={this.userSelected} ...(when defining where to render the
Usercomponent)
Another problem is that, as you can see, when actually executing that
userSelected method it relies on a closure. In particular that relies on the scoped variable
user from the
this.state.users.map() iterator.
Admittedly, there is a solution to that and that’s to first bind the
userSelected method to
this and then when calling that method (from within the child component) pass the user (or its name) back. Here is one such solution.
recompose to the Rescue!
First, to iterate, what we want:
- Writing functional components feels nicer because they’re functions. That immediately tells the code-reader that it doesn’t hold any state. They’re easy to reason about from a unit testing point of view. And they feel less verbose and purer JavaScript (with JSX of course).
- We’re too lazy to bind all the methods that get passed into child components. Granted, if the methods are complex it might be nice to refactor them out instead of creating them on-the-fly. Creating methods on-the-fly means we can write its code right near where they get used and we don’t have to give them a name and mention them 5 times in 3 different places.
- The child components should never re-render unless the props to them change. It might not matter for tiny snappy ones but for real-world applications when you have lots and lots of these all that excess rendering burns CPU when it can be avoided.
(Actually, what we ideally want is that components are only rendered once. Why can’t React solve this for us? Then there’d be 90% fewer blog posts about “How To Make React Fast”.)
recompose is “a React utility belt for function components and higher-order components. Think of it like lodash for React.” according to the documentation. There’s a lot to explore in this library, but right now we want to render our functional components without them being re-rendered when props don’t change.
Our first attempt at re-writing it back to a functional component but with
recompose.pure looks like this:
import React from 'react' import { pure } from 'recompose' const User = pure(({ name, highlighted, userSelected }) => { console.log('Hey User is being rendered for', [name, highlighted]) return <div> <h3 style={{fontStyle: highlighted ? 'italic' : 'normal'}} onClick={event => { userSelected() }}>{name}</h3> </div> }) export default User
As you might notice, if you run this, the
User component still re-renders even though the props (the
name and
highlighted keys) don’t change.
Let’s take it up one notch. Instead of using
recompose.pure we’ll use
recompose.onlyUpdateForKeys which is a version of
recompose.pure, but you specify the prop keys to focus on explicitly:
import React from 'react' import { onlyUpdateForKeys } from 'recompose' const User = onlyUpdateForKeys(['name', 'highlighted'])(({ name, highlighted, userSelected }) => { console.log('Hey User is being rendered for', [name, highlighted]) return <div> <h3 style={{fontStyle: highlighted ? 'italic' : 'normal'}} onClick={event => { userSelected() }}>{name}</h3> </div> }) export default User
When you run that you’ll notice that it only ever updates if props
name or
highlighted change. If it the parent component re-renders, the
User component doesn’t.
Hurrah! We have found the gold!
Discussion
First of all, ask yourself if it’s worth performance optimizing your components. Perhaps it’s more work than it’s worth. Your components should be light anyway and perhaps you can move any expensive computation out of components and either move them out into memoizable functions outside or perhaps you can reorganize your components so that you don’t waste rendering components when certain data isn’t available anyway. For example, in this case, you might not want to render the
User component until after that
fetch has finished.
It’s not a bad solution to write code the most convenient way for you, then launch your thing and then, from there, iterate to make it more performant. In this case, to make things performant you need to rewrite the functional component definition from:
const MyComp = (arg1, arg2) => { ... }
…to…
const MyComp = pure((arg1, arg2) => { ... })
More from this author
Ideally, instead of showing ways to hack around things, the best solution to all of this, would be a new patch to React that is a vast improvement to
shallowEqual that is able to “automagically” decipher that what’s being passed in and compared is a function and just because it’s not equal doesn’t mean it’s actually different.
Admission! There is a middle-ground alternative to having to mess with binding methods in constructors and the inline functions that are re-created every time. And it’s Public Class Fields. It’s a
stage-2 feature in Babel so it’s very likely your setup supports it. For example, here’s a fork using it which is not only shorter but it now also means we don’t need to manually list all non-function props. This solution has to forgo the closure. Still, though, it’s good to understand and be aware of
recompose.onlyUpdateForKeys when the need calls.
For more on React, check out our course React The ES6 Way.
This article was peer reviewed by Jack Franklin. Thanks to all of SitePoint’s peer reviewers for making SitePoint content the best it can be! | https://www.sitepoint.com/optimizing-react-performance-stateless-components/?utm_source=reactnl&utm_medium=email | CC-MAIN-2017-22 | en | refinedweb |
view raw
I am making a game and I wanted to know how to use a variable from a different file. Ex:
File 1:
var js = "js";
alert(js);
Can a javascript variable be used from a different file?
Yes, it can... As long as it's a global variable.
This is because all of the javascript files are loaded into a shared global namespace.
In your HTML, you will need to include the script that declares this variable first. Otherwise it will complain that the variable is undefined.
script1.js
var globalNumber = 10;
script2.js
alert(globalNumber); // Will alert "10"
index.html
<script src="script1.js"></script> <script src="script2.js"></script> | https://codedump.io/share/1QC3KIxfh916/1/can-a-javascript-variable-be-used-from-a-different-file | CC-MAIN-2017-22 | en | refinedweb |
batou 1.0b18.
1.0b12 (2013-11-04)
- Added branch argument to mercurial.Clone. Setting a branch automatically updates to the branch head on deploy. This is mostly useful for development environments.
- Create the ‘secrets’ directory if it doesn’t exist, yet. Also, disallow editing secret files for non-existing environments.
- Support continuing remote bootstrapping if we failed after creating the initial remote directory but were unable to use Mercurial.
- #12898: build.Configure component was broken when using the default prefix.
1.0b11 (2013-10-17)
#12897: Use non-SSL pypi mirror for downloading virtualenv to fix tests failing randomly on machines that (for some reason) can’t validate PyPI’s certificate.
#12911: Ensure that we can configure file owners when they don’t exist during configure phase yet.
#12912: Fix untested and broken file ownership management.
#12847: Clean up unicode handling for File and Content components and templating.
#12910: Remote deployments failed when using bundles for transfers if no changes needed bundling.
#12766: Allow bootstrapping a batou project in an existing directory to support migration from 0.2.
#12283: Recognize files as ‘is_template’ by default. Auto-detect source files in the definition directory if they have the same basename. This is what you want in 99% of all cases. Explicitly stating either the ‘content’ or ‘source’ parameter disables auto-detection.
Now you can write this:
File(‘foo’)
and have components/x/foo recognized as the source file and handled as a template.
Use ConfigParser instead of configobj which is effectively unmaintained (see) and support lists separated by newlines in addition to commas.
1.0b10 (2013-09-27)
- Package our own virtualenv instead of depending on the system-installed one. This should alleviate troubles due to old virtualenv versions that package distribute, which causes conflicts with recent setuptools versions (#12874).
- Update supervisor version to 3.0.
1.0b9 (2013-08-22)
- Update Package component so it ignores installed packages when installing. This way, we actually install setuptools even when distribute is installed. (Otherwise it’s a no-op since distribute tells pip that setuptools is already satisfied).
- Fix update process: wrong call to old ‘.batou/bin/batou’ failed and early bootstrapping would downgrade temporarily which is confusing and superfluous. Fixes #12739.
1.0b8 (2013-08-17)
- Remove superfluous mkdir call during remote bootstrap.
- Make batou init print that it’s working. Bootstrapping can take a while, so at least signal that something’s going on.
1.0b7 (2013-08-17)
- Depend on Python2.7 to be available on the PATH during early bootstrap. Otherwise our chances to get a 2.7 virtualenv are pretty small, too.
- Improve project template: ignore the work/ directory by default.
1.0b6 (2013-08-17)
- More MANIFEST inclusions: bootstrap-template.
1.0b5 (2013-08-17)
- Improve MANIFEST so we actually package the init template and other generated files, like version.txt and requirements.txt.
1.0b4 (2013-08-17)
- Provide a simple project-creation command, both for pip-installed batou’s as well as spawning new projects from existing ones. Fixes #12730
- Fix #12679: make timeouts configurable.
- Removed re-imports from batou main module to support light-weight self-installation and bootstrapping. I.e. ‘from batou import Component’ no longer works.
- Provide a single main command together with a ‘bootstrap’ wrapper that you can check into your project and that is maintained during updates automatically. It also provides fully automatic bootstrapping, installation, upgrading and other maintenance.
- Fix Python package installation version check.
- Don’t use bin/buildout bootstrap command anymore. PIP installs a sufficient bin/buildout so buildout can do the rest internally.
- Install zc.buildout during bootstrapping phase using PIP to avoid bootstrap.py problems.
- Shorten URLs in the Build component to their basename.
- Add ‘assert_cmd’ API to support simpler assertions for verify when needing to check the result of an external command.
- Switch to asking pip installing eggs instead of flat installations as namespaces seem to collide otherwise.
- Remove non-functional deprecated ‘md5sum’ attribute.
- Components are context managers now. If you provide __enter__ it will be called before verify() and if you provide __exit__ this will be called after update (always - even if update isn’t actually called). This allows you to manage temporary state on the target system more gracefully. See the DMGExtractor for an example.
- Major refactoring of internal data structures to simplify and improve test coverage. Some breakage to be expected:
- Components do not have a-edit’ wrapper script to allow re-encrypting without re-entering the editor.
- Consistently switch to using setuptools.
- Fix #12399: incorrect stat attributes for Owner and Group
- Add exclude parameter to Directory component.
- Add env parameter to Component.cmd() (and corresponding build_environment parameter to the Build component) to allow adding/overriding environment variables.
1.0b3 (2013-07-09)
- Enable logging in the remote core to see what’s going on on the remote side.
- Try to better format exceptions from the remote side.
- Try harder to get virtualenv back into a working state.
- Allow remote deployments from root of repository.
- Make PIP management more robust.
1.0b2 (2013-07-09)
- Add component to manage PIP within a virtual env.
- Add component to manage packages with PIP within a virtual env.
- Restructure buildout component to make it more robust regarding setuptools/distribute preparation. Also remove usage of bootstrap completely as we rely on virtualenv anyway.
1.0b1 (2013-07-09)
- Apply semantic versioning: initial development is over, so this is 1.0 now.
- Major revamp of secrets management:
- switch to GPG (instead of aespipe)
- turn secrets into a core feature, removing the need for a special component
- Add ‘–single’ to suppress parallel bootstrapping.18.xml | https://pypi.python.org/pypi/batou/1.0b18 | CC-MAIN-2017-22 | en | refinedweb |
And now for something completely different ...
The computing language known as Python was named after the famous and hilarious TV comedy series called "Monty Python's Flying Circus", not after the beautiful and charming non-venomous reptile.
The O'Reilly books (Learning Python and Programming Python) are recommended for second-year Python based undergraduate and HND modules. However, in situations where a student considers that the pre-requisite programming knowledge assumed for III and IV numbered taught modules (i.e. having knowledge of and the ability to apply at least one programming language) not to be secured, the following book might be considered by students intending to remedy the deficit:
Title: Learn to Program Using Python Author: Alan Gauld Publisher: Addison-Wesley, 2001, 288 pages.
Python is not yet (January 2002) taught for any first-year programming modules. If and when it is, Alan Gauld's book is likely to be adopted.
Title: Learning Python Authors: Mark Lutz & David Ascher Publisher: O'Reilly Description: A good guide to Python as a programming language, likely to be used by a student who already knows how to program using another language.
Title: Programming Python Author: Mark Lutz Publisher: O'Reilly Description: An extensive guide to creating powerful applications using the Python language, including web CGI applications, other Internet interfaced programs and applications involving GUIs. This book assumes the reader already knows Python basics.
Title: The Python Tutorial Author: Guido van Rossum URL: Description: An on-line tutorial guide to the Python language. Students of Python are likely to consider the time learning this one well spent.
Title: The Python Documentation Index Author: various authors URL: Description: An extensive resource index which accesses the above tutorial, the Python language reference, an extensive Python library reference and other on-line resources. The main documentation referred to in this index can be downloaded as a single bundle. Serious Python programmers are likely to need to make frequent and extensive use of this significant and free resource.
Title: Python Downloads URL: Description: Freely downloadable and easily installable versions of Python for platforms including Linux, Windows and Solaris. Active Python downloads include integrated development environment support. This site also provides access to the more advanced Komodo integrated development environment.
You don't have to edit, save and compile your code every time just to run a simple program containing a few lines of code. You can of course save your code in a file in order to run your Python program (or "script") as often as you want without having to rewrite it. But you don't have to do this just to try out a Python expression directly in the interpreter. As with some other "scripting" languages 10 lines of Python code can do a lot more useful work than would be achieved with 50 source code lines of a compiled systems-programming language such as 'C' or Java. The disadvantage is that the Python code is likely to require more memory and CPU time to operate.
In the following example, the Python interpreter is used interactively in "command-line" mode, and the program output is displayed directly:
>>> import math >>> radius=3 >>> area=math.pi*radius*radius >>> print area 28.2743338823
The advantages of code reuse (meaning you don't have to redo programs that others have already written) and object orientation are built into Python programming from the ground up, rather than tacked on at a later stage as with some other languages (e.g. C, Pascal and Perl). You'll find plenty of explanations of what this all means in books devoted to the subject, but for now I'll just mention that human beings naturally tend to use OO (Object Oriented) concepts such as classifications (classes), e.g shrubs , and things (objects) which we classify in this way (e.g. the red rosebush in my front garden). We also tend to classify classes into hierarchies of such, e.g. grasses, shrubs and trees are kinds of plant, which along with animals are kinds of living organism.
Computer programming languages which allow for classes and instances (or objects for example the general set of dwellings is a "class" and the specific instance of this class, my house) tend to match the way we think about things more closely than languages which restrict our thought in mechanical terms of variables, values and functions. This allows us to create and understand more interesting and complex programs and for programmers to encapsulate logic as classes which are more flexibly reusable by other programmers.
Python has a simple syntax and is easy to learn
A growing number of students are learning Python as a first programming language. The meaning of Python code is relatively intuitive, without needing as many additional comments as other languages would. Python forces programmers to get into the habit of laying out programs correctly, as the control of flow is directly determined by the indentation. You are more likely to understand your own Python programs when you need to maintain them in 6 months' time than your are with programs written using some other languages.
You have access to the source code for Python itself and can change and extend it, and are free to copy, buy and sell it. It runs on most modern operating platforms, such as Linux, Palm, Macintosh, Solaris, different versions of Windows and BeOS. It is easy to create Python programs which run identically on many platforms, including programs with complex GUIs..
The way a program is coded in Python tends to reflect the way we think about solving a problem.The first example asks the user for a temperature and prints: "too hot" if temperature > 80, "too cold" if temperature is < 60 and just right otherwise (if otherwise, 60 <= temperature <= 80) .
temp=input("enter temperature: ") if temp > 80: print "too hot" elif temp < 60: # elif is short for else if print "too cold" else: print "just right"
This Python program file must be indented consistently for the if, elif and else statements to work.
Here is what this Python program does when we run it 3 times with temperatures in the 3 ranges detected:
[rich@copsewood python]$ python temp.py enter temperature: 59 too cold [rich@copsewood python]$ python temp.py enter temperature: 65 just right [rich@copsewood python]$ python temp.py enter temperature: 89 too hot [rich@copsewood python]$
Our second example involves solving a minor problem.
When asked how we would define a prime number, we could say that this is a whole number greater than or equal to 2, which is not exactly divisible by any number except for 1 and itself.
Considering this definition further, to work out whether a number is exactly divisible we could look for any remainder of zero when dividing this number by all other numbers (let's say "values" to avoid confusion) in the range between 2 and the number minus 1.
If we find a remainder of zero then we know the number isn't prime.
If we check the values in the above range without finding a remainder of zero then the number is prime.
A Python program which uses this definition and approach (algorithm) to report whether a particular number input by the user is prime or not looks similar to our definition of a prime number:
# Python prime number testing program number=input("enter a number greater than 2: ") # ask user to enter a number for value in range(2,number-1): # check for factors between 2 and number-1 # check if remainder when dividing number by value is 0 if number % value == 0: # value is a factor of number if this is true print number, "is not prime" exit # quit program now print number, "is prime" # this must be true if we havn't quit the program yet
The lines or parts of lines in the above program after '#' characters are not part of the program, these are comments, or explanations of the code within the program that does the work.
This is what happens when we run the above program and input prime and non prime numbers:
$ python prime.py enter a number >= 2: 97 97 is prime $ python prime.py enter a number: 49 49 is not prime.
Simple programs can sometimes be tested using the python interpreter directly. You typically run the Python interpreter either from a start menu, or by clicking an icon on the desktop or through a file explorer or by running the command line or shell program provided by your operating system (MS-DOS on Windows or Bash on Linux/Unix) and entering the command:
python
You will then see information about the version of Python, when it was built and the operating system for which it was built followed by the Python prompt: >>> e.g:
Python 2.0 (#1, Apr 11 2001, 19:18:08) [GCC 2.96 20000731 (Linux-Mandrake 8.0 2.96-0.48mdk)] on linux-i386 Type "copyright", "credits" or "license" for more information. >>>
In the following example, the Python interpreter is used interactively in "command-line" mode. The program is typed as a number of lines each line directly after the >>> Python prompt. (You don't type the >>> characters, Python does.) You start a new line by pressing the <enter> key and the program output is displayed directly:
>>> import math >>> radius=3 >>> area=math.pi*radius**2 >>> print area 28.2743338823
Using the interactive prompt you could have entered just 'area' to print its value instead of 'print area'.
If your typing or spelling skills are such that you can't get the above example working easily because you keep typing things wrong or it takes you so long to find each letter that the whole excercise takes you more than a few minutes to get right, you could confirm the operation of Python more simply by trying:
>>> print "hello"
instead, in which case the system should respond with:
hello
You might also try saving yourself a little time by using shorter names, e.g: calling radius r , and area a instead:
>>> import math >>> r=3 >>> a=math.pi*r*r >>> print a 28.2743338823
All computer languages have areas of flexibility (e.g. in choice of variable names: a and r demonstrated above so long as you use the same name every time for the same variable) and things you have to get exactly right. For example you have to spell words such as "import" and "print" correctly. In Python as in most other programming languages variables and keywords are case sensitive, i.e. variables called a and A are references to different objects.
(Hint: For how many years into the future are you likely to be using a computer keyboard ? If your answer is more than 1 or 2, then the time you spend learning how to touch type in the next 2 months or so will be repaid very many times over. To do this you can download and install free programs that will teach you this skill if you spend half an hour a day for 5 days a week for the next month or two. Not having to think about what your hands are doing, because they touch type automatically, will also help you to think more about your program code, or the messages or reports you are writing.)
The above approach is all well and good for experiments and tests (of which you will have to carry out very many to become a competent programmer), but you won't want to retype a program with other uses every time you run it, once you have got it to work. For this purpose you will need to save the source code in a file.
Using a text editor (e.g. emacs, vi, notepad, or one provided by the Python integrated development environment or IDE) create a file with the following 4 lines of text:
import math radius=3 area=math.pi*radius*radius print area
and save it as a file called circle.py (If you are using Windows and Notepad you might need to enclose the filename in double quotes "circle.py" to prevent the system incorrectly and unhelpfully naming it circle.py.txt ).
On a system with a shell prompt (MS-DOS, Unix or Linux) you can then run your program using the command:
python circle.py
and should see the displayed output.
If this doesn't work first time try to compare what you actually typed with the program above. The error messages given by the interpreter should indicate approximately where the error occurred, e.g. through a line number. Having repeatedly to edit a file, save it and carry out test runs and observe the code and outputs carefully is a cycle you will have to get used to as a programmer in order to:
Programming is made more difficult on systems with neither command line shells nor a Python IDE (integrated development environment). In this situation you may be able to double click on the icon for your file using a file explorer. This will run your python program if it is saved with an extension .py or (e.g. on Apple Macs) other means of associating the file with the Python interpreter.
However, If you system has Python then you should also be able to use this in Python command line mode. Running the Python interpreter will enable you to run saved Python source files by importing them.E.g. having created and saved the above file circle.py :
>>> import circle 28.2743338823 >>>
Here, the import python command will look for a file with the name you import followed by .py ( or .pyc in some cases) and run it as a program To run the same program again, import doesn't work, you have to use reload instead.
If you are using a Python IDE, having saved your file you can run it by using a "Run" button or menu option within the IDE GUI.
If clicking a file icon runs the program file in a console which vanishes when the program exits before you have a chance to see the output you can make your program pause by forcing it to wait for keyboard input. To do this, edit your program and add the extra line of code at the end, then save and run:
import math radius=3 area=math.pi*radius*radius print area raw_input("press enter")
If you are using command line operation to edit,test and debug your Python programs, and you indicate to the kernel that your program is to be run by the Python interpreter, your program can then be run as if it were any other Unix command.
To do this first find out where the python interpreter is located using the command: which python
The result of this path search should be a pathname typically: /usr/bin/python
If you then put this interpreter pathname after the #! characters on the first line of your program e.g:
#!/usr/bin/python print "hello Python/Unix world\n"
save this as pyunix and use the command: chmod +x pyunix
to make it executable, you can then install the program in a directory on the system (e.g. in /usr/local/bin ) where your path environment variable finds commands and run it using the command: pyunix
Or run it from the current working directory as a command: ./pyunix
instead of having to say: python pyunix
If you use this approach, the first line has no effect if you run your program by other means, as Python just treats it as a descriptive comment. Also, if the kernel knows which interpreter to use from the file contents there is no reason (other than for human directory/folder reading purposes) to indicate the type of the file through a .py filename extension, as you may want to give your custom commands created using python programs shorter names.
Operator Operation + Add - Subtract * Multiply / Divide % Remainder (or modulus) ** Exponentiate (or raise left operand to power specified by right) = Assign
E.G: a=b+c
Takes the result obtained by adding the numbers referred to by b and c and makes a refer to this result.
If you combine operations, exponentiation is first, then multiplication, division and modulus happen before addition and subtraction. E.G: 8+3*2 evaluates to 14, not 22. You can force the order of operations using () brackets, e.g. (8+3)*2 evaluates to 22.
Be aware that the result of 7/2, currently 3, will change for Python versions 3.0 onwards, when 7/2 becomes the same as 7/2.0, which is 3.5 . Python 2.2 introduces the // operator which gives floor division (rounding down) for those who want to write code which depends upon the loss of the fraction when dividing one integer by another.
In general the = assignment causes the result of the expression on the right to be referred to by the the name on the left. There are some additional assignment operators:
+= -= *= /= %=
where in general a op= b is the same as a = a op b , and "op" is one of the operators: +,-,*,/,% e.g. a+=2 is the same as a = a+2
You might read this as "add 2 to the current value of a ".
Strings can be assigned to names and joined together using the + operator. In string context this kind of addition is sometimes called "concatenation". E.G:
>>>>>>> print a+b FredBloggs >>> print a+" "+b Fred Bloggs >>>
Strings can be quoted using "double"quotes or 'single'. This allows you more easily to embed quotes within strings. e.g.
a='Joe said "hello" to mary' b="Harry's game"
If you want to define strings containing newlines and arbitrary white space you can triple quote them e.g:
simple_html=""" <html> <title>trivial html example</title> <body>Hello HTML world!</body> </html>"""
\n and \t can be used to embed newlines and tabs e.g: print "hello world\ngreetings again" prints a newline between world and greetings. The print command gives you a single newline at the end of the output string by default. You can have 2 if you want, e.g. print "hello world\n"
To turn the default trailing newline off you can use a trailing comma after the quoted string. E.G. the program file:
print "no newline", print "before next print"
Outputs: no newline before next print
Python inserts a space in place of commas used to print multiple objects e.g:
>>> a=5 >>> b=3 >>> print a,b 5 3
To print out a literal backslash (\) you will need 2: >>> print "c:\\autoexec.bat" c:\autoexec.bat
Sometimes you will want to print out a number inside a string, or to substitute variable parts of a string
import math >>> print "PI is: %f" % math.pi PI is: 3.141593 >>>
You can change the precision to N decimal places using %.Nf e.g:
>>> print "PI to 3 decimal places is: %.3f" % math.pi PI to 3 decimal places is: 3.142
You can print integers using %d . If you give Python the wrong conversion letter, unlike 'C' it will try to do a sensible conversion before printing e.g:
>>> print "%f" % 123 123.000000 >>> print "%d" % math.pi 3
You can interpolate a string within a string using %s, and interpolate multiple values within a string by specifying these as a "tuple" which in Python is a round bracketed, comma separated list.
>>>>> age=42 >>> height=1.95 >>> print "%s is %d years old and %.2f metres tall" % (name,age,height) Fred is 42 years old and 1.95 metres tall
The values in the tuple are interpolated in the string in left to right order.
To get a literal % within an interpolated string use %% e.g:
>>> print "the current VAT rate of %.1f%% is not applied to books" % 17.5 the current VAT rate of 17.5% is not applied to books
If there is no value to interpolate a single % is printed directly.
>>> print "50%" 50% | http://bcu.copsewood.net/python/notes1.html | CC-MAIN-2017-22 | en | refinedweb |
In this blog (being my first one) I’m going to explain to you a UI5 application that receives and displays data from an IoT device we named Keggy – an Arduino with WiFi shield connected to a kegerator. Keggy also likes to tweet a lot – but more on this later.
Introduction we actually get real time data from the kegerator, we implemented a web app with OpenUI5. We wanted it to be a monitor, optimized for mobile devices, as the web app would be running on tablets. That being said, what would be a better fitting framework than OpenUI5?
Let’s start with the design of our XML-Views for the beginning, but first of all have a view on this wonderful kegerator and the application which can be seen on the devices in the back (just to get a first impression):
Concept
First of all, we want to depict what we are going to build. The Ardunio connected to the kegerator gives us information about:
- How much beer is poured at which point of time?
- When is the door being closed or opened?
- How much beer is left in the barrel?
- What is the temperature of the beer inside at which point of time?
With this data, we started building an UI that refreshes itself every three seconds and gives the user the opportunity to see what’s going on inside the barrel – everything being responsive enough and especially designed to run on a mobile device. So, what we want to do is create an XMLView that shows us information about the beer brand, type of beer, barrel size etc. in a bar at the top of the screen. It shall then provide the IoT kegerators data in different charts on the main area of the screen. The data is refreshed every third second. To save space, we are going to add another bar to the bottom of the screen to show the percentage of the capacity of our barrel and the date and time when the door has been opened/closed recently. We are going to use a lightweight and easy-to-use JS library called “Chart.js” which allows us to generate charts on the screen. Obviously, that saves a lot of time. Now that our goal is set, let’s get started!
XML-Views
We’ll build the UI with the Web-Development-Workbench of SAP HANA, but for a first approach Plunker (an easy-to-use online web development environment,) will serve us great. Its ability to directly show the results of your programming on the right side of the screen (refreshing automatically) is perfect for fast UI development.
For our app we’ll only want one MasterView which displays all of the information. An XMLView will work just fine for that, as it enforces the separation between model, view and controller. In case you didn’t know: An XMLView works descriptively which means that there is no functional logic in it. This helps us implement the MVC concept in a cleaner way. The MasterView is placed on a JSView which only sets up the initial processes and adds the MasterView as its first and only page.
It’ll need a lot of CSS customization afterwards because of several features and containers which cannot be added as simple and flawless as UI5 claims it to be. The following pictures show you the final UI of the app. The first one is displaying the temperature over time, while the second is displaying the beer flow for specific points of time.
Now that the view is defined we can start adding components. The view needs a sap.m.Page to display the content of course. Because we want a lot of information being displayed in the top, the page’s aggregation “customHeader” will provide a customizable sap.m.Bar that we can insert. It needs an ID to edit its design properties afterwards in the CSS file “style.css” that is going to be referenced in the “index.html”. The bar will have a lot of content so it makes sense to distribute the information to the bar’s aggregations contentLeft, contentMiddle and contentRight. On the left side there’s going to be a sap.ui.layout.VerticalLayout (or after the namespace identifier in the view’s properties <l:VerticalLayout>) which contains the barrel-size and unit, the beer type, brewery location and brewery type. The content in the middle will only be a sap.m.Image which is filled with the icon of our beer brand being currently used. Because the text of our picture is quite small, we want to do some CSS customization to give it more space on the screen. This will make our sap.m.Bar bigger, therefore the picture better readable and it will align our objects right. The right side contains a RatingIndicator for the beer type and an indicator for which chart is currently active. All information is retrieved by the “LocalModel.json” which is bound to the view and contains all relevant information displayed on the screen like size, unit, brand, rating and the measures provided by the ODATA service. The model is initially loaded from the controller and allows us to communicate with the view without having to use a lot of variables. It is also bound to the respective UI components. The following picture shows an excerpt:
The main area of the view contains a sap.m.Carousel which includes three pages. Those pages represent the different charts that can be displayed. Because of the size of one chart and the requirement to make it readable for the user on a mobile device you can swipe easily between the three screens. The carousel, unlike tabs e.g., is optimized for mobile devices and adds value to the UX. The following picture shows the way the carousel is described in our XMLView. Pure HTML code is integrated into the screen of the carousel. This allows us to use Chart.js without writing a new component.
The view’s footer is another customized sap.m.Bar which contains information about the door being opened or closed recently (just sap.m.Labels and sap.m.Texts bound to the model) on the left, a sap.m.ProgressIndicator in the middle which shows the current percentage of beer left in the barrel and the current temperature of the beer in the right content.
Controller-Implementation
The MasterView’s controller implements all the logic of receiving, formatting and passing data from the ODATA service to the view.
It initially sets the JSONModel for our view to have an appropriate binding from the beginning on. After the model is defined and we declared some global variables in the onInit() function, the onAfterRendering() function is invoked. This one is going to call an anonymous function after an interval of 3s which permanently loads the most recent temperature, pours and other activities via the following functions (which use Chart.js ):
loadPour():
This function loads the most recent pours with an AJAX call which then will update the complete consumption in the model as well as the beer left in the current barrel. Those changes regard the model only in the first place, but as the model’s data is changed the UI elements will also change after refreshing the model. That leads to the progressIndicator in our footer being updated e.g.
After the updates on the model are done, it is time to change the charts. Those are recreated every time new data is provided and filled with it. It’s a very descriptive procedure which I won’t elaborate right now. Let’s just say that you need to read the documentation of Chart.js carefully and find the properties of Chart.Line and Chart.Bar that are necessary.
loadTemperature():
This function loads the most recent temperature values with an AJAX call, updates the model to make sure the TextFields are labeled correctly and fills an array with temperature values and dates to then display them on another chart which is filled just as mentioned before.
loadActivities():
This function loads the most recent changes of the kegerator’s door whether it’s been opened or closed and applies the changes to the model (which then applies the changes to the UI). If there has been a change in the doors state the controller will fire a tweet via Twilio API
() complaining about the heat or thanking for closing. This works as follows: Twilio API sends an SMS from within JavaScript. This message then gets forwarded to IFTTT () where a tweet is posted every time a message from Twilios number arrives. The principle “if this then that” is applied.
tweetPour():
Another little easter egg in our application would be an API that allows us to send POST requests to Twitter. Twilio combined with IFTTT provide us with such a functionality so that we can easily send an AJAX POST request to Twilio, if the most recent pour date is higher than the last date. Twilio then forwards the message to IFTTT via SMS where a tweet is posted every time a message with specific information from Twilios number arrives.
Issues and Conclusion
Developing OpenUI5 can be fun once you’ve learned the way it’s used. You could even say that it’s fairly simple taking into consideration that JavaScript, XML, HTML and CSS are quite easy-to-learn languages. Nevertheless, in the beginning you have to be aware of struggles, such as using the right syntax in XML, JSON and JS, always keeping in mind the namespaces of your used libraries in views, finding and modifying the desired properties deep in the heart of container-trees and many more. Luckily, there’s a lot of information which can be found on SCN, Stackoverflow or other forums. Google is a pretty helpful companion on your way to being a professional UI5 developer 😉
Another lesson learned is the variety of opportunities to customize OpenUI5. Being able to react to data model changes with Twitter posts (via Twilio and IFTTT) is a great thing. The flawless integration of an external library (Chart.js) was essential in creating dynamic charts with ease. Plunker as a web development environment is able to integrate the UI5 framework. It is a great way of testing your app or little functions as it usually runs a lot faster than the HANA Web-based Development workbench and a lot more stable than the current WebIDE.
For any questions on the blog, please reach out via mail 🙂
Wonderful work done ! a must commerical product! Congratulations.
Beer and development. what a fine combination 🙂 | https://blogs.sap.com/2015/06/13/meet-keggy-the-iot-kegerator-powered-by-openui5-and-sap-hana/ | CC-MAIN-2019-09 | en | refinedweb |