Diffstat (limited to 'TAO/orbsvcs')
-rw-r--r-- | TAO/orbsvcs/examples/FaultTolerance/RolyPoly/README | 48
-rw-r--r-- | TAO/orbsvcs/examples/ImR/Combined_Service/readme | 38
-rw-r--r-- | TAO/orbsvcs/examples/RtEC/Kokyu/README | 26
-rw-r--r-- | TAO/orbsvcs/tests/Notify/Basic/README | 31
-rw-r--r-- | TAO/orbsvcs/tests/Notify/Bug_2415_Regression/README | 11
-rw-r--r-- | TAO/orbsvcs/tests/Notify/Reconnecting/README | 42
-rw-r--r-- | TAO/orbsvcs/tests/Notify/Sequence_Multi_ETCL_Filter/README | 10
-rw-r--r-- | TAO/orbsvcs/tests/Notify/Sequence_Multi_Filter/README | 11
-rw-r--r-- | TAO/orbsvcs/tests/Notify/Structured_Filter/README | 15
-rw-r--r-- | TAO/orbsvcs/tests/Notify/Structured_Multi_Filter/README | 17
-rw-r--r-- | TAO/orbsvcs/tests/Notify/ThreadPool/README | 10
-rw-r--r-- | TAO/orbsvcs/tests/Sched_Conf/README | 24
12 files changed, 148 insertions, 135 deletions
diff --git a/TAO/orbsvcs/examples/FaultTolerance/RolyPoly/README b/TAO/orbsvcs/examples/FaultTolerance/RolyPoly/README
index 786b42a81d9..0a19d5285f3 100644
--- a/TAO/orbsvcs/examples/FaultTolerance/RolyPoly/README
+++ b/TAO/orbsvcs/examples/FaultTolerance/RolyPoly/README
@@ -1,14 +1,14 @@
-
+$Id$

 Overview

-RolyPoly is a simple example that shows how to increase application
-reliability by using replication to tolerate faults. It allows you
+RolyPoly is a simple example that shows how to increase application
+reliability by using replication to tolerate faults. It allows you
 to start two replicas of the same object which are logically seen
 as one object by a client. Furthermore, you can terminate one of
 the replicas without interrupting the service provided by the
 object.

-RolyPoly is using request/reply logging to suppress repeated
+RolyPoly is using request/reply logging to suppress repeated
 requests (thus guaranteeing exactly-once semantic) and state
 synchronization (to ensure all replicas are in a consistent
 state). Since replicas are generally distributed across multiple
@@ -27,7 +27,7 @@ following crash point numbers are defined:
    returning reply to the client.

 Essential difference between crash point 1 and 2 is that in
-the second case there should be reply replay while in the
+the second case there should be reply replay while in the
 first case request is simply re-executed (this can be observed
 in the trace messages of the replicas).

@@ -35,27 +35,27 @@ in the trace messages of the replicas).
 Execution Scenario

 In this example scenario we will start three replicas. For one
-of them (let us call it primary) we will specify a crash point
-other than 0. Then we will start a client to execute requests
-on the resulting object. After a few requests, primary will
-fail and we will be able to observe transparent shifting of
-client to the other replica. Also we will be able to make sure
-that, after this shifting, object is still in expected state
-(i.e. the sequence of returned numbers is not interrupted and
+of them (let us call it primary) we will specify a crash point
+other than 0. Then we will start a client to execute requests
+on the resulting object. After a few requests, primary will
+fail and we will be able to observe transparent shifting of
+client to the other replica. Also we will be able to make sure
+that, after this shifting, object is still in expected state
+(i.e. the sequence of returned numbers is not interrupted and
 that, in case of the crash point 2, request is not re-executed).

 Note, due to the underlying group communication architecture,
 the group with only one member (replica in our case) can only
-exist for a very short period of time. This, in turn, means
-that we need to start first two replicas virtually at the same
-time. This is also a reason why we need three replicas instead
-of two - if one replica is going to fail then the other one
-won't live very long alone. For more information on the reasons
-why it works this way please see documentation for TMCast
+exist for a very short period of time. This, in turn, means
+that we need to start first two replicas virtually at the same
+time. This is also a reason why we need three replicas instead
+of two - if one replica is going to fail then the other one
+won't live very long alone. For more information on the reasons
+why it works this way please see documentation for TMCast
 available at $(ACE_ROOT)/ace/TMCast/README.
-Suppose we have node0, node1 and node2 on which we are going
-to start our replicas (it could be the same node). Then, to
+Suppose we have node0, node1 and node2 on which we are going
+to start our replicas (it could be the same node). Then, to
 start our replicas we can execute the following commands:

 node0$ ./server -o replica-0.ior -c 2
@@ -66,7 +66,7 @@ When all replicas are up we can start the client:

 $ ./client -k file://replica-0.ior -k file://replica-1.ior

-In this scenario, after executing a few requests, replica-0
+In this scenario, after executing a few requests, replica-0
 will fail in crash point 2. After that, replica-1 will continue
 executing client requests. You can see what's going on with
 replicas by looking at various trace messages printed during
@@ -76,7 +76,7 @@ execution.
 Architecture

 The biggest part of the replication logic is carried out by
-the ReplicaController. In particular it performs the
+the ReplicaController. In particular it performs the
 following tasks:

 * management of distributed request/reply log
@@ -97,9 +97,9 @@ ReplicaController:
   implemented by the servant.

 This two model can be used simultaneously. In RolyPoly interface
-implementation you can comment out corresponding piece of code to
+implementation you can comment out corresponding piece of code to
 chose one of the strategies.

---
+--
 Boris Kolpackov <boris@dre.vanderbilt.edu>
diff --git a/TAO/orbsvcs/examples/ImR/Combined_Service/readme b/TAO/orbsvcs/examples/ImR/Combined_Service/readme
index e10a95336ef..6a4a7db34cd 100644
--- a/TAO/orbsvcs/examples/ImR/Combined_Service/readme
+++ b/TAO/orbsvcs/examples/ImR/Combined_Service/readme
@@ -1,28 +1,30 @@
+$Id$
+
 Test Description:

-The test consists of several processes and the usual run_test.pl script.
+The test consists of several processes and the usual run_test.pl script.

-controller.exe -- This is a simple corba wrapper around the ServiceConfigurator
-                  which takes -c <cmd> and -r options to run a command and
-                  reload the conf file respectively.
+controller.exe -- This is a simple corba wrapper around the ServiceConfigurator
+                  which takes -c <cmd> and -r options to run a command and
+                  reload the conf file respectively.

-combined_service.exe -- It combines the tao imr locator, activator, and a dynamic
-                        server in a single process. You can use any service
-                        configurator command line options, and it also writes
-                        out a combined.ior file that can be use d with the controller above.
+combined_service.exe -- It combines the tao imr locator, activator, and a dynamic
+                        server in a single process. You can use any service
+                        configurator command line options, and it also writes
+                        out a combined.ior file that can be use d with the controller above.

-test_server.exe -- This is a simple tao server that exposes two imr-ified objects
-                   called TestObject1 and TestObject2. You must start it with
-                   -orbuseimr 1 as usual.
+test_server.exe -- This is a simple tao server that exposes two imr-ified objects
+                   called TestObject1 and TestObject2. You must start it with
+                   -orbuseimr 1 as usual.

-dynserver.dll -- This is the same server as above, except for use with the ServiceConfigurator.
-                 It exposes DynObject1 and DynObject2. This program is not currently used as
+dynserver.dll -- This is the same server as above, except for use with the ServiceConfigurator.
+                 It exposes DynObject1 and DynObject2. This program is not currently used as
                  part of the run_test.pl

-test.exe -- This is a simple client that invokes the test() operation on the Test object.
-            Start it with -orbinitref Test=... It can be used against any of the
-            four objects above.
+test.exe -- This is a simple client that invokes the test() operation on the Test object.
+            Start it with -orbinitref Test=... It can be used against any of the
+            four objects above.

-There are also comments within the run_test.pl that describe the
-test and expected results at various stages.
+There are also comments within the run_test.pl that describe the
+test and expected results at various stages.
diff --git a/TAO/orbsvcs/examples/RtEC/Kokyu/README b/TAO/orbsvcs/examples/RtEC/Kokyu/README
index f7a98f7acc7..ee61b15e3ef 100644
--- a/TAO/orbsvcs/examples/RtEC/Kokyu/README
+++ b/TAO/orbsvcs/examples/RtEC/Kokyu/README
@@ -1,13 +1,13 @@
-# $Id$
+$Id$

 Shows how to use the scheduling service in conjunction with
 the real-time event channel. The test also uses the Kokyu
-dispatching module within the RTEC, which provides the
-dispatching queues for the isolation of events based on
-their preemption priority generated by the scheduler. The
-test has two consumers and two suppliers. The test also
-demonstrates how to use timers in the EC to trigger timeout
-events for timeout consumers which inturn act as suppliers
+dispatching module within the RTEC, which provides the
+dispatching queues for the isolation of events based on
+their preemption priority generated by the scheduler. The
+test has two consumers and two suppliers. The test also
+demonstrates how to use timers in the EC to trigger timeout
+events for timeout consumers which inturn act as suppliers
 to other consumers. The following shows the test setup.
@@ -21,10 +21,10 @@ HI_CRIT |-----|
 The event-channel cooperates with the scheduling service to compute
 a schedule and assign priorities to each event. The event channel
 will use different queues for those events, each queue
-serviced by threads at different priorities. In the above
+serviced by threads at different priorities. In the above
 test case, there will be two dispatching queues, one for each
 flow. The 1Hz flow will have higher priority than the 1/3Hz flow
-wirh plain RMS scheduling. With MUF scheduling, the HI_CRIT
+wirh plain RMS scheduling. With MUF scheduling, the HI_CRIT
 flow will have higher priority than the LO_CRIT flow.

 The example can be run as follows:
@@ -35,8 +35,8 @@ Please make sure you run the example with root privileges.

 Expected output for RMS
 -----------------------
-You should see the 1Hz events dispatched by a higher priority
-thread than the 1/3Hz events. Sample output is shown below. Here
+You should see the 1Hz events dispatched by a higher priority
+thread than the 1/3Hz events. Sample output is shown below. Here
 2051 is the thread id of the thread dispatching 1/3Hz events and
 1026 is the thread id of the thread dispatching 1Hz events.
 The latter runs at a higher real-time thread priority than the
@@ -50,9 +50,9 @@ Consumer (27703|2051) we received event type 17

 Expected output for MUF
 -----------------------
-You should see the 1/3Hz events dispatched by a higher priority
+You should see the 1/3Hz events dispatched by a higher priority
 thread than the 1Hz events since the former is more critical
-than the latter. Sample output is shown below. Here
+than the latter. Sample output is shown below. Here
 2051 is the thread id of the thread dispatching 1Hz events and
 1026 is the thread id of the thread dispatching 1/3Hz events.
 The latter runs at a higher real-time thread priority than the
diff --git a/TAO/orbsvcs/tests/Notify/Basic/README b/TAO/orbsvcs/tests/Notify/Basic/README
index ed9a0128716..d79ebc2ecfb 100644
--- a/TAO/orbsvcs/tests/Notify/Basic/README
+++ b/TAO/orbsvcs/tests/Notify/Basic/README
@@ -1,6 +1,7 @@
+$Id$

- Basic Tests
- ===========
+Basic Tests
+===========

 Updates:
 -------
@@ -16,7 +17,7 @@ Connects/Disconnects consumers and suppliers in a loop to test connect
 and disconnect to admin objects.

 Command line parameters:
-
+
 "-count <testcount>",
 "-consumers <number_of_consumers>",
 "-suppliers <number_of_suppliers>",
@@ -28,18 +29,18 @@ Creates and destroys EC and Admin objects.

 Command line parameters:
 "-count testcount"
-where <testcount> is how many times we want to create/destroy.
+where <testcount> is how many times we want to create/destroy.


 IdAssignment:
 ------------
 This test exercies Id generation by creating ec and admin objects and
 using the assigned ids to lookup these objects and destroy them.

-Command line parameters:
-"-iter <count>", count is how many times to repeat this test.
-"-ec_count <count>", count is number of ec objects to create
-"-ca_count <count>", count is number of consumer admin (ca) objects to create
-"-sa_count <count>\n", count is number of supplier admin (sa) objects to create
+Command line parameters:
+"-iter <count>", count is how many times to repeat this test.
+"-ec_count <count>", count is number of ec objects to create
+"-ca_count <count>", count is number of consumer admin (ca) objects to create
+"-sa_count <count>\n", count is number of supplier admin (sa) objects to create


 AdminProperties
@@ -55,17 +56,17 @@ command line parameters:
 -consumers [consumers]
 -suppliers [suppliers]
 -event_count [event_count]
--ConsumerDelay [delay in secs]
+-ConsumerDelay [delay in secs]
 // sleep period per push for the consumer created to test MaxQueueLength
 -InitialDelay [delay in secs]

 Events:
 ----------
-This test creates 1 structured supplier and 2 structured consumers.
+This test creates 1 structured supplier and 2 structured consumers.
 Each consumer should receive all the events send by the supplier.
 The uses the default ConsumerAdmin and default Supplier Admin if the
 -use_default_admin option is specified.
-
+
 command line options:
 -use_default_admin
 -events [number of events to send]
@@ -73,9 +74,9 @@ MultiTypes:
 -----------
 Creates a Supplier and Consumer each for the 3 Client types that send
-and receive Any, Structured and Sequence event types.
+and receive Any, Structured and Sequence event types.
 Each type of the supplier then sends an event each to the Notification
-channel. All 3 types of consumers should receive 3 events each.
+channel. All 3 types of consumers should receive 3 events each.

 command line options:
 none.
@@ -86,7 +87,7 @@ Creates 1 Any Supplier and 1 Any Consumer.
 Events received by the supplier must be equal to the count send.

 command line options:
--events [number of events to send]
+-events [number of events to send]

 Filter:
 ------
diff --git a/TAO/orbsvcs/tests/Notify/Bug_2415_Regression/README b/TAO/orbsvcs/tests/Notify/Bug_2415_Regression/README
index 0aac6bd0c6c..5270ee03606 100644
--- a/TAO/orbsvcs/tests/Notify/Bug_2415_Regression/README
+++ b/TAO/orbsvcs/tests/Notify/Bug_2415_Regression/README
@@ -1,12 +1,13 @@
+$Id$
+
 Sequence Event ETCL Filter Test
 ===============================

-
 Description
 -----------
 This test checks push supplier and push consumer ETCL event filter mechanisms.

-The supplier sends a number of events specified by the consumer. The consumer
+The supplier sends a number of events specified by the consumer. The consumer
 can filter or not filter the events and can use multiple consumers. The
 consumer may specify 'and' and/or 'or' relations on the filterable data
 contained within an event.
@@ -18,19 +19,19 @@ Usage
 The test consists of a Supplier and Consumer. The usage for each as is
 follows:

-$ ./Sequence_Supplier
+$ ./Sequence_Supplier
 usage: ./Sequence_Supplier -o <iorfile> -e <# of events>

 $ ./Sequence_Consumer -\?
 usage: ./Sequence_Consumer -k <ior> -l <low expected events> -h <high expected events>

-To run this test, run the run_test.pl perl script.
+To run this test, run the run_test.pl perl script.
 This script is designed to test various aspects of the filtering mechanism.

 Expected Results
 ----------------
-The test script will display an error if for any test that fails.
+The test script will display an error if for any test that fails.
 Otherwise, the test passed.
diff --git a/TAO/orbsvcs/tests/Notify/Reconnecting/README b/TAO/orbsvcs/tests/Notify/Reconnecting/README
index 723c0a4f730..496bea91bd8 100644
--- a/TAO/orbsvcs/tests/Notify/Reconnecting/README
+++ b/TAO/orbsvcs/tests/Notify/Reconnecting/README
@@ -31,10 +31,10 @@ This directory contains:
                     but not event persistence.
   ns_st_both.conf -- configures the Notification Service for single
                     thread operation with support for both topological,
-                    and event persistence.
+                    and event persistence.
   ns_mt_both.conf -- configures the Notification Service for multi-
                     threaded operation with support for both topological,
-                    and event persistence.
+                    and event persistence.
   event.conf      -- configures the Notification Service for event
                     persistence without topology persistence. This is
                     an invalid configuration and should cause the
@@ -67,9 +67,9 @@ the Supplier.
   -nonamesvc            Don't use the Naming Service to find EventChannelFactory
-  -channel filename     Where to store a channel number so the Supplier can
+  -channel filename     Where to store a channel number so the Supplier can
                         find it
-  -any or -str or -seq  What type of event supplier will send (pick one,
+  -any or -str or -seq  What type of event supplier will send (pick one,
                         default: -any)
   -expect n             How many events are expected.
   -fail n               Simulate a recoverable failure every n events.
@@ -77,7 +77,7 @@ the Supplier.
                         used, then serial number checking is disabled.
                         This allows testing the consumer with multiple
                         Suppliers.
-  -disconnect           Disconnect from notfication service cleanly
+  -disconnect           Disconnect from notfication service cleanly
                         (no reconnect will be possible)
   -v                    Verbose output.

@@ -122,7 +122,7 @@ Service). Structured events and Sequence events are events supported only by
 the Notification Service. See the TAO Developer's Guide or the CORBA
 specification for more details.

-Only one of these three options should be specified. If none of these
+Only one of these three options should be specified. If none of these
 is specified, the default is "-any".

 Command line option: -send n
@@ -137,17 +137,17 @@ After it has received that many events, the Consumer will shut down.

 Command line option: -fail n
 ------------------------------
-This Consumer-only option tells the Consumer to throw an exception
+This Consumer-only option tells the Consumer to throw an exception
 (CORBA::UNKNOWN) every n events. This simulates a recoverable error
 in the consumer.

 After throwing the exception, the consumer continues to listen for
 incoming events.
 It expects the event it was processing to be retransmitted.

 Because of the retransmission, the use of the -fail option may be
-counterintuitive. If the consumer options are "-expect 10 -fail 4" then
-it will receive events 0, 1, 2, and fail on event 3. It will then
-receive 3, 4, 5, and fail on event 6. Then it will receive 6, 7, 8,
-and fail on event 9. Finally it will receive the retransmission of event
+counterintuitive. If the consumer options are "-expect 10 -fail 4" then
+it will receive events 0, 1, 2, and fail on event 3. It will then
+receive 3, 4, 5, and fail on event 6. Then it will receive 6, 7, 8,
+and fail on event 9. Finally it will receive the retransmission of event
 9 and exit.

 Command line option: -pause n
@@ -234,7 +234,7 @@ during the test. The default if none of these options is present is "-any".

 run_test.pl: command line option -v
 --------------------------------------------
 This option controls the verbosity of the test script and the Supplier and
-Consumer applications. When it is present, a detailed step-by-step
+Consumer applications. When it is present, a detailed step-by-step
 report is produced by the test.

 run_test.pl: Test #1: Supplier reconnection.
@@ -246,8 +246,8 @@ The Consumer is configured to receive 20 events. The Supplier is
 configured to send ten events. After sending ten events, the Supplier
 exits -- simulating a Supplier failure.

-The test script starts a new copy of the Supplier. The new Supplier is
-configured to send ten events starting with event number 10.
+The test script starts a new copy of the Supplier. The new Supplier is
+configured to send ten events starting with event number 10.
 It uses information saved by the previous supplier to reconnect to the
 same channel, admin, and proxy in the Notification Services. The
 Suppler sends the remaining ten events then exists. The Consumer having
@@ -262,9 +262,9 @@ The Notification Service from the previous test is still running and the
 saved reconnection information for both the Supplier and Consumer is
 still available.

-The test script starts a Consumer configured to receive 20 events and a
+The test script starts a Consumer configured to receive 20 events and a
 Supplier configured to send twenty events. Both clients use the reconnection
-information from the previous test to reconnect to the Notification Service.
+information from the previous test to reconnect to the Notification Service.
 Twenty events are sent successfully, then both clients exit and the test is
 complete.

@@ -292,13 +292,13 @@
 run_test.pl: Test #4: The Reconnection Registry
 -----------------------------------------------
 This test starts with the Notification Service from the previous test.

-The script starts a new Consumer that expects to receive 20 events. The
+The script starts a new Consumer that expects to receive 20 events. The
 Consumer reconnects to the Notification Server.

-The script starts a Supplier. It is configured to send 10 events then
-pause waiting for a Notification Service initiated reconnection before
+The script starts a Supplier. It is configured to send 10 events then
+pause waiting for a Notification Service initiated reconnection before
 sending the remaining 10 events.

-Both clients register with the Reconnection Registry to receive reconnection
+Both clients register with the Reconnection Registry to receive reconnection
 callbacks.

 The test script waits for the Supplier to pause. It then kills the
@@ -335,7 +335,7 @@ communication or Consumer failures.

 Known Problems as of Feb 2004.
 ------------------------------
-Sequence events are not working. It is unclear whether this is a problem in
+Sequence events are not working. It is unclear whether this is a problem in
 the test or in the Notification Service itself.

 Known Problems as of Mar 2004.
diff --git a/TAO/orbsvcs/tests/Notify/Sequence_Multi_ETCL_Filter/README b/TAO/orbsvcs/tests/Notify/Sequence_Multi_ETCL_Filter/README
index 0aac6bd0c6c..d8f94c0e9df 100644
--- a/TAO/orbsvcs/tests/Notify/Sequence_Multi_ETCL_Filter/README
+++ b/TAO/orbsvcs/tests/Notify/Sequence_Multi_ETCL_Filter/README
@@ -1,3 +1,5 @@
+$Id$
+
 Sequence Event ETCL Filter Test
 ===============================

@@ -6,7 +8,7 @@ Description
 -----------
 This test checks push supplier and push consumer ETCL event filter mechanisms.

-The supplier sends a number of events specified by the consumer. The consumer
+The supplier sends a number of events specified by the consumer. The consumer
 can filter or not filter the events and can use multiple consumers. The
 consumer may specify 'and' and/or 'or' relations on the filterable data
 contained within an event.
@@ -18,19 +20,19 @@ Usage
 The test consists of a Supplier and Consumer. The usage for each as is
 follows:

-$ ./Sequence_Supplier
+$ ./Sequence_Supplier
 usage: ./Sequence_Supplier -o <iorfile> -e <# of events>

 $ ./Sequence_Consumer -\?
 usage: ./Sequence_Consumer -k <ior> -l <low expected events> -h <high expected events>

-To run this test, run the run_test.pl perl script.
+To run this test, run the run_test.pl perl script.
 This script is designed to test various aspects of the filtering mechanism.

 Expected Results
 ----------------
-The test script will display an error if for any test that fails.
+The test script will display an error if for any test that fails.
 Otherwise, the test passed.
diff --git a/TAO/orbsvcs/tests/Notify/Sequence_Multi_Filter/README b/TAO/orbsvcs/tests/Notify/Sequence_Multi_Filter/README
index 13486ab0797..33fe7f71367 100644
--- a/TAO/orbsvcs/tests/Notify/Sequence_Multi_Filter/README
+++ b/TAO/orbsvcs/tests/Notify/Sequence_Multi_Filter/README
@@ -1,12 +1,13 @@
+$Id$
+
 Sequence Event Filter Test
 ============================

-
 Description
 -----------
 This test checks push supplier and push consumer event filter mechanisms.

-The supplier sends a number of events specified by the consumer. The consumer
+The supplier sends a number of events specified by the consumer. The consumer
 can filter or not filter the events and can use multiple consumers. The
 consumer may specify 'and' and/or 'or' relations on the filterable data
 contained within an event.
@@ -19,7 +20,7 @@ The test consists of a Supplier and Consumer. The usage for each as is
 follows:

 $ ./Sequence_Supplier -\?
-usage: ./Sequence_Supplier -ORBInitRef <Naming Service Location>
+usage: ./Sequence_Supplier -ORBInitRef <Naming Service Location>

 $ ./Sequence_Consumer -\?
 usage: ./Sequence_Consumer -l <low> -h high -d <discard policy> -c <constraint string>
@@ -28,12 +29,12 @@ The "low" value specified the number of whole batches of events expected. The "
 The "constraint string" allows the TCL filter string to be specified on the
 command line.

-To run this test, run the run_test.pl perl script.
+To run this test, run the run_test.pl perl script.
 This script is designed to test various aspects of the filtering mechanism.

 Expected Results
 ----------------
-The test script will display an error if for any test that fails.
+The test script will display an error if for any test that fails.
 Otherwise, the test passed.
diff --git a/TAO/orbsvcs/tests/Notify/Structured_Filter/README b/TAO/orbsvcs/tests/Notify/Structured_Filter/README
index 879ed382a1c..b25462c995d 100644
--- a/TAO/orbsvcs/tests/Notify/Structured_Filter/README
+++ b/TAO/orbsvcs/tests/Notify/Structured_Filter/README
@@ -1,12 +1,13 @@
+$Id$
+
 Structured Event Filter Test
 ============================

-
 Description
 -----------
 This test checks push supplier and push consumer event filter mechanisms.

-The supplier sends a number of events specified by the consumer. The consumer
+The supplier sends a number of events specified by the consumer. The consumer
 can filter or not filter the events and can use multiple consumers. If
 filtered, the consumer will receive 1/3 of the total events.

@@ -17,20 +18,20 @@ Usage
 The test consists of a Supplier and Consumer. The usage for each as is
 follows:

-$ Structured_Supplier
-usage: ./Structured_Supplier -ORBInitRef <Naming Service Location>
+$ Structured_Supplier
+usage: ./Structured_Supplier -ORBInitRef <Naming Service Location>

 $ Structured_Consumer -\?
 usage: ./Structured_Consumer [-f] -n <num events> -c <num consumers> \
        -ORBInitRef <Naming Service Location>

-The -f option applies an event filter to the consumer. The -c option lets
+The -f option applies an event filter to the consumer. The -c option lets
 the user specify the number of consumers to use.

-To run this test, run the run_test.pl perl script.
+To run this test, run the run_test.pl perl script.

 Expected Results
 ----------------
-The test script will display an error if for any test that fails.
+The test script will display an error if for any test that fails.
 Otherwise, the test passed.
\ No newline at end of file
diff --git a/TAO/orbsvcs/tests/Notify/Structured_Multi_Filter/README b/TAO/orbsvcs/tests/Notify/Structured_Multi_Filter/README
index 30fd2d5d9f7..4af5c861fb7 100644
--- a/TAO/orbsvcs/tests/Notify/Structured_Multi_Filter/README
+++ b/TAO/orbsvcs/tests/Notify/Structured_Multi_Filter/README
@@ -1,15 +1,16 @@
+$Id$
+
 Structured Event InterFilterGroupOperator Test
 ==============================================

-
 Description
 -----------
-This test checks push supplier and push consumer event logical operators
-between the Supplier/Consumer admins and their proxies. The supplier sends
+This test checks push supplier and push consumer event logical operators
+between the Supplier/Consumer admins and their proxies. The supplier sends
 a number of events specified by the consumer. The supplier and consumer
 can filter or not filter the events, and can AND and OR the proxy and admin
-filters.
+filters.

 Usage
@@ -18,22 +19,22 @@ Usage
 The test consists of a Supplier and Consumer. The usage for each as is
 follows:

-$ Structured_Supplier
+$ Structured_Supplier
 usage: ./Structured_Supplier [-f] -o <AND_OP | OR_OP> \
        -ORBInitRef <Naming Service Location>

 $ Structured_Consumer -\?
-usage: ./Structured_Consumer [-f] [-s] -n <num events> -c <num consumers> \
+usage: ./Structured_Consumer [-f] [-s] -n <num events> -c <num consumers> \
        -o <AND_OP | OR_OP> -ORBInitRef <Naming Service Location>

 The -f option applies an the event filter to the supplier or consumer. The
 -s option alerts the consumer to supplier filtering. The -c option lets
 the user specify the number of consumers to use.

-To run this test, run the run_test.pl perl script.
+To run this test, run the run_test.pl perl script.

 Expected Results
 ----------------
-The test script will display an error if for any test that fails.
+The test script will display an error if for any test that fails.
 Otherwise, the test passed.
diff --git a/TAO/orbsvcs/tests/Notify/ThreadPool/README b/TAO/orbsvcs/tests/Notify/ThreadPool/README
index d690937f041..0575459a378 100644
--- a/TAO/orbsvcs/tests/Notify/ThreadPool/README
+++ b/TAO/orbsvcs/tests/Notify/ThreadPool/README
@@ -1,3 +1,5 @@
+$Id$
+
 ThreadPool test
 ===============

@@ -19,7 +21,7 @@ supplier.conf:
 This creates the following -

 - An EventChannel with a threadpool.
-- A SupplierAdmin (SA1)with a threadpool.
+- A SupplierAdmin (SA1)with a threadpool.
 - Another SupplierAdmin (SA2) with no threadpool.

 - A ProxyConsumer(1) is connected to SA1 with a threadpool.
@@ -33,11 +35,11 @@ consumer.conf:
 -------------
 This creates:

-- A ConsumerAdmin (CA1)with a threadpool.
+- A ConsumerAdmin (CA1)with a threadpool.
 - Another ConsumerAdmin (CA2) with no threadpool.

 An RT POA is created in which the ProxySuppliers are activated.
-
+
 - A ProxySupplier(1) is connected to CA1 with a threadpool.
 - A ProxySupplier(2) is connected to CA1 with no threadpool.
 - A ProxySupplier(3) is connected to CA2 with no threadpool.
@@ -49,4 +51,4 @@ Expected Result:
 ==============
 if a request reaches a threadpool that it was not supposed to, an
 error message is printed. otherwise some housekeeping messages are
-generated when the test runs.
\ No newline at end of file
+generated when the test runs.
\ No newline at end of file
diff --git a/TAO/orbsvcs/tests/Sched_Conf/README b/TAO/orbsvcs/tests/Sched_Conf/README
index e787e7d690a..bb932c88e79 100644
--- a/TAO/orbsvcs/tests/Sched_Conf/README
+++ b/TAO/orbsvcs/tests/Sched_Conf/README
@@ -1,34 +1,36 @@
+$Id$
+
 Overview:

   The scheduling service can run in one of two different modes of
   operation, an off-line configuration mode, and a run-time execution mode.
-  The application uses the Scheduler_Factory to specify in which mode
+  The application uses the Scheduler_Factory to specify in which mode
   it would like to use the scheduling service.
-
+
   In the configuration mode, the application registers RT_Infos containing
   operation characteristics with the off-line scheduler, and also specifies
   operation dependencies. The Event Channel also registers RT_Infos for its
   own operations, and specifies any additional dependencies introduced by
-  subscription or event correllation.
-
+  subscription or event correllation.
+
   Once all operations are registered, the application invokes the
   scheduler's compute_scheduling method. The scheduler generates a
   "schedule" consisting of operation priorities and sub-priorities, and
-  determines whether or not the schedule is feasible. The scheduler also
+  determines whether or not the schedule is feasible. The scheduler also
   produces queue specification information that can be used to configure
   the dispatching module's number and kinds of queues (this automatic
   dispatching module configuration will appear in a TAO release *very*
   soon). The application then may ask the config scheduler to dump it's
   schedule to a header file.
-
+
   The Sched_Conf.cpp file in this directory is an example of how this is
   done. Building and running the Sched_Conf executable will produce a
   header file called Sched_Conf_Runtime.h, which is included by
   Sched_Conf_Runtime.cpp.
-
-  The dumped header file contains tables with the static scheduling and
-  configuration information. The Sched_Conf_Runtime application passes this
-  information to the run-time scheduler at start-up. The application may also
-  re-register its operations to verify the correct operations were loaded. The
+
+  The dumped header file contains tables with the static scheduling and
+  configuration information. The Sched_Conf_Runtime application passes this
+  information to the run-time scheduler at start-up. The application may also
+  re-register its operations to verify the correct operations were loaded. The
   Sched_Conf_Runtime does this, and in fact exercises a number of methods of
   the run-time scheduler to ensure it gives correct responses for the table
   of operations with which it was instantiated.