
Forum Post: 3rd Party OPC Server, connecting from Application Station via OPC Mirror


I have created all the relevant user accounts (CPX_OPC, which is used to run OPC Mirror on DV, and CIMUSER, the 3rd party OPC server's run-as account) on both the remote OPC server (Cimplicity, in a workgroup) and the DV app station (in a domain), but I can't get the unsolicited callbacks to work. I get an advise fail, and the event logs at the DV application station list a failed logon (cimuser).

UNC works both ways, and I can read/browse OPC points from the Cimplicity OPC server using OPCWatchit, but I still receive an advise fail error. OPC Mirror reports the pipe as active, but monitoring the pipe items gives stale data.

Obviously this is related to the failed logon from the remote OPC server, which prevents the OPC group advise from working.

SID and anonymous account translation is enabled in group policy, but I still get failed logons.

What am I missing?


Forum Post: Conflict Resolution


My neighbor and good friend Hayden Hayden has just written a book entitled "Conscious Choosing for Flow" and I would like to recommend it to the community. It isn't a technical book, but rather a book on ways to deal with conflict. Hayden is a successful entrepreneur and currently a coach for executives. His thesis, and the subtitle of the book, is "Transforming Conflict into Creativity." He says his book is targeted at business managers and HR professionals, but I think his message can be used more broadly than that. Whenever people interact, there is a good chance there will be some conflict somewhere along the way. Rather than looking to conflict management or negotiation, he offers a third way that he describes as conflict transformation. He believes, and describes how, you can consciously choose to turn any conflict into something positive, dynamic, and creative. His approach is built around STAR... Stop, Think, Act, Review, which is not too far from the DMAIC approach that many engineers know and use. And it's not just dry reading; it includes exercises you can use to explore or validate the concepts as they are presented. It can be as useful to your personal life as it is to your professional life. His book is available on Amazon in paperback and Kindle versions.

Forum Post: How good is your level control?


Is it good enough?  Is it too good?  Do you even know?  Should you care?

Well yes, you probably should care. Most level processes are non-self-regulating, or integrating, processes. Everything you probably learned about tuning self-regulating PID loops like flow, pressure, and temperature does not work quite the same on integrating processes. So it is quite common for level loops to be tuned "by the seat of the pants" or by trial and error. Furthermore, most level loops are tuned to achieve good setpoint response, and yet most level loops have one setpoint (typically 50% of the tank height) and the setpoint is rarely changed. It is usually more important to consider the response to load disturbances. Even if, and sometimes especially when, the level is tightly controlled, regardless of how it was tuned, it is likely that the underlying disturbance and the resulting variability are amplified rather than attenuated. That is never a good thing.

Control loops are intended to control processes with more gradual (low frequency) disturbances. They are not the right tool for attenuating high frequency variability. That is one reason we have surge tanks, which can attenuate high frequency variability in inflow or outflow. Yet level controls on many surge tanks are tuned to prevent almost any deviation in level. If the inflow is varying, then the outflow will vary just as much. This is essentially in conflict with the intended purpose of the surge tank. And the truth is that all tanks are surge tanks. Some may be undersized and others may be oversized, but they are all essentially "wide spots" in the pipe. To take advantage of the surge capacity, it is necessary to know the potential variability or worst case disturbance of the wild flow and the allowable limits on the level. Then we can tune the level control to be able to respond to the worst case while keeping the level in bounds.
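As a rough back-of-the-envelope sketch (illustrative numbers only, not from any Emerson tuning guide), the time a surge tank buys you follows from a simple mass balance: the level ramps at the flow imbalance divided by the cross-sectional area, so the available arrest time is the area times the allowable level band divided by the worst-case imbalance.

# Illustrative Python sketch; all values are assumptions, not plant data.
def available_arrest_time(area_m2, level_band_m, worst_imbalance_m3_per_min):
    # Minutes before the level drifts across the allowable band if the
    # controller did nothing at all: t = A * dL / dF
    return area_m2 * level_band_m / worst_imbalance_m3_per_min

# Example: 10 m2 tank, 2 m of allowable level band, 0.5 m3/min worst-case mismatch
print(available_arrest_time(10.0, 2.0, 0.5))   # -> 40.0 minutes to work with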

You need a tuning methodology.  The one Emerson's control performance consultants use is lambda tuning.  The premise is that for any linear process with a feedback control loop, the control loop can be tuned to provide a first order closed loop response using the right PID tuning constants.  The information required to tune the process is the process gain, the process dead-time, and the process time constants.  Lambda is the closed loop time constant and defines the speed of response of the loop under control.  Interacting self-regulating loops can be dynamically decoupled by making the lambda of one loop sufficiently larger than the other.  There is a minimum lambda that can be defined to avoid unstable or oscillatory response under closed loop control.  But in the context of level control, the selection of lambda defines the speed of response which is related to the arrest time and deviation for a disturbance.  Lambda tuning of integrating processes reduces the variability of the manipulated flow and takes maximum advantage of the surge capacity in the vessel without risking loss of containment.
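For integrating processes like level, the commonly published form of the lambda (IMC-based) PI rule uses the integrating process gain and dead time: the reset time is twice lambda plus the dead time, and the controller gain is the reset time divided by the process gain times (lambda plus dead time) squared. The sketch below is a summary of that published rule, not an excerpt from Emerson's course material, so verify the form against your own tuning reference before using it.

# Hedged Python sketch of the widely published lambda PI rule for an integrating process.
def lambda_pi_integrating(Ki, Td, lam):
    # Ki  : integrating process gain, (% of PV per second) per % of controller output
    # Td  : process dead time, seconds
    # lam : lambda, the desired closed-loop (arrest) time, seconds
    Ti = 2.0 * lam + Td                  # reset time, seconds
    Kc = Ti / (Ki * (lam + Td) ** 2)     # controller gain
    return Kc, Ti

# Example (illustrative values): Ki = 0.001 %/s per %, Td = 5 s, lambda = 600 s
Kc, Ti = lambda_pi_integrating(0.001, 5.0, 600.0)
print(Kc, Ti)   # roughly Kc = 3.3 and Ti = 1205 s, a slow, surge-absorbing tuning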

I would like to offer some examples I have seen often enough to mention. One is the base or bottom level control in a distillation process. It is quite common to see the base level controller tuned for very tight, aggressive control. The result is that the bottom flow can be quite variable, and in the extreme you can see the bottom flow oscillating between high flow and no flow as fast as the control valve can move. This is obviously not good for the control valve, but it can be detrimental to the process as well. The bottom of the column is at a high temperature, and often it is beneficial to recover some of the heat before that stream is sent to the next step in the process. If the heat recovery is used to preheat the column feed, for example, you can see how that will introduce variability into the feed of the column and be disruptive. I have found this to be quite common on fractionator columns in refinery crude units. It would be much better to reduce and minimize the variability in bottom product flow, even if the level varies a bit in the base of the column.

Another distillation example is seen at the top of the column. It isn't too common to control the level in the reflux accumulator by manipulating the reflux flow, but it is sometimes necessary to use that configuration. Sometimes the distillate product flow, which is being used for composition control, will be fed forward into the reflux flow loop to improve level control, in a way analogous to 3-element steam drum level control. Regardless, a poorly tuned level control will create variability in the reflux flow, which obviously has an immediate effect on composition and temperatures at the top of the column. Lambda tuning, with as large a lambda value as can be tolerated, will minimize the variability created by variable reflux flow. As long as the reflux accumulator level stays within limits, the rest of the control loops can be successful.

Another process which is usually characterized as integrating is pressure control of a gas where there is no phase change. Just as liquid volume is the integral of liquid flow, pressure is the integral of gas flow. Sometimes the disturbances are greater and/or the vessel pressure limits are tighter, equivalent to an undersized surge tank, but the process is inherently integrating and analogous to liquid level control. The same techniques and formulas apply. In one example I saw a few years ago, a distillation tower was being fed directly from a reactor effluent. The feed flow was cascaded to the reactor pressure control. The pressure controller was tuned too aggressively, and that resulted in a variable feed flow to the column. This limited the ability of the column controls to achieve good composition control, as the product quality variable oscillated at exactly the same frequency as the feed flow. We could dampen the amplitude of the product quality variation, but could not eliminate the variability until we re-tuned the pressure controls using lambda tuning.
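To make the analogy concrete, here is a minimal sketch, assuming an ideal gas at roughly constant temperature and volume, of why vessel pressure integrates the flow imbalance just as level does.

# From P*V = n*R*T with T and V held constant:
#   dP/dt = (R*T/V) * (molar flow in - molar flow out)
# which is the gas-phase counterpart of dL/dt = (flow in - flow out) / A.
R = 8.314      # J/(mol*K)
T = 350.0      # K, illustrative
V = 10.0       # m3, illustrative vessel volume

def pressure_rate_pa_per_s(n_in_mol_per_s, n_out_mol_per_s):
    return (R * T / V) * (n_in_mol_per_s - n_out_mol_per_s)

print(pressure_rate_pa_per_s(2.0, 1.5))   # about 145 Pa/s of drift for a 0.5 mol/s imbalance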

Sometimes even lambda tuning alone is not sufficient to achieve satisfactory control. In this case, feed forward makes a lot of sense. Steam drum level control is often implemented with 3-element control, in which steam flow is essentially a feed forward signal to the boiler feed water flow and the level control trims the feed forward controls. It is never a good thing to boil a steam drum dry or to get water into the steam header, and that is why steam drum controls are often designed with 3-element drum level control. The alternative would be to have a larger steam drum, but as a pressure vessel, the cost of increasing the size of the steam drum is much higher than implementing a straightforward control strategy. In another example where level was difficult to control, the problem was dead time. Dead time in any loop is the hardest dynamic element to overcome. In this case, a hopper was being fed with granular solids, and there was a rotary drum used to provide mixing. No matter how fast the different feeds were changed, they had to move through the rotary drum, which was constant speed. This introduced a significant amount of dead time in the loop. The contents of the hopper were fed at a controlled rate to the next process. The level in the hopper had an important effect on the density and packing of the material on the hopper bottom conveyor, which affected the downstream process. And even worse, if the level in the hopper is too high, material in the hopper can bridge and there will suddenly be no feed on the hopper bottom conveyor. We could have resolved this with feed forward control using PID to trim the level. But in this case, we used Predict MPC control and configured the feed flow as a disturbance (feed forward) variable, which worked quite well.
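Going back to the steam drum, here is a generic illustration of the 3-element idea (a hedged sketch, not DeltaV configuration): the measured steam flow sets most of the feedwater demand, and the drum level controller only supplies a small trim.

# Illustrative Python sketch; names, units, and numbers are assumptions.
def bfw_flow_setpoint(steam_flow, level_pid_trim, ff_gain=1.0):
    # Feedwater setpoint = steam flow scaled by a feedforward gain,
    # plus the (small) output of the drum level controller as a trim.
    return ff_gain * steam_flow + level_pid_trim

# Example: 100 t/h of steam demand, level controller asking for +2 t/h of trim
print(bfw_flow_setpoint(100.0, 2.0))   # -> 102.0 t/h feedwater flow setpoint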

So to evaluate your level control, you should look at the level behavior, but you must also look at the behavior of the manipulated flow.  If you want to learn more about Lambda tuning and integrating processes, there is usually a workshop and/or short course discussing it at Emerson Exchange.  If you want help, please ask for the assistance of one of Emerson's Control Performance Consultants.  And Emerson's Education Center offers courses in Modern Loop Tuning and Control Engineering that provide all the information you will need to tune up your level controllers to achieve "best" control performance.  And as always, your comments and feedback are appreciated.  I like to learn new things, too.

Blog Post: Emerson Ultrasonic Leak Detector Receives DNV Approval for Marine Applications

Emerson Ultrasonic Leak Detector Receives DNV Approval for Marine Applications. The Rosemount Analytical GDU-Incus ultrasonic gas leak detector meets highly demanding gas detection requirements, helping to protect vessels, products, and personnel. Emerson Process Management's Rosemount Analytical GDU-Incus ultrasonic leak detector has received approval from DNV (Det Norske Veritas), a widely recognized international consulting and certification body. This DNV certification further confirms the detector's suitability for use on marine vessels, including LNG and LPG carriers and crude oil tankers, where a gas leak that is not detected quickly can mean far more than lost production, .......

Forum Post: Multi-point temperature applications


Many plants have process units where multipoint temperature sensor arrays are used to capture temperature profiles to detect hot-spots, or where multiple single temperature points are within close proximity. Multi-input temperature transmitters are ideal for applications where there are many temperature measurements clustered together. Applications include:

  • High resolution temperature profiles of tanks using multipoint temperature sensor arrays for computation of density to calculate volume and mass of the product.
  • High resolution reactor temperature profiles using multipoint temperature sensor arrays to identify hot-spots and channeling to prevent product or catalyst damage, and control reaction efficiency.
  • Column temperature profile with sensors at every tray to optimize separation and product quality.
  • Multiple points throughout a furnace to determine how efficiently the furnace uses energy, in order to improve energy usage and reduce operating costs.
  • Motor winding temperatures to ensure they are operating within specifications, thus extending service life and preventing unnecessary downtime.
  • Bearing temperature on critical compressors, pumps, fans, agitators, conveyor belts, etc. to alert when they exceed suggested operating temperatures, preventing potential damage from cascading into shutdowns of larger processing equipment.
  • Heat exchanger efficiency by measuring inlet and outlet temperatures for steam and product to detect degradation due to fouling and determine if cleaning is needed.
  • Boiler tube surface temperature to detect slagging or soot deposits hampering heat transfer and to predict fatigue, preventing boiler shutdowns due to tube ruptures and improving efficiency and plant availability.

To condition these sensor signals, in the past you had to choose between accuracy, using many single-point transmitters, and low cost, using control system temperature input cards or temperature multiplexers. However, multi-input temperature transmitters provide both the precision of field-mounted transmitters and the economy of wireless, or of a single pair of wires from the multi-input temperature transmitter to the junction box. The transmitter is two-wire loop powered, so no separate electrical power is required. The solution can be intrinsically safe, non-incendive, and flameproof/explosion-proof, making it suitable for all hazardous areas. All sensor signals are carried on the same two wires or over the air.

 Some reactors and heat exchangers around plants may not be continuously monitored, relying on manual data collection because they were never instrumented due to the high cost of temperature input cards and compensation wires, or single point transmitters, wiring, and analog input cards. Modern plants are now built with multi-input temperature transmitters at lower cost, and existing plants can be modernized with multi-input temperature transmitters where measurements are missing.

 Around-the-clock automatic device diagnostics monitoring alerts personnel to problems like sensor failures.

 The right temperature is important for the operation of many processes. The wrong temperature will impact plant throughput, quality, and yield. Temperature is also important for maintenance, as high temperature is a leading indicator of problems in motors and machinery. If left unattended, improper temperatures can result in plant downtime and maintenance costs. Deploying transmitters to cover these missing measurements therefore makes sense.

 A single gateway can be used to integrate hundreds of temperature points into an existing control system.

 Read about one such modernization case here:

http://www2.emersonprocess.com/siteadmincenter/PM%20Central%20Web%20Documents/QBRExxonMobil3feb.pdf

What other applications are there with multiple temperature points in close proximity to each other, where it would make sense to use temperature transmitters with 4 or 8 inputs?

Forum Post: How to check PM lists in ACS6048


Hi Support,

We are using the ACS6048 and PM3000 in our firm, and we have encountered a problem checking the PM configuration in the ACS6048.

Could you tell us how to show all serial ports, with their PDU IDs and outlets, on one page?

Then we can find cases where the same PDU ID and outlets have been added to different serial ports.

Or, how can we limit each PDU ID and its outlets to being added to only one serial port?

Thanks,

-Xin

Forum Post: RE: OPC license release


Hi,

Is the tag count increasing, or the number of connections?

Forum Post: RE: Configure Process History View


Where is your event chronicle located? Based on your description it sounds like the Pro+, because PHV can enumerate all known event chronicles. If you look in DeltaV Explorer under the Alarms and Events item of each workstation or application station node, you can see which are enabled as event chronicles. If more are enabled than you can see in the PHV list, then those chronicles may not have been downloaded and are not working.

Obviously, localhost will only work if the workstation where you launch PHV is also a chronicle.

Defaulthost is tricky. It represents what you, as a user, have selected on the client workstation as your default chronicle. The default chronicle is used for event charts by default. It is user-profile based, and could potentially vary for each user on each workstation. The "default" default chronicle is localhost, I believe.

Now for testing, let's do the following:

1. **Non-Admin** user should log in to Windows and DeltaV on a workstation.

2. Open PHV and select File-->Set Default Data Servers

3. Under Event Chronicle, Select Pro+ for the Events Data Server.  Ensure 'save as my Application Startup Data Server' is checked.

4. Open a new Event file, File-->New-->Event.

5. Select Events-->Configure Events.  

6. In the Configure Event Chronicle dialog, see if the Events drop down contains any current datasets.

If you don't see current data sets, repeat this process for the admin user. If he DOES see extra chronicles, then, as I said, it would seem to be a permissions issue on the current data sets in SQL.


Forum Post: RE: electromagnetic flow transmitter error


Hi Mohsen,

That is a normal response from the transmitter if the sensor/pipe is empty. The empty pipe detection function is supposed to detect an empty pipe (which would also be an electrode circuit open) condition, report that status message, and force the flow reading to zero.

The conductive process fluid completes the electrode circuit (the part of a magmeter that picks up the induced voltage). Without a sensor/pipe full of a conductive process fluid, the circuit is open and deemed empty if the empty pipe detection is active.

The status message should clear once the sensor/pipe is full of a conductive process fluid.
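To illustrate the behavior described above (a rough sketch of the logic, not the transmitter's firmware):

# With empty pipe detection active, an open electrode circuit forces the
# reported flow to zero and raises the empty-pipe status message.
def reported_flow(measured_flow, electrode_circuit_open, empty_pipe_detection_on=True):
    if empty_pipe_detection_on and electrode_circuit_open:
        return 0.0, "EMPTY PIPE"   # status should clear once the pipe refills
    return measured_flow, "OK"

print(reported_flow(12.3, electrode_circuit_open=True))    # -> (0.0, 'EMPTY PIPE')
print(reported_flow(12.3, electrode_circuit_open=False))   # -> (12.3, 'OK')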

Additionally, the 8732C is an obsolete transmitter. It has been replaced by the 8732E. The C level revision has been obsolete for over 5 years.

Best Regards,

Mike

Forum Post: RE: new MPUIQ modules - compatibility with MPU and DSR units

Forum Post: RE: versions of RSA can supported by DSView4

File: AMS and Moxa.pdf - 8/27/2014 7:24 AM

This attachment "AMS and Moxa.pdf" was uploaded to this Group's Media Gallery on 8/27/2014 at 7:24 AM via email.

File: nport_ia-51505250.pdf - 8/27/2014 7:24 AM

This attachment "nport_ia-51505250.pdf" was uploaded to this Group's Media Gallery on 8/27/2014 at 7:24 AM via email.

Forum Post: RE: AMS 12.0 with MUX MTL 4841 and NPort 6450 from MOXA

See attached…
From: frmezano [mailto:bounce-frmezano@community.emerson.com]
Sent: Tuesday, August 26, 2014 10:32 AM
To: AO@community.emerson.com
Subject: [EE365 Asset Optimization Track] AMS 12.0 with MUX MTL 4841 and NPort 6450 from MOXA
 

Hi all,

When I was a Plantweb engineer I installed a couple of AMS DM systems with Comtrol. Now I have received a call from another FSO about an architecture they are trying to install with AMS v12.0.

They have the transmitters connected to an MTL4841, and the output of this MUX is connected to an NPort 6450 from MOXA and then to DeltaV, but they have a lot of communication problems with AMS DM. Is this a valid architecture?

As far as I know, the MOXA converter is not approved for AMS DM; could this be the cause of the communication failures?

Please advise.


Forum Post: RE: DSR1024 SPC Port

Thanks, but not PM10 or PM20i. I am referring to the PM1000/2000/3000 series instead. Thanks.

Forum Post: Shale Impact on the U.S. Economy and Rest of The World


It seems like each day the impact of the unconventional oil & gas revolution in the U.S. continues to amplify across the U.S. and the world. I recently ran across an ExxonMobil blog that stated it's a billion-dollar-per-day shot in the arm for the U.S. economy as a result of new supplies of oil and gas. The blog goes on to state that domestic energy supplies increased 1,300% from January 2010 to April 2012. A more recent article from USA Today pegs the benefits at $2,000 per U.S. household and 3.3 million jobs created by 2020. Energy costs for households have also come down due to cheaper oil and gas. All of this is a great trend that will continue to make the news and drive the economy. Lastly, I saw a Twitter post from the ONS show in Norway this week that stated investments in North Sea E&P were being re-evaluated based on North American shale. Oh, how the world has changed in the last 5 years.

File: Aborting phase through logic


Hi DeltaV Experts

I am operating phases through DeltaV Batch. I have a condition (rupture disc) where I have to abort the phase through logic. For this requirement I added a condition block to the failure monitor composite and a parameter Fl_01 to monitor this condition from a particular step in the run phase.

I wrote logic in a calc block to move Abort into the Xcommand parameter, but the phase would not move to the aborting phase. I found that when operating phases through the Batch Executive, the owner of the phase is DeltaV Batch, and Xcommand can only be written when the owner is external.

Can anyone guide me on how to move from the running phase to the aborting phase through logic? I am using PCSD here.

Please ignore the attachment. I attached it only because the forum would not allow me to post without an attachment.

Regards,

Manzoor

Forum Post: RE: Content Based Analysis during PAS modernization


I am referring to competitive systems like Foxboro, Honeywell, Bailey, and Siemens Moore APACS, although the same analysis can apply to Provox and RS3. What tools did you use to migrate with, and what purpose did they serve? For the complex loops, were functional specifications written?

I am keenly interested in where tools were used, how, and what they are. Where are these tools, and who owns them? We have the ability to do analysis that will tell us the 'class' work (repetitive; think DeltaV module templates) and the custom work. I believe this type of capability would be very useful to a migration team, and I'm trying to promote its use. Is there anyone using this template-and-custom approach?

Often, the granularity of the legacy control configuration is small (think Bailey function blocks).  In Foxboro I/A, the AIN, PID and AOUT blocks simplify down to the PID controller in DeltaV.  It is good that your customer was able to remove 'dead', needless or overly complex functionality when they converted over to DeltaV.

Forum Post: RE: Creating an Alarm List


I also use the bulk edit method to pull the alarms for all my modules. I then use Excel on the .txt file that is generated to create a list for the process engineer in each area to review. We also use the DeltaV alarm help so the spreadsheets are used to perform alarm rationalization. After rationalization you can update the information in the spreadsheets and reverse the process to update the Process Alarms in DeltaV.

Forum Post: RE: Updating Dynamos


If the new dynamo has a different size, the location may be a bit off when using the dynamo update tool. In that case you still need to check every display separately to see if everything is lined up right. The update tool works great anyway.

On the second question: Bulk Edit can't add, remove, or rename parameters on modules; it can only change values of existing parameters. It also can't delete modules from the database.

If the modules are class-based, it's of course simple to modify the class and then change the values with a bulkedit in all the instances.

If the modules are template-based, I think the only way is to replace the modules. Basically delete the old ones and bulk-edit new ones. Tip: work with class-based whenever possible.
