- Category: TufinOS
In February 2020, Tufin released TufinOS 2.21. This version is now available for download in the Tufin Portal (authentication required). TufinOS 2.21 is available as an upgrade package only (tufinos-update-2.21-1395.run.tgz), so if you need to set up a new system, you have to install TufinOS 2.18 from ISO or USB before upgrading to 2.21.
New features and updates in TufinOS 2.21 include:
- PostgreSQL 11 (11.6-1PGDG.rhel6) has been added
- ncdu and tmux rpms from EPEL have been added
- Updated RAID driver for ASR-8805 to version 220.127.116.11012 (GEN-3.5)
- Updated Microsemi Adaptec ARCCONF Command Line Utility to version 3.03.23668 (GEN-3.5)
- Updated PostgreSQL 9.4 to version 9.4.25-1PGDG.rhel6
- Updated PHP to version 5.6.40-1.w6
- Additionally, 35 RPMs have been updated to the latest CentOS versions
Please be aware that only TufinOS 2.19 and 2.21 are now supported by Tufin, i.e. older versions will no longer receive security-related updates.
Additional information about the security fixes included in TufinOS 2.21 is available. When hardening TufinOS, please follow the hints given by Tufin.
Make sure that your TOS version is compatible with the new release of PostgreSQL! Check the Tufin Knowledge Center before trying to upgrade.
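Before upgrading, a quick way to verify the currently installed PostgreSQL version is to parse the output of psql --version. This is only a sketch; the version string below is a sample for illustration, and on a real SecureTrack server you would feed in the actual command output instead:

```shell
# Sample version string - on a real system use: ver_string=$(psql --version)
ver_string="psql (PostgreSQL) 9.4.25"

# The version is the third field, e.g. "9.4.25"; keep only the major part.
major=$(echo "$ver_string" | awk '{print $3}' | cut -d. -f1)
echo "PostgreSQL major version: $major"

if [ "$major" -lt 11 ]; then
    echo "Older than PostgreSQL 11 - verify TOS compatibility before upgrading"
fi
```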
- Category: SecureTrack
Besides the standard functionality, Tufin offers extra tools like the "Reporting Pack". This requires a special library called "PS Scripts". First of all, you need to download the file from the Tufin Portal (authentication required):
- PS Script 5.5.7 (for Reporting Tool) Setup
(credentials for access to SecureTrack and SecureChange are requested)
After downloading this file, it's necessary to install the package - and please remember to create a backup of your Tufin server before doing so!
Then install the library (as root or with sudo on e.g. SecureTrack Server for Reporting Pack):
- # /bin/sh setup_tufin_ps_scripts-5.5.7.run -W
Be sure not to forget the "-W" (upper case) when installing the library. The credentials needed are "Super Admin" for SecureTrack and "Security Administrator" for SecureChange.
To check a successful installation of the library, run the command
# ls /opt/tufin/securitysuite/ps/conf/WEB_ENABLED
If this file exists, everything is fine. You can also check if the service is running using the command
# /etc/init.d/tufin-ps-web status
The service should be running. If not, you may try to start it via CLI.
To check the version of the library, use
# cat /opt/tufin/securitysuite/ps/PS-version
Logs are written to /var/log/ps/Tufin_PS_Logger.log.
If all work is done, you can install Reporting Pack or use the library for Tufin PS or your own scripts.
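The checks above can be wrapped in a small shell function. This is just a sketch; /opt/tufin/securitysuite/ps is the install path used in this article, so adjust it if yours differs:

```shell
# Check whether the PS scripts library looks correctly installed.
# Takes the install directory as its only argument.
check_ps_install() {
    dir="$1"
    if [ -f "$dir/conf/WEB_ENABLED" ]; then
        echo "PS scripts: web mode enabled"
    else
        echo "PS scripts: WEB_ENABLED marker missing - re-run setup with -W"
    fi
    # Print the installed library version, if the version file exists
    if [ -f "$dir/PS-version" ]; then
        cat "$dir/PS-version"
    fi
}

check_ps_install /opt/tufin/securitysuite/ps
```

On a server where the library is installed in web mode, the first line of output should be "PS scripts: web mode enabled", followed by the version string.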
- Category: Version update
Tufin has just released TOS R19-3, the third and final version of the Tufin Orchestration Suite in 2019.
TOS R19-3 is now available as GA, delivering several improvements, e.g.
Change Automation and Orchestration
- Rule Modification Workflow
With this workflow it's possible to modify the Source and Destination fields of an existing rule. Both new and existing objects can be added or removed. This feature is fully integrated into the SecureTrack Policy Browser and comes with full API support.
Supported devices are Check Point R80, Cisco FMC, Palo Alto Panorama, Cisco ASA, and Juniper SRX.
- Group Ticket Notifications
This feature improves teamwork: the requester of a ticket can now specify a group of users that will receive all e-mail notifications.
- Palo Alto Panorama FQDN Objects in Access Request
FQDN objects can now be used, so it is no longer necessary to convert names to IP addresses when creating an Access Request.
- Check Point R80 - Support of IPv6 addresses
Access Requests can now use IPv6 addresses in source and/or destination, for new as well as existing rules. In addition, new IPv6 objects can be created. Manual Target Selection in SecureChange is required.
Devices and Platforms
- Check Point R80 syslog
Usually, Check Point Log/Management Servers deliver their logs to SecureTrack using LEA. Optionally, these logs can now also be sent to SecureTrack via syslog.
- Cisco ACI Visibility
The ACI policy is now shown in SecureTrack, including EPGs, VRFs, Contracts, Subjects, etc., giving an instant view of policy details.
- Cisco ACI Path Analysis
ACI devices are included in SecureTrack Topology, so the traffic flow in and out of the ACI device is shown
- Cisco FMC Visibility
FMC zones are now shown in retrieved FMC rules, e.g. in Policy Browser, View Policy, etc.
Improvements regarding speed of revision retrieval
- PAN Panorama syslog
Panorama can be configured now to send syslog by TCP/TLS instead of UDP
- PAN Panorama Device Groups
Panorama Device Groups (DG) can now be migrated to non-default SecureTrack domains from any level in the group hierarchy, improving management of Domains
- VMware NSX-T
SecureTrack and SecureChange now support NSX-T. Support includes Change Tracking, Cleanup, Violations, Policy Browser, Reports, Topology, etc.
- Check Point R80
- Adding or Updating Managed Devices (CMA or SMC) via API
- Adding new devices (CMA or SMC) via API
- Palo Alto Panorama
- Support of URL Filtering using API
- SecureChange Designer
- Enhancements for Set Rule location via API
- Rule Modification Workflow
- Support of many features regarding the Rule Modification Workflow via API
- Getting Application Interfaces is possible now using API
Further improvements as well as corrections are included.
The latest version of the Tufin Orchestration Suite can be found at the Tufin Portal: https://portal.tufin.com
- Category: SecureTrack
When F5 devices are monitored with Tufin SecureTrack, every part of the configuration can be found there (except ACLs).
It might happen that in SecureTrack > Menu > Settings > Administration > Status the status sign is yellow, stating "Error: Wrong arguments". At first glance, there seems to be a problem with authentication or the F5 version. But the cause isn't necessarily that complicated.
Looking at the client log in /var/log/st, for example, you may find entries like this:
FAULT: 14713 20191221 08:24:06.041 what() -> Error occurred when pulling configuration from the device: Wrong arguments
send_error "\nsent username\n"
send_error "\nsent username\n"
11255 20191221 08:24:05.030 Error occurred when pulling configuration from the device: Wrong arguments
FAULT: 11255 20191221 08:24:05.031 what() -> Error occurred when pulling configuration from the device: Wrong arguments
If you find the error mentioned above, just check the connection. Even if it isn't obvious, a connection timeout might have occurred.
"Wrong arguments" is also displayed if no SSH connection is possible between the SecureTrack server and the F5.
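To see how often the fault occurs, the client logs can simply be grepped for the message. The sketch below runs against an inline sample log (the two lines are taken from the excerpt above); on a real system you would point the grep at the log files in /var/log/st instead:

```shell
# Count "Wrong arguments" faults in a (sample) SecureTrack client log.
logfile=$(mktemp)
cat > "$logfile" <<'EOF'
FAULT: 14713 20191221 08:24:06.041 what() -> Error occurred when pulling configuration from the device: Wrong arguments
11255 20191221 08:24:05.030 Error occurred when pulling configuration from the device: Wrong arguments
EOF

matches=$(grep -c "Wrong arguments" "$logfile")
echo "Found $matches 'Wrong arguments' entries"
rm -f "$logfile"
```

If entries show up, also test basic SSH reachability from the SecureTrack server to the F5, e.g. with ssh -o ConnectTimeout=5 to the device.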
- Category: SecureTrack
When upgrading a Check Point Management, MDS, or CMA from R77.x to R80.x, it seems quite easy to upgrade it in SecureTrack, too. Just go to Menu > Settings > Monitoring and select the device that has been upgraded. The menu on the right side shows the option "Upgrade to R80"; select it and provide the credentials of the API user on the Check Point Management. After that, the device monitoring is changed to R80 and everything runs fine.
But what happens if "st stat" or Menu > Administration > Status shows an error:
myDevice 10.1.1.1 14 CMA 1001 valid Error: Upgrade device to R80 in Settings > Manage Devices > Monitored Devices
The situation seems a little bit confusing - the upgrade has been done in Check Point as well as Tufin SecureTrack, but the status shows an error as if the Management Server has not been upgraded in SecureTrack.
Reason for this error
The reason for this error is that the management type has to be changed in the database - and that change has not taken place. This is not the normal behavior, but it might happen that the change is not recognized by SecureTrack. The relevant type values are:
- SmartCenter R77.x is referred to as cp_smrt_cntr
- SmartCenter R80.x is referred to as cp_smc_r80plus
- CMA R77.x is referred to as cp_cma
- CMA R80.x is referred to as cp_domain_r80plus
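For reference, the mapping above can be expressed as a small helper function. This is only a sketch handling the two type pairs from this article; any other value is returned unchanged:

```shell
# Map an R77.x Check Point management type to its R80.x equivalent.
r80_cp_type() {
    case "$1" in
        cp_smrt_cntr) echo "cp_smc_r80plus" ;;
        cp_cma)       echo "cp_domain_r80plus" ;;
        *)            echo "$1" ;;
    esac
}

r80_cp_type cp_cma   # prints cp_domain_r80plus
```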
First - as always when working on the database: perform a backup (!)
For a Check Point CMA the necessary next procedure looks like this:
- Check the ID of the device using "st stat". This example uses the Management ID 14
- Check the current status of the device:
[root@TufinOS]# psql -Upostgres securetrack -xc "select cp_type from managements where management_id=14"
-[ RECORD 1 ]---
cp_type | cp_cma
- Update the variable in the data base and re-check the status
[root@TufinOS ~]# psql -Upostgres securetrack -xc "update managements set cp_type='cp_domain_r80plus' where management_id=14"
[root@TufinOS ~]# psql -Upostgres securetrack -xc "select cp_type from managements where management_id=14"
-[ RECORD 1 ]--------------
cp_type | cp_domain_r80plus
- Restart the monitored device
[root@TufinOS~]# st restart 14
Stopping SecureTrack process for server myDevice - 10.1.1.1 (Id: 14)
SecureTrack process stopped for server 10.1.1.1 (Id: 14)
SecureTrack for myDevice - 10.1.1.1 (Id: 14) was started successfully
After a few seconds the status should have changed - this can be checked either with "st stat" or in the WebUI.
The error shown above should no longer appear. If other errors are shown, you need to continue troubleshooting.
- Category: TOS classic
If TOS is configured to run as a cluster, a Virtual Cluster IP (VIP) is used for communication with the SecureTrack and/or SecureChange server. In addition, further interfaces are needed to configure a cluster, e.g. for the heartbeat. If the network interface of the heartbeat is down, the cluster will perform a failover. At first glance, this isn't a problem because users can still work via the VIP. But for bringing TOS back into cluster mode with data replication, a maintenance window is recommended: the database sync takes some time, and during this time the VIP is unreachable.
So if a cluster member is moved, e.g. from one switch to another, a failover occurs. If this isn't wanted, failure detection can be temporarily disabled by running this command on the active member:
# hactl --pause-auto-failover
Run the command "hactl status" on both nodes after a few minutes and make sure the status shown is "unmanaged".
Then replace the switch. Once this is done and all cables on the active cluster member are connected again, run the command:
# hactl --resume-auto-failover
After a short time, the status should be checked again using "hactl status". It should be normal again, showing the correct distribution of active/standby members as before.
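The final check can be automated with a small polling loop. This is only a sketch: "hactl status" is the Tufin command from this article, and the wrapper below works with any command that prints a status word, so the loop itself can be tried without a cluster:

```shell
# Poll a status command until its output contains the expected word,
# or give up after a fixed number of attempts.
wait_for_status() {
    expected="$1"; shift
    attempts=10
    while [ "$attempts" -gt 0 ]; do
        if "$@" | grep -q "$expected"; then
            echo "status is '$expected'"
            return 0
        fi
        attempts=$((attempts - 1))
        sleep 1
    done
    echo "timed out waiting for status '$expected'"
    return 1
}

# On the cluster this could be used as, for example:
#   wait_for_status active hactl status
```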