Improve Agentforce data security with Private Connect for Salesforce Data Cloud and Amazon Redshift – Part 3


Data security is a high priority, particularly as organizations face increasing cybersecurity threats. Maintaining the security of customer data is a top priority for AWS and Salesforce. With AWS PrivateLink, Salesforce Private Connect eliminates common security risks associated with public endpoints. Salesforce Private Connect now works with Salesforce Data Cloud to keep your customer data secure when used with key services like Agentforce.

In Part 2 of this series, we discussed the architecture and implementation details of cross-Region data sharing between Salesforce Data Cloud and AWS accounts. In this post, we discuss how to create AWS endpoint services to improve data security with Private Connect for Salesforce Data Cloud.

Solution overview

In this example, we configure PrivateLink for an Amazon Redshift instance to enable direct, private connectivity from Salesforce Data Cloud. AWS recommends that organizations use an Amazon Redshift managed VPC endpoint (powered by PrivateLink) to privately access a Redshift cluster or serverless workgroup. For details about best practices, refer to Enable private access to Amazon Redshift from your client applications in another VPC.

However, some organizations might prefer to manage PrivateLink themselves, for example when a Redshift managed VPC endpoint is not yet available in Salesforce Data Cloud and you need to manage your own PrivateLink connection. This post focuses on configuring self-managed PrivateLink between Salesforce Data Cloud and Amazon Redshift in your AWS account to establish private connectivity.

The following architecture diagram shows the steps for establishing private connectivity between Salesforce Data Cloud and Amazon Redshift in your AWS account.

To set up private connectivity between Salesforce Data Cloud and Amazon Redshift, we use the following resources:

  • A security group for the Network Load Balancer
  • A target group pointing to the Redshift instance
  • An internal Network Load Balancer (NLB)
  • A VPC endpoint service connected to the NLB

Prerequisites

To complete the steps in this post, you must already have Amazon Redshift running in a private subnet and have the permissions to manage it.

Create a security group for the Network Load Balancer

The security group acts as a virtual firewall. The only traffic that reaches the instance is the traffic allowed by the security group rules. To enhance the security posture, you only want to allow traffic to Redshift instances. Complete the following steps to create a security group for your Network Load Balancer (NLB):

  1. On the Amazon VPC console, choose Security groups in the navigation pane.
  2. Choose Create security group.
  3. Enter a name and description for the security group.
  4. For VPC, use the same virtual private cloud (VPC) as your Redshift cluster.
  5. For Inbound rules, add a rule to allow traffic to ingress on the load balancer's listening port 5439.
  6. For Outbound rules, add a rule to allow traffic to your Redshift instance.
  7. Choose Create security group.
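The console steps above can also be scripted with the AWS SDK. The following is a minimal boto3 sketch; the security group name, VPC ID, and source CIDR are placeholder assumptions you would replace with your own values:

```python
def redshift_ingress_rules(port=5439, source_cidr="10.0.0.0/16"):
    """Build the inbound rule set for the NLB security group:
    allow only the Redshift listener port from the given CIDR."""
    return [{
        "IpProtocol": "tcp",
        "FromPort": port,
        "ToPort": port,
        "IpRanges": [{"CidrIp": source_cidr,
                      "Description": "NLB listener for Redshift"}],
    }]

if __name__ == "__main__":
    import boto3  # AWS SDK for Python
    ec2 = boto3.client("ec2")
    # Create the security group in the same VPC as the Redshift cluster
    sg = ec2.create_security_group(
        GroupName="redshift-nlb-sg",            # hypothetical name
        Description="NLB security group for Redshift PrivateLink",
        VpcId="vpc-0123456789abcdef0",          # replace with your VPC ID
    )
    # Allow ingress only on the Redshift listener port
    ec2.authorize_security_group_ingress(
        GroupId=sg["GroupId"],
        IpPermissions=redshift_ingress_rules(),
    )
```

Restricting the source CIDR to the VPC range keeps the security posture tight, matching the guidance above.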

Create a target group

Complete the following steps to create a target group:

  1. On the Amazon EC2 console, under Load balancing in the navigation pane, choose Target groups.
  2. Choose Create target group.
  3. For Choose a target type, select IP addresses.
  4. For Protocol: Port, choose TCP and port 5439 (if your Redshift cluster runs on a different port, change the port accordingly).
  5. For IP address type, select IPv4.
  6. For VPC, choose the same VPC as your Redshift cluster.
  7. Choose Next.
  8. For Enter an IPv4 address from a VPC subnet, enter your Amazon Redshift IP address.

To locate this address, navigate to your cluster details on the Amazon Redshift console, choose the Properties tab, and under Network and security settings, expand VPC endpoint connection details and copy the private address of the network interface. If you're using Amazon Redshift Serverless, navigate to the workgroup home page. The Amazon Redshift IPv4 addresses can be located in the Network and security section under Data access when you choose the VPC endpoint ID.

  9. After you add the IP address, choose Include as pending below, then choose Create target group.
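For repeatable setups, the target group can be created programmatically as well. This boto3 sketch mirrors the console steps; the target group name, VPC ID, and the Redshift private IP are hypothetical values:

```python
def target_group_spec(name, vpc_id, port=5439):
    """Parameters for an IP-type TCP target group matching the console steps."""
    return {
        "Name": name,
        "Protocol": "TCP",
        "Port": port,
        "VpcId": vpc_id,
        "TargetType": "ip",
        "IpAddressType": "ipv4",
    }

if __name__ == "__main__":
    import boto3
    elbv2 = boto3.client("elbv2")
    spec = target_group_spec("Redshift-TargetGroup", "vpc-0123456789abcdef0")
    tg = elbv2.create_target_group(**spec)["TargetGroups"][0]
    # Register the Redshift private IP copied from the console
    elbv2.register_targets(
        TargetGroupArn=tg["TargetGroupArn"],
        Targets=[{"Id": "10.0.1.25", "Port": 5439}],  # hypothetical private IP
    )
```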

Create a load balancer

Complete the following steps to create a load balancer:

  1. On the Amazon EC2 console, choose Load balancers in the navigation pane.
  2. Choose Create load balancer.
  3. Choose Network.
  4. For Load balancer name, enter a name.
  5. For Scheme, select Internal.
  6. For Load balancer address type, select IPv4.
  7. For VPC, use the VPC that your target group is in.
  8. For Availability Zones, select the Availability Zone where the Redshift cluster is running.
  9. For Security groups, choose the security group you created in the previous step.
  10. For Listener details, add a listener that points to the target group created in the last step:
    1. For Protocol, choose TCP.
    2. For Port, use 5439.
    3. For Default action, choose Redshift-TargetGroup.
  11. Choose Create load balancer.

Make sure the registered targets in the target group are healthy before proceeding. Also make sure the target group has a target for all Availability Zones in your AWS Region, or that the NLB has the cross-zone load balancing attribute enabled.

In the load balancer's security settings, make sure Enforce inbound rules on PrivateLink traffic is off.
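A boto3 sketch of the same load balancer setup follows. The load balancer name, subnet, and security group IDs are placeholders; the final call turns off enforcement of inbound rules on PrivateLink traffic, as required above:

```python
def listener_spec(target_group_arn, port=5439):
    """TCP listener forwarding to the Redshift target group."""
    return {
        "Protocol": "TCP",
        "Port": port,
        "DefaultActions": [{"Type": "forward",
                            "TargetGroupArn": target_group_arn}],
    }

if __name__ == "__main__":
    import boto3
    elbv2 = boto3.client("elbv2")
    # Internal NLB in the Redshift cluster's Availability Zone(s)
    lb = elbv2.create_load_balancer(
        Name="redshift-privatelink-nlb",            # hypothetical name
        Scheme="internal",
        Type="network",
        IpAddressType="ipv4",
        Subnets=["subnet-0123456789abcdef0"],       # replace with your subnets
        SecurityGroups=["sg-0123456789abcdef0"],    # SG from the previous step
    )["LoadBalancers"][0]
    tg_arn = elbv2.describe_target_groups(
        Names=["Redshift-TargetGroup"])["TargetGroups"][0]["TargetGroupArn"]
    elbv2.create_listener(LoadBalancerArn=lb["LoadBalancerArn"],
                          **listener_spec(tg_arn))
    # Turn off "Enforce inbound rules on PrivateLink traffic"
    elbv2.set_security_groups(
        LoadBalancerArn=lb["LoadBalancerArn"],
        SecurityGroups=["sg-0123456789abcdef0"],
        EnforceSecurityGroupInboundRulesOnPrivateLinkTraffic="off",
    )
```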

Create an endpoint service

Complete the following steps to create an endpoint service:

  1. On the Amazon VPC console, choose Endpoint services in the navigation pane.
  2. Choose Create endpoint service.
  3. For Load balancer type, choose Network.
  4. For Available load balancers, select the load balancer you created in the last step.
  5. From Supported Regions, select an additional Region if Data Cloud isn't hosted in the same AWS Region as the Redshift instance. For the other settings, leave Acceptance required selected.

If this is selected, later, when the Salesforce Data Cloud endpoint is created to connect to the endpoint service, you will need to come back to this page to accept the connection. If it is not selected, the connection will be established immediately.

  6. For Supported IP address type, select IPv4.
  7. Choose Create.

Next, you need to allow Salesforce principals:

  1. After you create the endpoint service, choose Allow principals.
  2. In another browser tab, navigate to Salesforce Data Cloud Setup.
  3. Under External Integrations, access the new Private Connect menu item.
  4. Create a new private network route to Amazon Redshift.
  5. Copy the principal ID.
  6. Return to the endpoint service creation page.
  7. For Principals to add, enter the principal ID.
  8. Copy the endpoint service name.
  9. Choose Allow principals.
  10. Return to the Salesforce Data Cloud private network configuration page.
  11. For Route Name, enter the endpoint service name.
  12. Choose Save.

The route status should show as Allocating.

If you opted to require acceptance of connections in the earlier step, you now need to accept the connection from Salesforce Data Cloud:

  1. On the Amazon VPC console, navigate to the endpoint service.
  2. On the Endpoint connections tab, locate your pending connection request.
  3. Accept the endpoint connection request from Salesforce Data Cloud.
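The endpoint service workflow can likewise be scripted. In this boto3 sketch, the NLB ARN and the Salesforce principal ARN are placeholder values; the helper filters the connections that are waiting for acceptance:

```python
def pending_endpoint_ids(connections):
    """Filter describe_vpc_endpoint_connections results down to the
    endpoint IDs that are waiting for acceptance."""
    return [c["VpcEndpointId"] for c in connections
            if c.get("VpcEndpointState") == "pendingAcceptance"]

if __name__ == "__main__":
    import boto3
    ec2 = boto3.client("ec2")
    # Create the endpoint service backed by the NLB, requiring acceptance
    svc = ec2.create_vpc_endpoint_service_configuration(
        NetworkLoadBalancerArns=[
            "arn:aws:elasticloadbalancing:us-east-1:111122223333:"
            "loadbalancer/net/redshift-privatelink-nlb/0123456789abcdef"],
        AcceptanceRequired=True,
    )["ServiceConfiguration"]
    # Allow the Salesforce principal copied from the Private Connect page
    ec2.modify_vpc_endpoint_service_permissions(
        ServiceId=svc["ServiceId"],
        AddAllowedPrincipals=["arn:aws:iam::444455556666:root"],  # hypothetical
    )
    # After Salesforce creates its endpoint, accept the pending connection
    conns = ec2.describe_vpc_endpoint_connections(
        Filters=[{"Name": "service-id", "Values": [svc["ServiceId"]]}]
    )["VpcEndpointConnections"]
    ec2.accept_vpc_endpoint_connections(
        ServiceId=svc["ServiceId"],
        VpcEndpointIds=pending_endpoint_ids(conns),
    )
```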

Navigate to the Salesforce Data Cloud setup, wait 30 seconds, then refresh the Private Connect route; the status should show as Ready.

You can now use this route when creating a connection with Amazon Redshift. For more details, refer to Part 1 of this series.

Amazon Redshift federation PrivateLink failover

Now that we have discussed how to configure PrivateLink for use with Private Connect for Salesforce Data Cloud, let's discuss Amazon Redshift federation PrivateLink failover scenarios.

You can choose to deploy your Redshift clusters in three different deployment modes:

  • Amazon Redshift provisioned in a Single-AZ RA3 cluster
  • Amazon Redshift provisioned in a Multi-AZ RA3 cluster
  • Amazon Redshift Serverless

PrivateLink relies on a customer-managed NLB connected to service endpoints using IP address target groups. The target group contains the IP addresses of your Redshift instance. If there is a change in IP address targets, the NLB target group must be updated with the new IP addresses associated with the service. Failover behavior for Amazon Redshift will differ based on the deployment mode you use.

This section describes PrivateLink failover scenarios for these three deployment modes.

Amazon Redshift provisioned in a Single-AZ RA3 cluster

RA3 nodes support provisioned cluster VPC endpoints, which decouple the backend infrastructure from the cluster endpoint used for access. When you create or restore an RA3 cluster, Amazon Redshift uses a port within the ranges 5431–5455 or 8191–8215. When the cluster is set to a port in one of these ranges, Amazon Redshift automatically creates a VPC endpoint in your AWS account for the cluster and attaches network interfaces with a private IP for each Availability Zone in the cluster. For the PrivateLink configuration, you use the IP associated with the VPC endpoint as the target for the frontend NLB. You can identify the IP address of the VPC endpoint on the Amazon Redshift console or by running a describe-clusters query on the Redshift cluster.
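As a sketch of the describe-clusters approach, the following boto3 snippet extracts the VPC endpoint private IPs from the API response; the cluster identifier is hypothetical:

```python
def cluster_endpoint_ips(describe_clusters_response):
    """Collect the private IPs of the network interfaces attached to the
    cluster's VPC endpoints, one per Availability Zone for RA3 clusters."""
    ips = []
    for cluster in describe_clusters_response["Clusters"]:
        for endpoint in cluster.get("VpcEndpoints", []):
            for eni in endpoint.get("NetworkInterfaces", []):
                ips.append(eni["PrivateIpAddress"])
    return ips

if __name__ == "__main__":
    import boto3
    redshift = boto3.client("redshift")
    resp = redshift.describe_clusters(ClusterIdentifier="my-ra3-cluster")
    print(cluster_endpoint_ips(resp))  # IPs to register as NLB targets
```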

Amazon Redshift will not remove a network interface associated with a VPC endpoint unless you add an additional subnet to an existing Availability Zone or remove a subnet using the Amazon Redshift APIs. We recommend that you don't add multiple subnets to an Availability Zone, to avoid disruption. There might be failover scenarios where additional network interfaces are added to a VPC endpoint.

In RA3 clusters, the nodes are automatically recovered and replaced as needed by Amazon Redshift. The cluster's VPC endpoint will not change even when the leader node is replaced.

Cluster relocation is an optional feature that allows Amazon Redshift to move a cluster to another Availability Zone without any loss of data or changes to your applications. When cluster relocation is turned on, Amazon Redshift might choose to relocate clusters in some situations, specifically where issues in the current Availability Zone prevent optimal cluster operation or to improve service availability. You can also invoke the relocation function in cases where resource constraints in a given Availability Zone are disrupting cluster operations. When a Redshift cluster is relocated to a new Availability Zone, the new cluster has the same VPC endpoint, but a new network interface is added in the new Availability Zone. The new private address should be added to the NLB's target group to optimize availability and performance.

In the case that a cluster has failed and can't be recovered automatically, you must initiate a restore of the cluster from a previous snapshot. This action generates a new cluster with a new DNS name, connection string, VPC endpoint, and IP address. You must update the NLB with the new IP for the VPC endpoint of the new cluster.

Amazon Redshift provisioned in a Multi-AZ RA3 cluster

Amazon Redshift supports Multi-AZ deployments for provisioned RA3 clusters. By using Multi-AZ deployments, your Redshift data warehouse can continue operating in failure scenarios when an unexpected event happens in an Availability Zone. A Multi-AZ deployment deploys compute resources in two Availability Zones, and these compute resources can be accessed through a single endpoint. In the case of a failure of the primary nodes, a Multi-AZ cluster promotes the secondary nodes to primary and deploys a new secondary stack in another Availability Zone. The following diagram illustrates this architecture.

Multi-AZ clusters deploy VPC endpoints that point to network interfaces in two Availability Zones, which should be configured as part of the NLB target group. To configure the VPC endpoints in the NLB target group, you can identify the IP addresses of the VPC endpoint using the Amazon Redshift console or by running a describe-clusters query on the Redshift cluster. In a failover scenario, the VPC endpoint IPs will not change, and the NLB doesn't require an update.

Amazon Redshift will not remove a network interface associated with a VPC endpoint unless you add an additional subnet to an existing Availability Zone or remove a subnet using the Amazon Redshift APIs. We recommend that you don't add multiple subnets to an Availability Zone, to avoid disruption.

Amazon Redshift Serverless

Redshift Serverless provides managed infrastructure. You can run the get-workgroup query to get the workgroup's VpcEndpoint IPs. These IPs should be configured in the target group of the PrivateLink NLB. Because this is a managed service, failover is managed by AWS. During an underlying Availability Zone failure, the workgroup might get a new set of IPs. You can periodically query the workgroup configuration or the DNS record for the Redshift cluster to check whether the IP addresses have changed, and update the NLB accordingly.
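A minimal sketch of the get-workgroup approach, assuming a hypothetical workgroup name:

```python
def workgroup_endpoint_ips(get_workgroup_response):
    """Collect the private IPs behind a Redshift Serverless workgroup's
    VPC endpoints; these are the IPs to register as NLB targets."""
    ips = []
    endpoint = get_workgroup_response["workgroup"]["endpoint"]
    for vpce in endpoint.get("vpcEndpoints", []):
        for eni in vpce.get("networkInterfaces", []):
            ips.append(eni["privateIpAddress"])
    return ips

if __name__ == "__main__":
    import boto3
    rss = boto3.client("redshift-serverless")
    resp = rss.get_workgroup(workgroupName="my-workgroup")  # hypothetical name
    print(workgroup_endpoint_ips(resp))
```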

Automating IP address management

In scenarios where Amazon Redshift operations might change the IP address of the endpoint needed for Amazon Redshift connectivity, you can automate the update of NLB network targets by monitoring the results of cluster DNS resolution, using describe-clusters or get-workgroup queries, and using an AWS Lambda function to update the NLB target group configuration.

You can periodically (on a schedule) query the DNS of the Redshift cluster for IP address resolution. Use a Lambda function to compare and update the IP target groups for the NLB. For an example of this solution, see Hostname-as-Target for Network Load Balancers.
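A sketch of such a Lambda function follows. The cluster DNS name and target group ARN are placeholder assumptions; the pure reconcile_targets helper computes the registrations and deregistrations needed to converge the target group:

```python
import socket

def reconcile_targets(current_ips, desired_ips):
    """Return (to_register, to_deregister) so the NLB target group
    converges on the desired set of Redshift IPs."""
    current, desired = set(current_ips), set(desired_ips)
    return sorted(desired - current), sorted(current - desired)

def lambda_handler(event, context):
    """Scheduled entry point: resolve the cluster DNS name and patch the
    target group. The DNS name and target group ARN are hypothetical."""
    import boto3
    elbv2 = boto3.client("elbv2")
    tg_arn = ("arn:aws:elasticloadbalancing:us-east-1:111122223333:"
              "targetgroup/Redshift-TargetGroup/0123456789abcdef")
    # Resolve the current IPs behind the cluster endpoint
    desired = {info[4][0] for info in socket.getaddrinfo(
        "my-cluster.abc123.us-east-1.redshift.amazonaws.com",
        5439, socket.AF_INET)}
    # Read the IPs currently registered with the NLB target group
    current = [t["Target"]["Id"] for t in elbv2.describe_target_health(
        TargetGroupArn=tg_arn)["TargetHealthDescriptions"]]
    to_register, to_deregister = reconcile_targets(current, desired)
    if to_register:
        elbv2.register_targets(
            TargetGroupArn=tg_arn,
            Targets=[{"Id": ip, "Port": 5439} for ip in to_register])
    if to_deregister:
        elbv2.deregister_targets(
            TargetGroupArn=tg_arn,
            Targets=[{"Id": ip, "Port": 5439} for ip in to_deregister])
```

Deregistering stale IPs only after registering the new ones keeps at least one healthy target in place during the swap.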

For legacy DS2 clusters, where the IP address of the leader node must be explicitly monitored, you can configure Amazon CloudWatch metrics to monitor the HealthStatus of the leader node. You can configure the metric to trigger an alarm, which alerts an Amazon Simple Notification Service (Amazon SNS) topic and invokes a Lambda function to reconcile the NLB target group.

For backup and restore patterns, you can create a rule in Amazon EventBridge triggered on the RestoreFromClusterSnapshot API action, which invokes a Lambda function to update the NLB with the new IP addresses of the cluster.
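Assuming CloudTrail is enabled in the account (API calls reach EventBridge via CloudTrail), a sketch of an event pattern for such a rule might look like this:

```json
{
  "source": ["aws.redshift"],
  "detail-type": ["AWS API Call via CloudTrail"],
  "detail": {
    "eventSource": ["redshift.amazonaws.com"],
    "eventName": ["RestoreFromClusterSnapshot"]
  }
}
```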

For a cluster relocation pattern, you can trigger an event based on the Amazon Redshift ModifyCluster availability-zone-relocation API action.

Conclusion

In this post, we discussed how to use AWS endpoint services to improve data security with Private Connect for Salesforce Data Cloud. If you are currently using the Salesforce Data Cloud zero-copy integration with Amazon Redshift, we recommend that you follow the steps in this post to secure the network connection between Salesforce and AWS. Reach out to your Salesforce and AWS support teams if you need additional help implementing this solution.


About the authors

Yogesh Dhimate is a Sr. Partner Solutions Architect at AWS, leading the technology partnership with Salesforce. Prior to joining AWS, Yogesh worked with leading companies, including Salesforce, driving their industry solution initiatives. With over 20 years of experience in product management and solutions architecture, Yogesh brings a unique perspective in cloud computing and artificial intelligence.

Avijit Goswami is a Principal Solutions Architect at AWS specialized in data and analytics. He helps AWS strategic customers build high-performing, secure, and scalable data lake solutions on AWS using AWS managed services and open source solutions. Outside of his work, Avijit likes to travel, hike, watch sports, and listen to music.

Ife Stewart is a Principal Solutions Architect in the Strategic ISV segment at AWS. She has been engaged with Salesforce Data Cloud over the last 2 years to help build integrated customer experiences across Salesforce and AWS. Ife has over 10 years of experience in technology. She is an advocate for diversity and inclusion in the technology field.

Mike Patterson is a Senior Customer Solutions Manager in the Strategic ISV segment at AWS. He has partnered with Salesforce Data Cloud to align business objectives with innovative AWS solutions to achieve impactful customer experiences. In his spare time, he enjoys spending time with his family, sports, and outdoor activities.

Drew Loika is a Director of Product Management at Salesforce and has spent over 15 years delivering customer value through data platforms and services. When not diving deep with customers on what would help them be more successful, he enjoys making, growing, and exploring the great outdoors.
